
Tech News


Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Bhagyashree R
13 Aug 2019
4 min read
Last year, Apple removed the Extended Validation (EV) certificate indicators from Safari on both iOS 12 and Mojave. Now, Google and Mozilla are following suit by removing the EV visual indicators starting from Chrome 77 and Firefox 70.

What are Extended Validation Certificates?

Introduced in 2007, Extended Validation Certificates are issued to applicants after they are verified as a genuine legal entity by a certificate authority (CA). The baseline requirements for an EV certificate are outlined by the CA/Browser Forum. Web browsers show a green address bar when visiting a website that uses an EV certificate, with the company name displayed alongside the padlock symbol. These certificates can often be expensive: DigiCert charges $344 USD per year, Symantec prices its EV certificate at $995 USD a year, and Thawte at $299 USD a year.

Why Chrome and Firefox are removing EV indicators

In a survey conducted by Google, users of the Chrome and Safari browsers were asked how much they trusted a website with and without EV indicators. The results showed that browser identity indicators do not have much effect on users' secure choices: about 85 percent of users did not find anything strange about a Google login page with the fake URL "accounts.google.com.amp.tinyurl.com". Based on these results and prior academic work, the Chrome Security UX team concluded that positive security indicators are largely ineffective. "As part of a series of data-driven changes to Chrome's security indicators, the Chrome Security UX team is announcing a change to the Extended Validation (EV) certificate indicator on certain websites starting in Chrome 77," the team wrote in a Google group. Another reason behind this decision was that the EV indicator takes up valuable screen space. Starting with Chrome 77, the information related to EV certificates will be shown in Page Info, which appears when the lock icon is clicked, instead of in the EV badge (image source: Google).

Citing similar reasons, the team behind Firefox shared their intention to remove EV indicators from Firefox 70 for desktop yesterday. They also plan to add this information to the identity panel instead of showing it on the identity block. "The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing," the team wrote.

Many CAs market EV certificates as something that builds visitor confidence and protects against phishing and identity fraud. Looking at these developments, Troy Hunt, a web security expert and the creator of "Have I Been Pwned?", concluded that EV certificates are now dead. In a blog post, he questioned the CAs: "how long will it take the CAs selling EV to adjust their marketing to align with reality?"

Users have mixed feelings about this change. "Good riddance, IMO. They never meant much, to begin with, the validation procedures were basically "can you pay the fee?", and they only added to user confusion," a user said on Hacker News. Many users, however, believe that EV indicators are valuable for financial transactions.
A user commented on Reddit, "As a financial institution it was always much easier to just say "make sure it says <Bank name> in the URL bar and it's green" when having a customer on the phone than "Please click on settings -> advanced settings -> security -> display certificate and check the value subject"."
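That green-bar identity comes from the certificate's subject field. As a small illustration (not from either announcement), here is a Node.js/TypeScript sketch that prints a site's certificate subject; the host name is a hypothetical placeholder. For an EV certificate, the subject carries the CA-verified legal entity (the O field) alongside the domain:

```typescript
import * as tls from "tls";

// Illustrative only: connect over TLS and print the certificate subject.
// For an EV certificate this includes the verified organization, e.g.
// { C: 'US', O: 'Example Corp', CN: 'www.example.com' }.
const host = "www.example.com"; // hypothetical host
const socket = tls.connect(443, host, { servername: host }, () => {
  console.log(socket.getPeerCertificate().subject);
  socket.end();
});
```

With the EV badge gone, this subject information is exactly what browsers are relegating to the Page Info panel behind the lock icon.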
To know more, check out the official announcements by the Chrome and Firefox teams.

Google Chrome to simplify URLs by hiding special-case subdomains
Flutter gets new set of lint rules to build better Chrome OS apps
Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4


Frenemies: Intel and AMD partner on laptop chip to keep Nvidia at bay

Abhishek Jha
09 Nov 2017
3 min read
For decades, Intel and AMD have remained bitter archrivals. Today, they find themselves teaming up to thwart a common enemy: Nvidia. As Intel revealed its partnership with Advanced Micro Devices (AMD) on a next-generation notebook chip, it marked the first time the two chip giants have collaborated since the '80s.

The proposed chip for thin and lightweight laptops combines an Intel processor and an AMD graphics unit for complex video gaming. The new series of processors will be part of Intel's 8th-generation Core H-series mobile chips, expected to hit the market in the first quarter of 2018. What it means is that Intel's high-performance x86 cores will be combined with AMD Radeon graphics in the same processor package using Intel's EMIB multi-die technology. That is not all. Intel is also bundling the design with built-in High Bandwidth Memory (HBM2) RAM.

The new processor, Intel claims, reduces the usual silicon footprint by about 50%. And with a 'semi-custom' graphics processor from AMD, enthusiasts can look forward to discrete-graphics-level performance for playing games, editing photos or videos, and other tasks that can leverage modern GPU technologies.

What does AMD get?

Having struggled to remain profitable in recent times, AMD has been losing share in the discrete notebook GPU market. The deal could bring additional revenue with increased market share. Most importantly, laptops built with the new processors won't compete with AMD's Ryzen chips (which are also designed for ultrathin laptops). AMD clarified the difference: while the new Intel chips are designed for serious gamers, Ryzen chips (due out at the end of the year) can run games but are not specifically designed for that purpose.

"Our collaboration with Intel expands the installed base for AMD Radeon GPUs and brings to market a differentiated solution for high-performance graphics," said Scott Herkelman, vice president and general manager of AMD's Radeon Technologies Group. "Together we are offering gamers and content creators the opportunity to have a thinner-and-lighter PC capable of delivering discrete performance-tier graphics experiences in AAA games and content creation applications."

While more information will be available in the future, the first machines with the new technology are expected to ship in the first quarter of 2018. Nvidia's stock fell on the news, while both AMD and Intel saw their shares surge. A rivalry that began when AMD reverse-engineered the Intel 8080 microchip in 1975 could still be far from over, but in graphics the two have been rather cordial, each picking the other as the lesser evil over Nvidia. This is why the Intel-AMD laptop chip partnership has a definite future. Currently centered on laptop solutions, it could even stretch to desktops, who knows!


GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool that provides a Git repository manager. This release comes with a Web Application Firewall, Productivity Analytics, a new Environments section, and much more.

What's new in GitLab 12.3?

Web Application Firewall: In GitLab 12.3, the team has shipped the first iteration of the Web Application Firewall built into the GitLab SDLC platform. It focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics: Starting with GitLab 12.3, the team is releasing Productivity Analytics to help teams and their leaders discover best practices for better productivity. It lets them drill into the data and learn insights for future improvements. The group-level analytics workspace can be used to provide insight into performance, productivity, and visibility across multiple projects.

Environments section: This release adds an "Environments" section to the cluster page, giving an overview of all the projects that make use of the Kubernetes cluster.

License compliance: The License Compliance feature can be used to disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts: This release adds new 'n' and 'p' keyboard shortcuts for moving to the next and previous unresolved discussions in merge requests.

System hooks: System hooks allow automation by triggering requests whenever a variety of events take place in GitLab.

Multiple IP subnets: This release introduces the ability to specify multiple IP subnets, so instead of specifying a single range, large organizations can now restrict incoming traffic to their specific needs.

GitLab Runner 12.3: Yesterday, the team also released GitLab Runner 12.3, an open-source project used for running CI/CD jobs and sending the results back to GitLab.

Audit logs: In this release, audit logs for push events are disabled by default to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy, as some features of this release, including Productivity Analytics, are available only to Premium or Ultimate users.

https://twitter.com/gav_taylor/status/1175798696769916932
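Since the system hooks mentioned above are just HTTP callbacks, a receiver can be sketched in a few lines. The snippet below is a minimal, hypothetical Node.js/TypeScript endpoint, not an official GitLab example; the port and the reliance on the event_name field are assumptions about the JSON payload, which varies by event type:

```typescript
import * as http from "http";

// Minimal sketch of a GitLab system hook receiver: GitLab POSTs a JSON
// payload to this endpoint whenever a configured event occurs.
http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body || "{}");
    console.log("system hook event:", event.event_name); // e.g. "project_create"
    res.statusCode = 200;
    res.end();
  });
}).listen(8080); // hypothetical port
```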
To know more about this news, check out the official page.

Other interesting news in cloud and networking:
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
Istio 1.3 releases with traffic management, improved security, and more!


Ionic React released; Ionic Framework pivots from Angular to a native React version

Sugandha Lahoti
15 Oct 2019
3 min read
Yesterday the team behind the Ionic Framework announced the general availability of Ionic React, a native React version of Ionic Framework, pivoting from its traditional Angular-focused app framework. "Ionic React makes it easy to build apps for iOS, Android, Desktop, and the web as a Progressive Web App," states the team in a blog post. It uses TypeScript and combines the core Ionic experience with tooling and APIs tailored to developers. It is a fully-supported, enterprise-ready offering with services, advisory, tooling, and supported native functionality.

@ionic/react projects work like standard React projects, leveraging react-dom and the setup normally found in a Create React App (CRA) app. For routing and navigation, React Router is used under the hood. One difference is the use of TypeScript, which provides a more productive experience. To use plain JavaScript, you can rename files to use a .js extension and remove the type annotations within each file.

Explaining the reason behind choosing React, the team says, "With Ionic, we envisioned being able to build rich JavaScript-powered controls and distribute them as simple HTML tags any web developer could assemble into an awesome app. We realized that building a version of the Ionic Framework for React made perfect sense. Combined with the fact that we had several React fans join the Ionic team over the years, there was a strong desire internally to see Ionic Framework support React as well."

How is Ionic React different from React Native?

The team realized that there was a gap in the React ecosystem that Ionic could fill as an easier mobile and Progressive Web App development solution. Developers were also interested in incorporating it in their existing React Native apps, by building more screens in their app out of a native WebView frame. There were two major reasons why the Ionic team built @ionic/react. First, it is DOM-native and uses the standard react-dom library. In contrast, React Native builds an abstraction on top of iOS and Android native UI controls. The team states, "When we looked at installs for react-dom compared to react-native, it was clear to us that vastly more React development was happening in the browser and on top of the DOM than on top of the native iOS or Android UI systems."

Secondly, the Ionic team is heavily invested in Progressive Web Apps, most notably through the Stencil project. React Native, on the other hand, does not officially support Progressive Web Apps; PWAs are, at best, an afterthought in the React Native ecosystem.

@ionic/react has been well appreciated by developers on Twitter.

https://twitter.com/dipakcreation/status/1183974237125693441
https://twitter.com/MichaelW_PWC/status/1183836080170323968
https://twitter.com/planetoftheweb/status/1183809368934043653
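To give a sense of the developer experience, here is a minimal sketch of an @ionic/react page, assuming a standard CRA-style TypeScript setup; the page content is made up, while the component names come from the @ionic/react package:

```tsx
import React from "react";
import { IonApp, IonContent, IonHeader, IonPage, IonTitle, IonToolbar } from "@ionic/react";

// Plain React rendered through react-dom, with Ionic's UI components
// providing the native-feeling header, toolbar, and content areas.
const Home: React.FC = () => (
  <IonApp>
    <IonPage>
      <IonHeader>
        <IonToolbar>
          <IonTitle>Hello Ionic React</IonTitle>
        </IonToolbar>
      </IonHeader>
      <IonContent>
        <p>This is a standard React component tree in the browser DOM.</p>
      </IonContent>
    </IonPage>
  </IonApp>
);

export default Home;
```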
You can go through Ionic's blog for additional information and for getting started.

Ionic React RC is now out!
Ionic 4.1 named Hydrogen is out!
React Native Vs Ionic: Which one is the better mobile app development framework?


Google announces early access of ‘Game Builder’, a platform for building 3D games with zero coding

Bhagyashree R
17 Jun 2019
3 min read
Last week, a team within Area 120, Google's workshop for experimental products, introduced an experimental prototype of Game Builder. It is a "game building sandbox" that enables you to build and play 3D games in just a few minutes. It is currently in early access and is available on Steam.

https://twitter.com/artofsully/status/1139230946492682240

Here's how Game Builder makes "building a game feel like playing a game" (image source: Google). Following are some of the features that Game Builder comes with:

Everything is multiplayer: Game Builder's always-on multiplayer feature allows multiple users to build and play games simultaneously. Your friends can also play the game while you are still working on it.

Thousands of 3D models from Google Poly: You can find thousands of free 3D models (such as a rocket ship, a synthesizer, or an ice cream cone) to use in your games from Google Poly. You can also "remix" most of the models using the Tilt Brush and Google Blocks application integration to make them fit your game. Once you find the right 3D model, you can easily and instantly use it in your game.

No code, no compilation required: This platform is designed for all skill levels, from enabling players to build their first game to providing game developers a faster way to realize their game ideas. Game Builder's card-based visual programming allows you to bring your game to life with bare-minimum knowledge of programming. You just need to drag and drop cards to answer questions like "How do I move?" You can also create your own cards with Game Builder's extensive JavaScript API, which allows you to script almost everything in the game. As the code is live, you just need to save the changes and you are ready to play the game without any compilation.

Apart from these features, you can also create levels with terrain blocks, edit the physics of objects, create lighting and particle effects, and more. Once the game is ready, you can share your creation on the Steam Workshop.

Many people are commending this easy way of game building but also think that this is nothing new; we have seen such platforms in the past, for instance, GameMaker by YoYo Games. "I just had a play with it. It seems very well thought out. It has a very nice tutorial that introduces all the basic concepts. I am looking forward to trying out the multiplayer aspect, as that seems to be the most compelling thing about it," a Hacker News user commented.

You can read Google's official announcement for more details.

Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football
Google Walkout organizer, Claire Stapleton resigns after facing retaliation from management
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language


Ionic framework announces Ionic 4 Beta

Sugandha Lahoti
30 Jul 2018
3 min read
Ionic has been a popular UI library among web developers because it is completely framework-agnostic. Now the team has announced the much-anticipated Ionic 4 beta release, with a focus on performance, build-time improvements, theming, multi-framework capabilities, and more. Although the release is in beta, Ionic invites developers to test it and migrate their existing ionic-angular apps.

You need to install the latest version of the Ionic CLI (4.0.0) using:

npm install -g ionic

Since v4 is still in beta, creating projects with it requires a flag when starting a new Ionic app:

ionic start myApp tabs --type=angular

Your v4 beta app will be created using ng cli conventions and the new Ionic 4 components. Let's talk about the features Ionic 4 comes packed with.

Focus on web standards: Ionic 4 has been rebuilt using standard Web APIs, and each component is packaged as a standards-compliant Web Component. With this standardization, the framework now relies solely on APIs with native browser support, keeping the public API for each component stable. The Ionic team has also developed and open-sourced a Web Component compiler, Stencil, to make it easier to build each component as a Web Component. Ionic 4 fully embraces modern Web APIs such as Custom Elements, CSS Variables, and Shadow DOM.

Updates for Angular: This release also adopts new Angular tooling and features, following Angular standards and conventions, to make Ionic 4 Angular's leading mobile solution. Angular developers can now use the Angular CLI directly for Ionic apps; ionic-app-scripts are replaced with the Angular CLI and Router.

Changes to documentation: Ionic 4 comes with completely redesigned Ionic Framework documentation that loads and navigates faster and is easier to update and maintain. There are more examples and previews, along with more code snippets. The new docs are built with Stencil.

Other improvements:
New Ionicons 4.0 with reduced sizes and brand-new icon forms reflecting the latest iOS and Material Design styles.
Ionic Native 5.0 beta has been upgraded to be framework-independent. Check out the new Native API docs.
CLI 4.0 is heavily refactored, offering powerful Cordova integration with livereload, custom schematics for generators, and support for multiple projects.
New CLI docs provide more information in a cleaner and easier-to-read layout.
Shadow DOM makes it easy to reduce the amount of client-side code by embracing native browser APIs and web standards. Additionally, Shadow DOM makes Ionic components easy to consume from any web app by encapsulating their styles.
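Because Shadow DOM is a native browser API, the encapsulation Ionic 4 relies on can be shown without any framework at all. Here is a quick vanilla sketch; the element name is a hypothetical example, not an Ionic component:

```typescript
// A custom element whose markup and styles live inside a shadow root:
// its CSS cannot leak out, and page CSS cannot leak in.
class FancyGreeting extends HTMLElement {
  constructor() {
    super();
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `
      <style>p { color: rebeccapurple; }</style>
      <p>Hello from inside a shadow root!</p>
    `;
  }
}
customElements.define("fancy-greeting", FancyGreeting);
// Usage in HTML: <fancy-greeting></fancy-greeting>
```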
You can learn more about the Ionic 4 beta release in the Ionic 4 beta docs. The Migration Guide and the Installation Guide are also available.

Ionic Components
Creating Our First App with Ionic
How to use SQLite with Ionic to store data?

OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

Natasha Mathur
15 Feb 2019
3 min read
OpenAI researchers demonstrated a new AI model yesterday, called GPT-2, that is capable of generating coherent paragraphs of text without needing any task-specific training. In other words, give it the first line of a story, and it'll form the rest. Apart from generating articles, it can also perform rudimentary reading comprehension, summarization, machine translation, and question answering.

GPT-2 is an unsupervised language model comprising 1.5 billion parameters, trained on a dataset of 8 million web pages. "GPT-2 is simply trained to predict the next word in 40GB of internet text," says the OpenAI team. The team states that it is superior to other language models trained on specific domains (like Wikipedia, news, or books) as it doesn't need to use these domain-specific training datasets. For language-related tasks such as question answering, reading comprehension, and summarization, GPT-2 can learn directly from raw text and doesn't require any task-specific training data.

The OpenAI team states that the GPT-2 model is 'chameleon-like' and easily adapts to the style and content of the input text. However, the team has observed certain failures in the model, such as repetitive text, world modeling failures, and unnatural topic switching. Finding a good sample depends on the model's familiarity with that sample's context. For instance, when the model is prompted with topics that are 'highly represented in data', like Miley Cyrus or Lord of the Rings, it is able to generate reasonable samples 50% of the time. On the other hand, the model performs poorly on highly technical or complex content.

The OpenAI team envisions the use of GPT-2 in the development of AI writing assistants, advanced dialogue agents, unsupervised translation between languages, and enhanced speech recognition systems. It has also specified the potential misuses of GPT-2, as it can be used to generate misleading news articles and automate the large-scale production of fake and phishing content on social media. Due to the concerns related to this misuse of language-generating models, OpenAI has decided to release only a 'small' version of GPT-2, with its sampling code and a research paper for researchers to experiment with. The dataset, training code, and GPT-2 model weights have been excluded from the release.

The OpenAI team states that this release strategy will give them and the overall AI community time to discuss the implications of such systems more deeply. It also wants governments to take initiatives to monitor the societal impact of AI technologies and to track the progress of capabilities in these systems. "If pursued, these efforts could yield a better evidence base for decisions by AI labs and governments regarding publication decisions and AI policy more broadly," states the OpenAI team.

Public reaction to the news is largely positive; however, not everyone is okay with OpenAI's release strategy, and some feel that the move signals 'closed AI' and propagates the 'fear of AI':

https://twitter.com/chipro/status/1096196359403712512
https://twitter.com/ericjang11/status/1096236147720708096
https://twitter.com/SimonRMerton/status/1096104677001842688
https://twitter.com/AnimaAnandkumar/status/1096209990916833280
https://twitter.com/mark_riedl/status/1096129834927964160
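"Predict the next word" is the entire generation mechanism here. The sketch below illustrates that autoregressive loop conceptually; it is not OpenAI's code, and predictNextWord is a hypothetical stand-in for the trained model:

```typescript
// Conceptual sketch of autoregressive generation: the model repeatedly
// predicts the next word given everything produced so far.
declare function predictNextWord(context: string[]): string; // hypothetical model call

function generate(prompt: string, maxWords: number): string {
  const words = prompt.split(" ");
  for (let i = 0; i < maxWords; i++) {
    // Conditioning on the full context is what keeps the output coherent
    // with the style and content of the prompt.
    words.push(predictNextWord(words));
  }
  return words.join(" ");
}
```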
For more information, check out the official OpenAI GPT-2 blog post.

OpenAI charter puts safety, standards, and transparency first
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
OpenAI builds reinforcement learning based system giving robots human like dexterity


Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11th December, at the KubeCon+CloudNativeCon conference held in Seattle, Grafana Labs announced the release of 'Loki', a horizontally-scalable, highly-available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. Unlike other log aggregation systems, Loki does not index the contents of the logs, but rather a set of labels for each log stream. Storing compressed, unstructured logs and indexing only metadata makes it cost-effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels they are already using with Prometheus. Loki can store Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

Features of Loki:
Loki is optimized to search, visualize, and explore a user's logs natively in Grafana. It is optimized for Grafana, Prometheus, and Kubernetes.
Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana.
Users can streamline incident response, switching between metrics and logs using the same Kubernetes labels they are already using with Prometheus.
Loki is open-source alpha software with a static binary and no dependencies.
Loki can be used outside of Kubernetes, but the team says their initial use case is "very much optimized for Kubernetes".
With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is also possible to manually label log streams, and the team will be exploring integrations to make Loki "play well with the wider ecosystem".

Twitter is buzzing with positive comments for Grafana. Users are pretty excited about this release, complimenting Loki's cost-effectiveness and ease of use.

https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using Grafana's free hosted demo, running it locally with Docker, or building from source.

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Uber open sources its large scale metrics platform, M3 for Prometheus
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps


Red Hat joins the RISC-V foundation as a Silver level member

Vincy Davis
12 Aug 2019
2 min read
Last week, RISC-V announced that Red Hat is the latest major company to join the RISC-V Foundation. Red Hat has joined as a Silver level member, which carries US$5,000 in dues per year and includes 5 discounted registrations for RISC-V workshops. RISC-V states in the official blog post that "As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future."

RISC-V is a free and open-source hardware instruction set architecture (ISA) which aims to enable extensible software and hardware freedom in computing design and innovation. As a member of the RISC-V Foundation, Red Hat now officially agrees to support the use of RISC-V chips. As no major RISC-V software or hardware has been released yet, member companies will continue to use both Arm and RISC-V chips.

Read More: RISC-V Foundation officially validates RISC-V base ISA and privileged architecture specifications

In January, Raspberry Pi also joined the RISC-V Foundation, though it has not announced whether it will release a RISC-V developer board instead of using Arm-based chips. IBM has been a RISC-V Foundation member for many years. In October last year, Red Hat, the major distributor of open-source software and technology, was acquired by IBM for $34 billion, with an aim to deliver a next-generation hybrid multi-cloud platform. It follows that IBM would want Red Hat to join the RISC-V Foundation as well. Other tech giants like Google, Qualcomm, Samsung, and Alibaba are also part of the RISC-V Foundation.

Alibaba's chipmaker launches open source RISC-V based 'XuanTie 910 processor' for 5G, AI, IoT and self-driving applications
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
AdaCore joins the RISC-V Foundation, adds support for C and Ada compilation


Vevo’s YouTube account Hacked: Popular videos deleted

Vijin Boricha
12 Apr 2018
2 min read
In this ever-growing technology era, one has to ensure the data they put on the internet is in safe hands. No matter which platform you use to share data, there is always a risk of your data being misused. Recently, a group of hackers managed to breach Vevo's YouTube channel, taking down its most-watched videos. The security breach alarmed a lot of viewers, who witnessed something unexpected when searching for popular music videos like 'Despacito'. The hackers not only took down these videos but also replaced them with a different thumbnail and video title. Apparently, the thumbnail picture used was of a masked gang with guns, taken from the Netflix show Casa de Papel, and the video titles contained the hackers' nicknames (Prosox and Kuroi'sh).

Immediately after this news spread like wildfire, YouTube clarified that it was Vevo that was hacked, not YouTube. Vevo is owned by the big three record companies in the United States: Warner Music Group, Universal Music Group, and Sony Music Entertainment. Vevo only hosts music videos from artists signed to Sony Music Entertainment and Universal Music Group, and those are published on YouTube.

YouTube also pointed out a big difference between the two platforms: anyone with a Google account can upload a video to YouTube's mainstream, but this isn't the case for Vevo. Vevo is managed by administrators responsible for uploading videos to the website and the Vevo YouTube channel. This means only authorized personnel have access to Vevo's platform, which is broadcast on YouTube, and these personnel do not have access to the rest of YouTube overall. It was Vevo's servers that were hacked, as all the affected videos came from that server. Since this attack targeted specific music artists, it is still unclear whether the hackers got through individual artist accounts or achieved a wider breach of Vevo's accounts. So far, only one hacker has claimed that they used scripts to alter video titles.

Vevo has already started fixing its security breaches, claiming that the affected videos and catalog have been restored to full working order. It is also currently investigating the source of the breach. You can learn more about this developing story, originally reported by the BBC.

Check out other latest news:
Cryptojacking is a growing cybersecurity threat, report warns
Top 5 cloud security threats to look out for in 2018

Responsible tech leadership or climate washing? Microsoft hikes its carbon tax and announces new initiatives to tackle climate change

Sugandha Lahoti
17 Apr 2019
5 min read
Microsoft is taking a stand against climate devastation by hiking its internal carbon tax in a new sustainability drive. On Tuesday, the company announced that it is nearly doubling its internal carbon fee to $15 per metric ton on all carbon emissions. The company introduced the internal carbon tax back in 2012; the fee is charged based on energy use from the company's data centers, offices, and factories, and emissions from its employees' business air travel. The funds from this higher fee will maintain Microsoft's carbon neutrality and help meet its sustainability goals.

https://twitter.com/satyanadella/status/1118241283133149184

Microsoft is aiming to power its data centers with 70% renewable energy by 2023. For comparison, Google reached 100% renewable energy for its global operations, including both data centers and offices, in 2017. In April this year, Apple announced that its global facilities are powered with 100 percent clean energy; this achievement includes retail stores, offices, data centers, and co-located facilities in 43 countries. Amazon has been the slow one in this race: although it announced that it would power its data centers with 100 percent renewable energy, Amazon has reportedly slowed its efforts since 2018, using only 50 percent.

Microsoft has started construction of 17 new buildings at its Washington headquarters. These buildings will run on 100 percent carbon-free electricity, and the amount of carbon associated with their construction materials will be reduced by at least 15 percent, with a goal of reaching 30 percent. This will be monitored through the Embodied Carbon Calculator for Construction (EC3), a new tool to track the carbon emissions of raw building materials. What is missing from this plan is a complete transition off of fossil fuels, rather than relying on carbon offsets.

Microsoft is also joining the Climate Leadership Council (CLC), an international policy institute which promotes a national carbon pricing approach. "In addition to our internal carbon tax," Microsoft says, "we supported the recent Washington state ballot measure on pricing carbon and believe it's time for a robust national discussion on carbon pricing to lower emissions in an economically sound way."

Microsoft is also aggregating and hosting environmental data sets on its cloud platform, Azure, and making them publicly available. These data sets, Microsoft notes, "are large government datasets [that] contain satellite and aerial imagery, among other things, and require petabytes of storage. By making them available in our cloud, we will advance and accelerate the work of grantees and researchers around the world." Finally, the company will also scale up the work it does with other nonprofits and companies tackling environmental issues through their own data and Artificial Intelligence expertise.

Responsible tech leadership or climate washing?

Although Microsoft plans to address quite a number of climate change and sustainability issues, what is missing are commitments to structural, business-goal-level changes. A report by Gizmodo highlights the lengths that Google, Microsoft, Amazon, and other tech companies are going to in order to help the oil industry accelerate the climate crisis, and their continued profits from this process. Per Gizmodo, Bill Gates heads a $1 billion climate action fund and has published his own point-by-point plan for fighting climate change.
Notably absent from that plan is "Empowering Oil & Gas with AI". Microsoft is two years into a seven-year deal, rumored to be worth over a billion dollars, to help Chevron, one of the world's largest oil companies, better extract and distribute oil. Microsoft Azure has also partnered with Equinor, a multinational energy company, to provide data services in a deal worth hundreds of millions of dollars, and Microsoft has partnered with ExxonMobil to help it triple oil production in Texas and New Mexico. Instead of profiting from these deals, Microsoft could prioritize climate impacts in business decisions, including ending partnerships with fossil fuel companies that accelerate oil and gas exploration and extraction.

https://twitter.com/MsWorkers4/status/1098693994903552000
https://twitter.com/MsWorkers4/status/1118540637899354113

Last week, over 4,520 Amazon employees signed an open letter addressed to Jeff Bezos and the Amazon board of directors asking for a company-wide action plan to address climate change and an end to the company's reliance on dirty energy resources. Their demands: "define public goals and timelines to reduce emissions; complete ban from using fossil fuels; ending partnerships with fossil fuel companies; reducing harm caused by a company's operations to vulnerable communities first; advocacy for local, federal, and international policies to reduce carbon emissions and fair treatment of all employees during extreme weather events linked to climate change."

Microsoft Workers 4 Good, who created their own petition asking Microsoft to do better, endorsed the stand taken by Amazon employees and called on all employees to encourage their employers to act on climate change. Microsoft's closed, employee-only petition was launched in February, asking the company to help align employees' retirement investments with Microsoft's sustainability mission.

https://twitter.com/MsWorkers4/status/1092942849522323456

4,520+ Amazon employees sign an open letter asking for a "company-wide plan that matches the scale and urgency of climate crisis"
Minecraft is serious about global warming, adds a new (spigot) plugin to allow changes in climate mechanics.
Google moving towards data centers with 24/7 carbon-free energy


Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

Amrata Joshi
20 May 2019
4 min read
Last week, the team at Minecraft introduced a new AR-based game called 'Minecraft Earth', which is free for Android and iOS users. The most striking feature of Minecraft Earth is that it builds on the real world with augmented reality; it will likely remind you of Pokémon Go.

https://twitter.com/minecraftearth/status/1129372933565108224

Minecraft has around 91 million active players, and now Microsoft is looking to take the Pokémon Go concept to the next level by letting Minecraft players create and share whatever they've made in the game with friends in the real world. Users can build something in Minecraft on their phones and then drop it into their local park for all their friends to see at the same location. The game aims to transform single-user AR gaming into multi-user gaming, letting users access a virtual world that's shared by everyone.

Read Also: Facebook launched new multiplayer AR games in Messenger

Minecraft Earth will be available in beta on iOS and Android this summer. The game brings modes like creative, which has unlimited blocks and items, and survival, where you lose all your items when you die.

Torfi Olafsson, game director of Minecraft Earth, explains, "This is an adaptation, this is not a direct translation of Minecraft. While it's an adaptation, it's built on the existing Bedrock engine so it will be very familiar to existing Minecraft players. If you like building Redstone machines, or you're used to how the water flows, or how sand falls down, it all works." Olafsson further added, "All of the mobs of animals and creatures in Minecraft are available, too, including a new pig that really loves mud. We have tried to stay very true to the kind of core design pillars of Minecraft, and we've worked with the design team in Stockholm to make sure that the spirit of the game is carried through."

Players have to venture out into the real world to collect things, just like in Pokémon Go. Minecraft Earth has something similar to pokéstops, called "tapables", which are randomly placed in the world around the player. They are designed to give players rewards, and players need to collect as many of them as possible in order to get resources and items for building vast structures in the building mode.

The maps in this game are based on OpenStreetMap, which has allowed Microsoft to place Minecraft adventures into the world. On the Minecraft Earth map, these adventures spawn dynamically and are designed for multiple people to get involved in. Players can play together while sitting side by side to experience the same adventure at the exact same time and spot. They can also fight monsters, break down structures for resources together, and even stand in front of a friend to block them from physically killing a virtual sheep. Players can even see the tools that fellow players have in their hands on their phone's screen, alongside their username.

Microsoft is also using its Azure Spatial Anchors technology in Minecraft Earth, which uses machine vision algorithms so that real-world objects can be used as anchors for digital content. Niantic, the developer of Pokémon Go, recently had to settle a lawsuit with angry homeowners who had pokéstops placed near their houses. What happened with Pokémon Go could be a threat to games like Minecraft Earth too, as there are many challenges in bringing augmented reality into private spaces.
Saxs Persson, creative director of Minecraft, said, "There are lots of very real challenges around user-generated content. It's a complicated problem at the scale we're talking about, but that doesn't mean we shouldn't tackle it."

https://twitter.com/Toadsanime/status/1129374278384795649
https://twitter.com/ExpnandBanana/status/1129419087216562177
https://twitter.com/flamnhotsadness/status/1129429075490160642
https://twitter.com/pixiebIush/status/1129455271833550848

To know more about Minecraft Earth, check out Minecraft's page.

Game rivals, Microsoft and Sony, form a surprising cloud gaming and AI partnership
Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players
OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers


Introducing Firefox Replay, a tool that allows Firefox tabs to record, replay, and rewind their behavior

Bhagyashree R
02 Dec 2019
3 min read
Mozilla is constantly putting effort into improving Firefox's devtools. One such effort is Firefox Replay, an experimental tool that allows Firefox content processes to record their behavior so that it can be replayed and rewound later. The main highlight of Firefox Replay is the "code timeline" that enables you to scan through every code execution at a glance. Along with execution points, the timeline also shows exceptions, events, and network requests in real time. It also allows you to save your recordings and pick up where you left off afterward.

How Firefox Replay works

The record-and-replay behavior is achieved by "controlling the non-determinism in the browser." Initially, it records non-deterministic behaviors (intra-thread and inter-thread) and then replays them to "force the browser to behave deterministically." Firefox Replay includes IPC integration to enable communication between a recording or replaying process and the chrome process. Its rewind infrastructure allows a replaying process to restore a previous state, and its debugger integration enables the JS debugger to read the required information from a replaying process and control the process's execution.

Firefox Replay is not officially released yet; however, Mac users can give it a try by downloading the nightly builds. Since it is still experimental, Firefox Replay is disabled by default. You can turn it on with the 'devtools.recordreplay.enabled' preference.

Read also: Firefox Nightly browser: Debugging your app is now fun with Mozilla's new 'time travel' feature

The team is working on support for other platforms as well. "Windows port work is underway but is not yet working. The difficulties are in figuring out the set of system library APIs to intercept, in getting the memory management and dirty memory parts of the rewind infrastructure to work, and in handling the different graphics and IPC pathways on different platforms," the official doc reads.

In a discussion on Hacker News, many users were excited to try out this tool. A user commented, "This might be enough to get me to use Firefox to develop with. This could be huge for its market share, a big part of the reason chrome was able to become so popular was because of how good its devtools were (compared to the competition at the time). Firefox definitely managed to catch up but not before lots of devs switched to chrome and stopped checking for compatibility with Firefox." "This will be an absolute game-changer for web development. I am currently working on a really simplified version of this but as a chrome extension. We deal with a lot of real-time data and have been facing some timing issues (network and user input) which is really hard to reproduce," another user added.

Check out Mozilla's official docs to know more in detail.

Firefox 70 released with better security, CSS, and JavaScript improvements
The new WebSocket Inspector will be released in Firefox 71
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Introducing Deon, a tool for data scientists to add an ethics checklist

Natasha Mathur
06 Sep 2018
5 min read
DrivenData has come out with a new tool, named Deon, which allows you to easily add an ethics checklist to your data science projects. Deon is aimed at pushing the conversation about ethics in data science, machine learning, and artificial intelligence by providing actionable reminders to data scientists. According to the Deon team, "it's not up to data scientists alone to decide what the ethical course of action is. This has always been a responsibility of organizations that are part of civil society. This checklist is designed to provoke conversations around issues where data scientists have particular responsibility and perspective."

Deon comes with a default checklist, but you can also develop your own custom checklists by removing items and sections, or marking items as N/A depending on the needs of the project. There are also real-world examples linked to each item in the default checklist. To run Deon on your data science projects, you need Python 3 or greater. Let's now discuss the two types of checklists that come with Deon: default and custom.

Default checklist

The default checklist comprises sections on Data Collection, Data Storage, Analysis, Modeling, and Deployment.

Data Collection: This section covers informed consent, collection bias, and limiting PII exposure. Informed consent includes a mechanism for gathering consent where users have a clear understanding of what they are consenting to. Collection bias checks for sources of bias introduced during data collection and survey design. Lastly, limiting PII exposure covers ways to minimize the exposure of personally identifiable information (PII).

Data Storage: This section covers data security, the right to be forgotten, and a data retention plan. Data security refers to a plan to protect and secure data. Right to be forgotten includes a mechanism by which an individual can have his/her personal information removed. Data retention consists of a plan to delete the data when it is no longer needed.

Analysis: This section covers missing perspectives, dataset bias, honest representation, privacy in analysis, and auditability. Missing perspectives addresses blind spots in data analysis via engagement with relevant stakeholders. Dataset bias means examining the data for possible sources of bias and taking steps to mitigate or address them. Honest representation checks whether visualizations, summary statistics, and reports honestly represent the underlying data. Privacy in analysis ensures that data with PII is not used or displayed unless necessary for the analysis. Auditability refers to producing an analysis that is well documented and reproducible.

Modeling: This section covers proxy discrimination, fairness across groups, metric selection, explainability, and communicating bias. Proxy discrimination is about ensuring that the model does not rely on variables or proxies that are discriminatory. Fairness across groups cross-checks whether the model results have been tested for fairness with respect to different affected groups. Metric selection considers the effects of optimizing for the defined metrics as well as additional metrics. Explainability is about explaining the model's decisions in understandable terms. Communicating bias makes sure that the shortcomings, limitations, and biases of the model have been properly communicated to relevant stakeholders.
Deployment: This section covers redress, roll back, concept drift, and unintended use. Redress means discussing with your organization a plan for responding in case users are harmed by the results. Roll back covers having a way to turn off or roll back the model in production when required. Concept drift refers to relationships between input and output data changing over time; this part of the checklist reminds the user to test and monitor for concept drift, to ensure the model remains fair over time. Unintended use prompts the user about steps to be taken to identify and prevent unintended uses and abuse of the model.

Custom checklists

For projects with particular concerns, it is recommended to create your own checklist.yml file. Custom checklists are required to follow the same schema as checklist.yml: they need a top-level title, which is a string, and sections, which are a list. Each section in the list must have a title, a section_id, and then a list of lines. Each line must include a line_id, a line_summary, and a line string which is the content. When changing the default checklist, keep in mind that Deon's goal is to have checklist items that are actionable; users are therefore advised to avoid suggesting items that are vague (e.g., "do no harm") or extremely specific (e.g., "remove social security numbers from data").
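Putting that schema together, a minimal custom checklist.yml might look like the sketch below; the title, IDs, and wording are hypothetical placeholders, not Deon's defaults:

```yaml
title: Project Ethics Checklist
sections:
  - title: Data Collection
    section_id: A
    lines:
      - line_id: A.1
        line_summary: Informed consent
        line: >-
          If there are human subjects, have they given informed consent with
          a clear understanding of what they are consenting to?
```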
For more information, be sure to check out the official DrivenData blog post.

The Cambridge Analytica scandal and ethics in data science
OpenAI charter puts safety, standards, and transparency first
20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017


Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL

Bhagyashree R
13 Sep 2019
3 min read
Yesterday, Apple announced that Safari Technology Preview 91 now supports the beta version of the new WebGPU graphics API and its shading language, Web Shading Language (WSL). You can enable the WebGPU beta support by selecting Experimental Features > WebGPU in the Developer menu.

The WebGPU JavaScript API

WebGPU is a new graphics API for the web that aims to provide "modern 3D graphics and computation capabilities." It is a successor to WebGL, the JavaScript API that enables 3D and 2D graphics rendering within any compatible browser without the need for a plug-in. It is being developed in the W3C GPU for the Web Community Group with engineers from Apple, Mozilla, Microsoft, Google, and others.

Read also: WebGL 2.0: What you need to know

Comparing WebGPU and WebGL

WebGPU differs from WebGL in that it is not a direct port of any existing native API; what the two share is that both are accessed through JavaScript (though the team does plan to make WebGPU accessible through WebAssembly in the future). In WebGL, rendering a single object requires writing a series of state-changing calls. WebGPU, by contrast, combines all the state-changing calls into a single object, named the pipeline state object. It validates the state after the pipeline is created to prevent expensive state analysis inside the draw call. Also, wrapping an entire pipeline state in a single function call reduces the number of exchanges between JavaScript and WebKit's C++ browser engine. Similarly, resources in WebGL are bound one-by-one, while WebGPU batches them up into bind groups. The team explains, "In both of these examples, multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation. Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application."

The main focus of WebGPU is to provide improved performance and ease of use as compared to WebGL. The team compared the performance of the two using the 2D graphics benchmark MotionMark. The performance test they wrote measured how many triangles, each with different properties, could be rendered while maintaining 60 frames per second; each triangle was rendered with a different draw call and bind group. WebGPU showed substantially better performance than WebGL (chart source: Apple).

WHLSL is now renamed to WSL

In November last year, Apple proposed a new shading language for WebGPU named Web High-Level Shading Language (WHLSL), which was source-compatible with HLSL. After receiving community feedback, they updated the language to be compatible with OpenGL Shading Language (GLSL), a language commonly used among web developers. Apple renamed this version of the language Web Shading Language (WSL) and describes it as "simple, low-level, and fast to compile."

Read also: Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

"There are many Web developers using GLSL today in WebGL, so a potential browser accepting a different high-level language, like HLSL, wouldn't suit their needs well. In addition, a high-level language such as HLSL can't be executed faithfully on every platform and graphics API that WebGPU is designed to execute on," the team wrote.
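As a rough illustration of the pipeline-state and bind-group pattern described above, here is a TypeScript sketch; it is not a complete program, and the API names follow the evolving W3C proposal, so they may differ between browser builds:

```typescript
async function renderOnce() {
  const adapter = await navigator.gpu.requestAdapter();
  const device = await adapter.requestDevice();

  // All state (shaders, vertex layout, blending, ...) is baked into one
  // pipeline object and validated at creation time, not inside the draw call.
  const pipeline = device.createRenderPipeline({ /* pipeline descriptor */ });

  // Resources are batched into a bind group instead of being bound one-by-one.
  const bindGroup = device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [/* buffers, textures, samplers */],
  });

  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({ /* render target attachments */ });
  pass.setPipeline(pipeline);      // one call applies all baked state
  pass.setBindGroup(0, bindGroup); // one call binds the whole resource group
  pass.draw(3);
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```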
Check out the official announcement by Apple to know more in detail.

Other news in web
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
New memory usage optimizations implemented in V8 Lite can also benefit V8
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more