
Tech News


Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to run a number of apps on the same old servers and made packaging and shipping programs easy. The same cannot be said about Docker now, as the company faces public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only by users logged into the Docker Store. Its quest to "improve the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch". The issue affected all systems worldwide that were configured with the Docker repository. All Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown: it became impossible to run a system update or upgrade on an existing system. This seven-hour worldwide outage received little tech news coverage; all that was done was a few messages on a GitHub issue. You would have expected Docker to be a little more careful after that controversy, but lo and behold, here comes yet another badly managed change.

The current matter in question

On June 20th 2018, GitHub and Reddit were abuzz with comments from confused Docker users who couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. A Docker spokesperson responded that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward. This led to a string of accusations from dedicated Docker users; screenshots of their complaints can be found on github.com. The issue is still ongoing, and with no further statements released by the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, representing a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently? It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, some of the major tools in configuration management. Specifically for CI/CD, Docker makes it possible to set up local development environments that are exactly like a live server, and to run multiple development environments from the same host, each with unique software, operating systems, and configurations. It helps to test projects on new or different servers, and it allows multiple users to work on the same project with the exact same settings, regardless of the local host environment. It also ensures that applications running in containers are completely segregated and isolated from each other, which means you get complete control over traffic flow and management.

So, what's the verdict?
Most users called the move manipulative, arguing that Docker is asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of the move (source: github.com). One Reddit user said that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux. Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It would be interesting to see how many companies stick around and use Docker in spite of the rollercoaster ride that users are put through. You can find further opinions on this matter at reddit.com.

Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?


AMD competes with Intel by launching EPYC Rome, world’s first 7 nm chip for data centers, luring in Twitter and Google

Bhagyashree R
09 Aug 2019
5 min read
On Wednesday, Advanced Micro Devices (AMD) unveiled its highly anticipated second-generation EPYC processor chip for data centers, code-named "Rome". Since the launch, the company has announced agreements with many tech giants, including Intel's biggest customers, Twitter and Google.

Lisa Su, AMD's president and CEO, said during her keynote at the launch event, "Today, we set a new standard for the modern data center. Adoption of our new leadership server processors is accelerating with multiple new enterprise, cloud and HPC customers choosing EPYC processors to meet their most demanding server computing needs."

EPYC Rome: The world's first 7 nm server chip

AMD first showcased the EPYC Rome chip, the world's first 7 nm server processor, at its Next Horizon 2018 event. Based on the Zen 2 microarchitecture, it features up to eight 7 nm chiplet processors around a 14 nm I/O die in the center, interconnected by an Infinity Fabric. The chip aims to offer twice the performance per socket and about 4x the floating-point performance of the previous generation of EPYC chips. https://www.youtube.com/watch?v=kC3ny3LBfi4

At the launch, a performance comparison based on the SPECrate 2017 int-peak benchmark showed the top-of-the-line 64-core AMD EPYC 7742 processor delivering double the performance of the top-of-the-line 28-core Intel Xeon Platinum 8280M. Priced at under $7,000, it is also a lot more affordable than Intel's chip, priced at $13,000.

AMD competes with Intel, the dominant supplier of data center chips

AMD's main competitor in the data center chip realm is Intel, which holds more than 90% of the market. However, AMD was able to capture a small market share with the release of its first-generation EPYC server chips, and coming up with a second-generation chip that is performant yet affordable gives AMD an edge over Intel. Donovan Norfolk, executive director of Lenovo's data center group, told Data Center Knowledge, "Intel had a significant portion of the market for a long time. I think they'll continue to have a significant portion of it. I do think that there are more customers that will look at AMD than have in the past."

Intel falling behind schedule in launching its 10 nm chips may also have worked in AMD's favor: after a long wait, the 10 nm chip was officially launched earlier this month, and Intel's 7 nm chips are not expected to arrive until 2021.

The EPYC Rome chip has already grabbed the attention of many tech giants. Google is planning to use the EPYC server chip in its internal data centers and also wants to offer it to external developers as part of its cloud computing offerings. Twitter will start using EPYC servers in its data centers later this year. Hewlett Packard Enterprise is already using these chips in three of its ProLiant servers and plans to have 12 systems by the end of this year. Dell also plans to add second-gen EPYC servers to its portfolio this fall. Following AMD's customer announcements, Intel shares were down 0.6% to $46.42 in after-hours trading. Though AMD's chips beat Intel's in some computing tasks, they do lag in a few desirable and advanced features.
Patrick Moorhead, founder of Moor Insights & Strategy, told Reuters, "Intel chip features for machine learning tasks and new Intel memory technology being with customers such as German software firm SAP SE (SAPG.DE) could give Intel an advantage in those areas."

The news sparked a discussion on Hacker News. One user said, "This is a big win for AMD and for me it reconfirms that their strategy of pushing into the mainstream features that Intel is trying to hold hostage for the "high end" is a good one. Back when AMD first introduced the 64-bit extensions to the x86 architecture and directly challenged Intel who was selling 64 bits as a "high end" feature in their Itanium line, it was a place where Intel was unwilling to go (commoditizing 64-bit processors). That proved pretty successful for them. Now they have done it again by commoditizing "high core count" processors. Each time they do this I wonder if Intel will ever learn that you can't "get away" with selling something for a lot of money that can be made more cheaply forever."

Another user commented, "I hope AMD turns their attention to machine learning tasks soon not just against Intel but NVIDIA also. The new Titan RTX GPUs with their extra memory and Nvlink allow for some really awesome tricks to speed up training dramatically but they nerfed it by only selling without a blower-style fan making it useless for multi-GPU setups. So the only option is to get Titan RTX rebranded as a Quadro RTX 6000 with a blower-style fan for $2,000 markup. $2000 for a fan. The only way to stop things like this will be competition in the space."

To know more in detail, you can watch EPYC Rome's launch event: https://www.youtube.com/watch?v=9Jn9NREaSvc

Intel's 10th gen 10nm 'Ice Lake' processor offers AI apps, new graphics and best connectivity
Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.
Intel's new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster


Introducing .NET Live TV – Daily Developer Live Streams from .NET Blog

Matthew Emerick
15 Oct 2020
4 min read
Today, we are launching .NET Live TV, your one-stop shop for all .NET and Visual Studio live streams across Twitch and YouTube. We are always looking for new ways to bring great content to the developer community and innovate in ways to interact with you in real time. Live streaming gives us the opportunity to deliver more content where everyone can ask questions and interact with the product teams.

We started our journey several years ago with the .NET Community Standup series. It's a weekly "behind the scenes" live stream that shows you what goes into building the runtimes, languages, frameworks, and tools we all love. As it grew, so did our dreams of delivering even more awesome .NET live stream content. .NET Live TV takes things to a whole new level with the introduction of new shows and a new website. It is a single place to bookmark so you can stay up to date with live streams across several Twitch and YouTube channels, and with a single click you can join in the conversation. Here are some of the new shows that recently launched:

Expanded .NET Community Standups

What started as the ASP.NET Community Standup has grown into 7 unique shows throughout the month! Here is a quick guide to the schedule of the shows, which all start at 10:00 AM Pacific:

Tuesday: ASP.NET hosted by Jon Galloway, with a monthly Blazor focus hosted by Safia Abdalla kicking off Nov 17!
Wednesday: Rotating: Entity Framework hosted by Jeremy Likness & Machine Learning hosted by Bri Achtman
1st Thursday: Xamarin hosted by Maddy Leger
2nd Thursday: Languages & Runtime hosted by Immo Landwerth
3rd Thursday: .NET Tooling hosted by Kendra Havens
4th Thursday: .NET Desktop hosted by Olia Gavrysh

Packed Week of .NET Shows!

Monday, 6:00 AM (PT): Join Jeff Fritz (csharpfritz) in this start-from-the-beginning series to learn C# in a talk-show format that answers viewers' questions and provides interactive samples with every episode.
Monday, 9:00 AM (PT): Join David Pine, Scott Addie, Cam Soper, and more from the Developer Relations team at Microsoft each week as they highlight the amazing members of the .NET community.
Monday, 11:00 AM (PT): A weekly show dedicated to the topic of working from home. Mads Kristensen from the Visual Studio team invites guests onto the show for conversations about anything and everything related to Visual Studio and working from home.
Tuesday, 10:00 AM (PT): Join members from the ASP.NET teams for our community standup covering great community contributions for ASP.NET, ASP.NET Core, and more.
Tuesday, 12:00 PM (PT): Join Instafluff (Raphael) each week live on Twitch as he works on fun C# game-related projects from his C# corner.
Wednesday, 10:00 AM (PT): Join the Entity Framework and Machine Learning teams for their community standups covering great community contributions.
Thursday, 10:00 AM (PT): Join the Xamarin, Languages & Runtime, .NET Tooling, and .NET Desktop teams covering great community contributions for each subject.
Thursday, 2:00 PM (PT): Join Cecil Phillip as he hosts and interviews amazing .NET contributors from the .NET teams and the community.
Friday, 2:00 PM (PT): Join Mads Kristensen from the Visual Studio team each week as he builds extensions for Visual Studio live!

More to come! Be sure to bookmark live.dot.net as we are live streaming 5 days a week and adding even more shows soon!
If you are looking for even more great developer video content, be sure to check out Microsoft Learn TV, where in addition to some of the shows from .NET Live TV you will find 24-hour-a-day streaming content on all topics. The post Introducing .NET Live TV – Daily Developer Live Streams appeared first on .NET Blog.


4 Clustering Algorithms every Data Scientist should know

Sugandha Lahoti
07 Nov 2017
6 min read
[box type="note" align="" class="" width=""]This is an excerpt from a book by John R. Hubbard titled Java Data Analysis. In this article, we see the four popular clustering algorithms: hierarchical clustering, k-means clustering, k-medoids clustering, and the affinity propagation algorithms along with their pseudo-codes.[/box] A clustering algorithm is one that identifies groups of data points according to their proximity to each other. These algorithms are similar to classification algorithms in that they also partition a dataset into subsets of similar points. But, in classification, we already have data whose classes have been identified. such as sweet fruit. In clustering, we seek to discover the unknown groups themselves. Hierarchical clustering Of the several clustering algorithms that we will examine in this article, hierarchical clustering is probably the simplest. The trade-off is that it works well only with small datasets in Euclidean space. The general setup is that we have a dataset S of m points in Rn which we want to partition into a given number k of clusters C1 , C2 ,..., Ck , where within each cluster the points are relatively close together. Here is the algorithm: Create a singleton cluster for each of the m data points. Repeat m – k times: Find the two clusters whose centroids are closest Replace those two clusters with a new cluster that contains their points The centroid of a cluster is the point whose coordinates are the averages of the corresponding coordinates of the cluster points. For example, the centroid of the cluster C = {(2, 4), (3, 5), (6, 6), (9, 1)} is the point (5, 4), because (2 + 3 + 6 + 9)/4 = 5 and (4 + 5 + 6 + 1)/4 = 4. This is illustrated in the figure below : K-means clustering A popular alternative to hierarchical clustering is the K-means algorithm. It is related to the K-Nearest Neighbor (KNN) classification algorithm. As with hierarchical clustering, the K-means clustering algorithm requires the number of clusters, k, as input. (This version is also called the K-Means++ algorithm) Here is the algorithm: Select k points from the dataset. Create k clusters, each with one of the initial points as its centroid. For each dataset point x that is not already a centroid: Find the centroid y that is closest to x Add x to that centroid’s cluster Re-compute the centroid for that cluster It also requires k points, one for each cluster, to initialize the algorithm. These initial points can be selected at random, or by some a priori method. One approach is to run hierarchical clustering on a small sample taken from the given dataset and then pick the centroids of those resulting clusters. K-medoids clustering The k-medoids clustering algorithm is similar to the k-means algorithm, except that each cluster center, called its medoid, is one of the data points instead of being the mean of its points. The idea is to minimize the average distances from the medoids to points in their clusters. The Manhattan metric is usually used for these distances. Since those averages will be minimal if and only if the distances are, the algorithm is reduced to minimizing the sum of all distances from the points to their medoids. This sum is called the cost of the configuration. Here is the algorithm: Select k points from the dataset to be medoids. Assign each data point to its closest medoid. This defines the k clusters. 
K-medoids clustering

The k-medoids clustering algorithm is similar to the k-means algorithm, except that each cluster center, called its medoid, is one of the data points instead of being the mean of its points. The idea is to minimize the average distances from the medoids to points in their clusters. The Manhattan metric is usually used for these distances. Since those averages will be minimal if and only if the distances are, the algorithm is reduced to minimizing the sum of all distances from the points to their medoids. This sum is called the cost of the configuration. Here is the algorithm:

1. Select k points from the dataset to be medoids.
2. Assign each data point to its closest medoid. This defines the k clusters.
3. For each cluster Cj: compute the sum s = Σj sj, where each sj = Σ{ d(x, yj) : x ∈ Cj }, and change the medoid yj to whatever point in the cluster Cj minimizes s. If the medoid yj was changed, re-assign each x to the cluster whose medoid is closest.
4. Repeat step 3 until s is minimal.

This is illustrated by the simple example in Figure 8.16 of the book, which shows 10 data points in 2 clusters, with the two medoids drawn as filled points. The initial configuration is:
C1 = {(1,1), (2,1), (3,2), (4,2), (2,3)}, with y1 = x1 = (1,1)
C2 = {(4,3), (5,3), (2,4), (4,4), (3,5)}, with y2 = x10 = (3,5)
The sums are:
s1 = d(x2,y1) + d(x3,y1) + d(x4,y1) + d(x5,y1) = 1 + 3 + 4 + 3 = 11
s2 = d(x6,y2) + d(x7,y2) + d(x8,y2) + d(x9,y2) = 3 + 4 + 2 + 2 = 11
s = s1 + s2 = 11 + 11 = 22
The first part of step 3 changes the medoid for C1 to y1 = x3 = (3,2). This causes the clusters to change, in the second part of step 3, to:
C1 = {(1,1), (2,1), (3,2), (4,2), (2,3), (4,3), (5,3)}, with y1 = x3 = (3,2)
C2 = {(2,4), (4,4), (3,5)}, with y2 = x10 = (3,5)
This makes the sums:
s1 = 3 + 2 + 1 + 2 + 2 + 3 = 13
s2 = 2 + 2 = 4
s = s1 + s2 = 13 + 4 = 17
Step 3 then repeats for cluster C2. The computations are:
C1 = {(1,1), (2,1), (3,2), (4,2), (4,3), (5,3)}, with y1 = x3 = (3,2)
C2 = {(2,3), (2,4), (4,4), (3,5)}, with y2 = x8 = (2,4)
s = s1 + s2 = (3 + 2 + 1 + 2 + 3) + (1 + 2 + 2) = 11 + 5 = 16
The algorithm continues with two more changes, finally converging to a minimal configuration. This version of k-medoid clustering is also called partitioning around medoids (PAM).

Affinity propagation clustering

One disadvantage of each of the clustering algorithms presented so far (hierarchical, k-means, k-medoids) is the requirement that the number of clusters k be determined in advance. The affinity propagation clustering algorithm does not have that requirement. Developed in 2007 by Brendan J. Frey and Delbert Dueck at the University of Toronto, it has become one of the most widely used clustering methods.

Like k-medoid clustering, affinity propagation selects cluster center points, called exemplars, from the dataset to represent the clusters. This is done by message-passing between the data points. The algorithm works with three two-dimensional arrays:

sij = the similarity between xi and xj
rik = responsibility: message from xi to xk on how well-suited xk is as an exemplar for xi
aik = availability: message from xk to xi on how well-suited xk is as an exemplar for xi

Here is the complete algorithm:

1. Initialize the similarities: sij = –d(xi, xj)², for i ≠ j; sii = the average of those other sij values.
2. Repeat until convergence:
   Update the responsibilities: rik = sik – max{ aij + sij : j ≠ k }
   Update the availabilities: aik = min{ 0, rkk + Σ{ max{0, rjk} : j ≠ i ∧ j ≠ k } }, for i ≠ k; akk = Σ{ max{0, rjk} : j ≠ k }

A point xk will be an exemplar for a point xi if aik + rik = maxj{ aij + rij }.
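Here is a compact NumPy sketch of those message-passing updates, for illustration only: the damping factor and the vectorized max-trick are standard implementation choices that the excerpt does not prescribe.

```python
import numpy as np

def affinity_propagation(X, iters=200, damping=0.5):
    """Illustrative affinity propagation on an (m, n) data array X."""
    m = len(X)
    # Similarities: negative squared Euclidean distance between points;
    # the diagonal is set to the average of the other similarities.
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    off = S[~np.eye(m, dtype=bool)]
    np.fill_diagonal(S, off.mean())
    R = np.zeros((m, m))   # responsibilities r(i, k)
    A = np.zeros((m, m))   # availabilities a(i, k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{j != k} (a(i,j) + s(i,j))
        AS = A + S
        idx = AS.argmax(axis=1)
        first = AS[np.arange(m), idx]
        AS[np.arange(m), idx] = -np.inf          # mask out the row maximum
        second = AS.max(axis=1)                  # second-largest per row
        Rnew = S - first[:, None]
        Rnew[np.arange(m), idx] = S[np.arange(m), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{j not in {i,k}} max(0, r(j,k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())       # keep r(k,k) in column sums
        Anew = Rp.sum(axis=0)[None, :] - Rp
        diag = Anew.diagonal().copy()            # a(k,k) is not clamped at 0
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, diag)
        A = damping * A + (1 - damping) * Anew
    # x_k is the exemplar for x_i where a(i,k) + r(i,k) is maximal.
    return (A + R).argmax(axis=1)
```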
If you enjoyed this excerpt from the book Java Data Analysis by John R. Hubbard, check out the book to learn how to implement various machine learning algorithms, data visualization, and more in Java.


Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Savia Lobo
14 Aug 2018
5 min read
On 31st July 2018, security researcher Eric Holmes gained access to Homebrew's GitHub repo with ease (he documents his experience in an in-depth Medium post). Homebrew is a free and open-source software package management system, with well-known packages like node, git, and many more, that simplifies the installation of software on macOS. Eric gained git push access to Homebrew/brew and Homebrew/homebrew-core, and was able to make his first commit into Homebrew's GitHub repo within 30 minutes.

Attack = higher chances of obtaining user credentials

After gaining easy access to Homebrew's GitHub repositories, Eric's prime motive was to uncover user credentials of some members of the Homebrew GitHub org. For this, he used an OSINT tool by Michael Henriksen called gitrob, which automates the credential search; however, he could not find anything interesting. Next, he explored Homebrew's previously disclosed issues on https://hackerone.com/Homebrew, which led him to the observation that Homebrew runs a Jenkins instance that is (intentionally) publicly exposed at https://jenkins.brew.sh. Probing further, Eric noticed that the builds in the "Homebrew Bottles" project were making authenticated pushes to the BrewTestBot/homebrew-core repo, which led him to an exposed GitHub API token. The token opened commit access to these core Homebrew repos: Homebrew/brew, Homebrew/homebrew-core, and Homebrew/formulae.brew.sh.

Eric stated in his post, "If I were a malicious actor, I could have made a small, likely unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it." Via such a backdoor, intruders could have gained access to the private company networks that use Homebrew, which could further lead to a data breach on a large scale.

Eric reported the issue to Homebrew developer Mike McQuaid, who publicly disclosed it on the blog at https://brew.sh/2018/08/05/security-incident-disclosure/. Within a few hours the credentials had been revoked, replaced, and sanitised within Jenkins so they would not be revealed in future. Homebrew/brew and Homebrew/homebrew-core were updated so that non-administrators on those repositories cannot push directly to master. The Homebrew team worked with GitHub to audit the access token and ensure it wasn't used maliciously and didn't make any unexpected commits to the core Homebrew repos.

As an ethical hacker, Eric reported the vulnerabilities he found to the Homebrew team and did no harm to the repo itself. But not all projects may have such happy endings.

How can one safeguard their systems from supply chain attacks?

The precautions Eric Holmes took were credible: he informed the Homebrew developers. However, not every hacker has good intentions, and it is one's responsibility to keep a check on all the supply chains associated with an organization.

Keeping a check on all the libraries

One should not allow random libraries into the supply chain. It is difficult to partition libraries from an organization's custom code, so both run with the same privilege, risking the company's security. One should make sure to set policies around the code the company wishes to allow: only projects with high popularity, active committers, and evidence of process should be admitted.
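As a rough illustration of automating that policy, here is a hedged Python sketch that checks a candidate library's GitHub metadata for popularity and recent activity; the thresholds are invented for the example, and a real policy would weigh many more signals (committer count, release process, audit history).

```python
import datetime as dt
import requests

# Hypothetical policy thresholds; tune these to your organization's risk appetite.
MIN_STARS = 500
MAX_DAYS_SINCE_PUSH = 180

def library_allowed(owner: str, repo: str) -> bool:
    """Gate a candidate dependency on popularity and recent activity."""
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    resp.raise_for_status()
    meta = resp.json()
    stars = meta["stargazers_count"]
    pushed = dt.datetime.fromisoformat(meta["pushed_at"].rstrip("Z"))
    days_idle = (dt.datetime.utcnow() - pushed).days
    return stars >= MIN_STARS and days_idle <= MAX_DAYS_SINCE_PUSH

# Example: check a library before admitting it to the supply chain.
print(library_allowed("psf", "requests"))
```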
Establishing guidelines

Each company should create guidelines for the secure use of the libraries it selects. For this, teams should define up front what each library is expected to be used for, and document for developers how to safely install, configure, and use it within their code. Dangerous methods should be identified, along with guidance on how to use them safely.

A thorough vigilance within the inventory

Every organization should keep a check on its inventory to know what open source libraries it is using, and should set up a notification system that keeps it abreast of new vulnerabilities affecting its applications and servers.

Protection during runtime

Organizations should also make use of runtime application self-protection (RASP) to prevent both known and unknown library vulnerabilities from being exploited. If new vulnerabilities are noticed, the RASP infrastructure enables one to respond in minutes.

The software supply chain is an essential part of creating and deploying applications quickly, so one should take complete care to avoid any misuse via this channel. Read the detailed story of Homebrew's attack escape on its blog post, and Eric's firsthand account of how he went about planning the attack and the motivation behind it on his Medium post.

DCLeaks and Guccifer 2.0: Hackers used social engineering to manipulate the 2016 U.S. elections
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
YouTube has a $25 million plan to counter fake news and misinformation


Deepmind’s AlphaZero shows unprecedented growth in AI, masters 3 different games

Sugandha Lahoti
07 Dec 2018
3 min read
Google's DeepMind introduced AlphaZero last year as a reinforcement learning program that masters three different board games, chess, shogi, and Go, beating world champions in each case. Yesterday, DeepMind announced that a full evaluation of AlphaZero has been published in the journal Science, confirming and updating the preliminary results.

The research paper describes how DeepMind's AlphaZero learns each game from scratch, without any human intervention and with no inbuilt domain knowledge beyond the basic rules of the game. Unlike traditional game-playing programs, AlphaZero uses deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm. The program's first plays are completely random; over time, the system uses RL to learn from wins, losses, and draws to adjust the parameters of the neural network. The amount of training varies, taking approximately 9 hours for chess, 12 hours for shogi, and 13 days for Go. For searching, it uses Monte Carlo Tree Search (MCTS) to select the most promising moves in games.
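For readers curious how MCTS "selects the most promising moves" in AlphaZero-style systems, here is a toy Python sketch of the PUCT selection rule described in the underlying papers; the data layout and the exploration constant are assumptions for illustration, not DeepMind's code.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, the PUCT rule used in AlphaZero-style MCTS.

    children: list of dicts with prior P, visit count N, and total value W.
    c_puct is an exploration constant; its value here is illustrative.
    """
    total_visits = sum(ch["N"] for ch in children)

    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0            # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return q + u                                          # exploit + explore

    return max(children, key=score)

# Example: three candidate moves with network priors and visit statistics.
moves = [{"P": 0.5, "N": 10, "W": 6.0},
         {"P": 0.3, "N": 2,  "W": 1.5},
         {"P": 0.2, "N": 0,  "W": 0.0}]
best = puct_select(moves)
```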
Testing and evaluation

DeepMind's AlphaZero was tested against the best engines for chess (Stockfish), shogi (Elmo), and Go (AlphaGo Zero). All matches were played with three hours per game, plus an additional 15 seconds for each move. AlphaZero was able to beat its opponent in each evaluation. Per DeepMind's blog:

In chess, AlphaZero defeated the 2016 TCEC (Season 9) world champion Stockfish, winning 155 games and losing just six out of 1,000. To verify AlphaZero's robustness, it also played a series of matches that started from common human openings; in each opening, AlphaZero defeated Stockfish. It also played a match starting from the set of opening positions used in the 2016 TCEC world championship, along with a series of additional matches against the most recent development version of Stockfish and a variant of Stockfish that uses a strong opening book. In all matches, AlphaZero won.

In shogi, AlphaZero defeated the 2017 CSA world champion version of Elmo, winning 91.2% of games.

In Go, AlphaZero defeated AlphaGo Zero, winning 61% of games.

AlphaZero's ability to master three different complex games is important progress towards building a single AI system that can solve a wide range of real-world problems and generalize to new situations. People on the internet are also highly excited about this new achievement.

https://twitter.com/DanielKingChess/status/1070755986636488704
https://twitter.com/demishassabis/status/1070786070806192129
https://twitter.com/TrevorABranch/status/1070765877669187584
https://twitter.com/LeonWatson/status/1070777729015013376
https://twitter.com/Kasparov63/status/1070775097970094082

Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare
Google makes major inroads into healthcare tech by absorbing DeepMind Health
AlphaZero: The genesis of machine intuition

A universal bypass tricks Cylance AI antivirus into accepting all top 10 Malware revealing a new attack surface for machine learning based security

Sugandha Lahoti
19 Jul 2019
4 min read
Researchers from Skylight Cyber, an Australian cybersecurity enterprise, have tricked BlackBerry Cylance's AI-based antivirus product. They identified a peculiar bias of the product towards a specific game engine and exploited it to trick the product into accepting malicious files. This discovery means that companies working in the field of artificial intelligence-driven cybersecurity need to rethink their approach to creating new products. The bypass is not limited to Cylance; the researchers chose it because it is a leading vendor in the field and its product is publicly available.

The researchers, Adi Ashkenazy and Shahar Zini of Skylight Cyber, say they can reverse the model of any AI-based EPP (Endpoint Protection Platform) product and find a bias enabling a universal bypass. Essentially, if you could truly understand how a certain model works and the type of features it uses to reach a decision, you could fool it consistently.

How did the researchers trick Cylance into thinking bad is good?

Cylance's machine-learning algorithm has been trained to favor a benign file, causing it to ignore malicious code if it sees strings from the benign file attached to a malicious file. The researchers took advantage of this and appended strings from a non-malicious file to a malicious one, tricking the system into thinking the malicious file is safe and avoiding detection. The trick works even if the Cylance engine had previously concluded the same file was malicious before the benign strings were appended to it.

The Cylance engine keeps a scoring mechanism ranging from -1000 for the most malicious files to +1000 for the most benign, and it whitelists certain families of executable files to avoid triggering false positives on legitimate software. The researchers suspected that the machine learning model would be biased toward code in those whitelisted files, so they extracted strings from an online gaming program that Cylance had whitelisted and appended them to malicious files. The Cylance engine tagged the files benign, shifting scores from high negative numbers to high positive ones. https://youtu.be/NE4kgGjhf1Y

The researchers tested the trick against the WannaCry ransomware, SamSam ransomware, the popular Mimikatz hacking tool, and hundreds of other known malicious files. The method proved successful for 100% of the top 10 malware for May 2019, and close to 90% of a larger sample of 384 malware files.
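To see why string-appending can flip a score, consider this deliberately simplified toy classifier, not Cylance's actual model: a linear score over string features, clamped to the [-1000, +1000] range described above. All strings and weights here are invented for the example, but the failure mode is the same.

```python
# Toy linear model: each known string contributes a fixed weight to the score.
WEIGHTS = {
    "CreateRemoteThread": -400,   # API name common in code injectors
    "EncryptFiles":       -500,   # ransomware-like marker
    "GameEngineInit":     +500,   # string from a whitelisted game engine
    "LoadTexture":        +450,   # another benign, whitelisted string
}

def score(file_strings):
    """Sum feature weights; strongly negative means 'malicious'."""
    total = sum(WEIGHTS.get(token, 0) for token in file_strings)
    return max(-1000, min(1000, total))

malicious = ["CreateRemoteThread", "EncryptFiles"]
print(score(malicious))                                       # -900: flagged
# Appending whitelisted strings leaves the payload intact but flips the verdict.
print(score(malicious + ["GameEngineInit", "LoadTexture"]))   # +50: passes
```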
"As far as I know, this is a world-first, proven global attack on the ML [machine learning] mechanism of a security company," Adi Ashkenazy, CEO of Skylight Cyber, told Motherboard, which first reported the news. "After around four years of super hype [about AI], I think this is a humbling example of how the approach provides a new attack surface that was not possible with legacy [antivirus software]."

Gregory Webb, chief executive officer of malware protection firm Bromium Inc., told SiliconANGLE that the news raises doubts about the concept of categorizing code as "good" or "bad". "This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted," Webb said. "Ultimately, AI is not a silver bullet."

Martijn Grooten, a security researcher, also added his views on the Cylance bypass story. He states, "This is why we have good reasons to be concerned about the use of AI/ML in anything involving humans because it can easily reinforce and amplify existing biases."

The Cylance team has now confirmed the global bypass issue and will release a hotfix in the next few days. "We are aware that a bypass has been publicly disclosed by security researchers. We have verified there is an issue which can be leveraged to bypass the anti-malware component of the product. Our research and development teams have identified a solution and will release a hotfix automatically to all customers running current versions in the next few days," the team wrote in a blog post.

You can go through the blog post by the Skylight Cyber researchers for additional information.

Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered
25 million Android devices infected with 'Agent Smith', a new mobile malware
FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute


Amazon Transcribe Streaming announces support for WebSockets

Savia Lobo
29 Jul 2019
3 min read
Last week, Amazon announced that its automatic speech recognition (ASR) service, Amazon Transcribe, now supports WebSockets. According to Amazon, "WebSocket support opens Amazon Transcribe Streaming up to a wider audience and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge".

Amazon Transcribe lets developers easily add speech-to-text capability to their applications. Amazon announced its general availability at the AWS San Francisco Summit in 2018. With the Amazon Transcribe API, users can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech; real-time transcripts from a live audio stream are also possible. Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. Amazon adds the new WebSocket support as another integration option for bringing real-time voice capabilities to projects built using Transcribe.

What are WebSockets?

WebSockets are a protocol built atop TCP, similar to HTTP. HTTP is excellent for short-lived requests, but it does not handle persistent real-time communications well. For this reason, the first Amazon Transcribe Streaming API made available uses HTTP/2 streams, which solve a lot of the issues HTTP had with real-time communications. However, while an HTTP connection is normally closed at the end of the message, Amazon notes, "a WebSocket connection remains open". With this advantage, messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, which means that the server and client can both transmit data at the same time. WebSockets were also designed "for cross-domain usage, so there's no messing around with cross-origin resource sharing (CORS) as there is with HTTP".

Amazon Transcribe Streaming using WebSockets

While using the WebSocket protocol to stream audio, Amazon Transcribe transcribes the stream in real time. When a user encodes the audio with event stream encoding, Amazon Transcribe responds with a JSON structure, also encoded using event stream encoding. The key components of a WebSocket request to Amazon Transcribe are:

Creating a pre-signed URL to access Amazon Transcribe.
Creating binary WebSocket frames containing event stream encoded audio data.
Handling WebSocket frames in the response.
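For orientation, here is a bare-bones sketch of that flow using the third-party Python websockets library. The SigV4 pre-signed URL generation and the AWS event-stream binary framing are non-trivial, so they appear here as hypothetical helpers (build_presigned_url, encode_audio_event, decode_event) that you would implement from the AWS documentation.

```python
import asyncio
import websockets  # third-party: pip install websockets

# Hypothetical helpers: SigV4 presigning and AWS event-stream framing are
# assumed to be implemented elsewhere, per the AWS documentation.
from transcribe_helpers import build_presigned_url, encode_audio_event, decode_event

async def stream_audio(pcm_chunks):
    url = build_presigned_url(region="us-east-1",
                              language_code="en-US",
                              sample_rate=16000)
    async with websockets.connect(url) as ws:
        for chunk in pcm_chunks:                      # raw PCM audio bytes
            await ws.send(encode_audio_event(chunk))  # binary frame out
        await ws.send(encode_audio_event(b""))        # empty frame ends the stream
        async for frame in ws:                        # transcription events back
            print(decode_event(frame))                # JSON transcript structures

# asyncio.run(stream_audio(chunks)) with any iterable of PCM byte chunks
```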
The languages that Amazon Transcribe currently supports for real-time transcription are British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US). To know more about the WebSocket API in detail, visit Amazon's official post.

Understanding WebSockets and Server-sent Events in Detail
Implementing a non-blocking cross-service communication with WebClient [Tutorial]
Introducing Kweb: A Kotlin library for building rich web applications


Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Bhagyashree R
26 Sep 2018
2 min read
Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

Improved performance: Using Wasmjit, you can run WebAssembly modules in kernel space (ring 0). This provides access to system calls as normal function calls, eliminating the user-kernel transition overhead, and also avoids the scheduling overhead of swapping page tables. The result is a performance boost for syscall-bound programs like web servers or FUSE file systems.

No need to run an entire browser: Wasmjit also comes with a host environment for running in user space on POSIX systems, which allows running WebAssembly modules without having to run an entire browser.

What tools do you need to get started?

You require the following to get started with Wasmjit:

A standard POSIX C development environment with cc and make
The Emscripten SDK
Optionally, kernel headers on Linux (the linux-headers-amd64 package on Debian, kernel-devel on Fedora)

What's in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. Coming releases should bring implementations and improvements along the following lines:

Enough Emscripten host-bindings to run nginx.wasm
Introduction of an interpreter
Rust runtime for Rust-generated wasm files
Go runtime for Go-generated wasm files
Optimized x86_64 JIT
arm64 JIT
macOS kernel module

What to consider when using this runtime?

Wasmjit uses vmalloc(), a function for allocating a contiguous memory region in the virtual address space, for code and data section allocations. This prevents those pages from ever being swapped to disk, so indiscriminate access to the /dev/wasm device can make a system vulnerable to denial-of-service attacks. To mitigate this risk, a future release will provide a system-wide limit on the amount of memory used by the /dev/wasm device.

To get started with Wasmjit, check out its GitHub repository.

Why is everyone going crazy over WebAssembly?
Unity Benchmark report approves WebAssembly load times and performance in popular web browsers
Golang 1.11 is here with modules and experimental WebAssembly port among other updates


Intel unveils the first 3D Logic Chip packaging technology, ‘Foveros’, powering its new 10nm chips, ‘Sunny Cove’

Savia Lobo
13 Dec 2018
3 min read
Yesterday, the chip manufacturing giant unveiled Foveros, its new 3D packaging technology, which makes it possible to stack logic chips on top of one another. Intel says users can expect the first products to use Foveros in the second half of next year. Talking about the stacking logic, Raja Koduri, Intel's chief architect, said, "You can pack more transistors in a given space. And also you can pack different kinds of transistors; if you want to put a 5G radio right on top of a CPU, solving the stacking problem would be great, because you have all of your functionality but also a small form factor."

With the Foveros technology, Intel will allow for smaller "chiplets": fast logic chips sitting atop a base die that handles I/O and power delivery. The project will also help Intel overcome one of its biggest challenges, building full chips at the 10 nm scale. The first Foveros-backed product will be a 10-nanometer compute element on a base die, typically for use in low-power devices.

Source: Intel

Sunny Cove: Intel's codename for the new 10 nm chips

Sunny Cove will be at the heart of Intel's next-generation Core and Xeon processors, which will be available in the latter half of next year. According to Intel, Sunny Cove will provide improved latency and will allow more operations to be executed in parallel (thus acting more like a GPU). On the graphics front, Intel also has new Gen11 integrated graphics "designed to break the 1 TFLOPS barrier," which will be part of these Sunny Cove chips. Intel also promises improved speeds in AI-related tasks, cryptography, and machine learning, among other new features of the CPUs.

According to a detailed report by Ars Technica, "Sunny Cove makes the first major change to x64 virtual memory support since AMD introduced its x86-64 64-bit extension to x86 in 2003. Bits 0 through 47 are used, with the top 16 bits, 48 through 63, all copies of bit 47. This limits virtual address space to 256TB. These systems can also support a maximum of 256TB of physical memory."

Starting from the second half of next year, everything from mobile devices to data centers may feature Foveros processors over time. "The company wouldn't say where, exactly, the first Foveros-equipped chip will end up, but it sounds like it'll be ideal for incredibly thin and light machines," Engadget reports.

To know more about this news in detail, visit Intel Newsroom.

Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
Apple T2 security chip has Touch ID, Security Enclave, hardware to prevent microphone eavesdropping, amongst many other features!
How the Titan M chip will improve Android security

New iPhone exploit checkm8 is unpatchable and can possibly lead to permanent jailbreak on iPhones

Sugandha Lahoti
30 Sep 2019
4 min read
An unnamed iOS researcher who goes by the Twitter handle @axi0mX has released a new iOS exploit, checkm8, that affects all iOS devices running on A5 to A11 chipsets. The exploit targets vulnerabilities in Apple's bootrom (secure boot ROM), which can give phone owners and hackers deep-level access to their iOS devices. Once a device is jailbroken this way, Apple cannot block or patch it out with a future software update, so the exploit can lead to a permanent, unblockable jailbreak on iPhones. Jailbreaking can allow hackers to get root access, enabling them to install software that is unavailable in the Apple App Store, run unsigned code, read and write to the root filesystem, and more.

https://twitter.com/axi0mX/status/1178299323328499712

The researcher considers checkm8 possibly the biggest news in the iOS jailbreak community in years, because bootrom jailbreaks are mostly permanent and cannot be patched: fixing them requires physical modifications to device chipsets, which could only happen through recalls or mass replacements. It is also the first bootrom-level exploit publicly released for an iOS device since the iPhone 4, which came out almost a decade ago.

axi0mX had also released another jailbreak-enabling exploit, alloc8, in 2017. alloc8 exploits a powerful vulnerability in the bootrom's malloc function, applicable to iPhone 3GS devices. checkm8, by contrast, impacts devices from the iPhone 4S (A5 chip) through the iPhone 8 and iPhone X (A11 chip). The only exception is the A12 processor found in the iPhone XS / XR and 11 / 11 Pro devices, for which Apple has patched the flaw. A full jailbreak with Cydia on the latest iOS version is possible, but requires additional work.

Explaining the reason for making this iOS exploit public, @axi0mX said, "a bootrom exploit for older devices makes iOS better for everyone. Jailbreakers and tweak developers will be able to jailbreak their phones on latest version, and they will not need to stay on older iOS versions waiting for a jailbreak. They will be safer." The researcher adds, "I am releasing my exploit for free for the benefit of iOS jailbreak and security research community. Researchers and developers can use it to dump SecureROM, decrypt keybags with AES engine, and demote the device to enable JTAG. You still need additional hardware and software to use JTAG."

For now, the checkm8 exploit is released in beta and there is no actual jailbreak yet: you can't simply download a tool, crack your device, and start downloading apps and modifications to iOS. axi0mX's exploit is available on GitHub, but the code isn't recommended for users without proper technical skills, as it could easily result in bricked devices. Nonetheless, it is an unpatchable issue and poses security risks for iOS users. Apple has not yet acknowledged the checkm8 exploit. A number of people tweeted about the exploit and tried it.

https://twitter.com/FCE365/status/1177558724719853568
https://twitter.com/SparkZheng/status/1178492709863976960
https://twitter.com/dangoodin001/status/1177951602793046016

The past year saw a number of iOS exploits. Last month, Apple accidentally reintroduced in iOS 12.4 a bug that had been patched in iOS 12.3; a security researcher who goes by the name Pwn20wnd on Twitter released unc0ver v3.5.2, a jailbreaking tool that can jailbreak A7-A11 devices.
In July, two members of the Google Project Zero team revealed six "interactionless" security bugs that can affect iOS by exploiting the iMessage client. Four of these bugs can execute malicious code on a remote iOS device without any prior user interaction.

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
'Dropbox Paper' leaks out email addresses and names on sharing document publicly
DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants


Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Apache Flink's stream processing company Ververica announced the launch of Stateful Functions, an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications. It is built with the aim of bringing together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read More: Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Stephan Ewen, co-founder and CTO at Ververica, says, "Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications." He further adds, "Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to space."

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the requirement of additional infrastructure for application state management, reduces operational overhead, and reduces overall system complexity. Stateful functions are aimed at helping users define independent functions with a small footprint, enabling the resources to interact reliably with each other. Each function has persistent, user-defined state in local variables and can arbitrarily message other functions; a toy model of this programming style is sketched after the list below. The stateful function framework simplifies use cases such as:

Asynchronous application processes (checkout, payment, logistics)
Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
Real-time context and statistics (ML feature assembly, recommenders)
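To give a feel for that programming style, here is a toy, single-process Python model of small functions with persisted per-entity state messaging each other. It is emphatically not the Stateful Functions API; every name in it is invented for illustration.

```python
from collections import defaultdict, deque

class ToyRuntime:
    """Toy model: registered functions, per-address state, and a mailbox."""
    def __init__(self):
        self.functions = {}                # function type name -> handler
        self.state = defaultdict(dict)     # (type, entity id) -> persisted vars
        self.mailbox = deque()

    def register(self, type_name, handler):
        self.functions[type_name] = handler

    def send(self, type_name, entity_id, message):
        # Any function may message any other; there is no fixed DAG.
        self.mailbox.append((type_name, entity_id, message))

    def run(self):
        while self.mailbox:
            type_name, entity_id, message = self.mailbox.popleft()
            state = self.state[(type_name, entity_id)]   # co-located state
            self.functions[type_name](self, state, entity_id, message)

def greeter(rt, state, who, message):
    state["seen"] = state.get("seen", 0) + 1   # persisted across invocations
    print(f"Hello {who}, message #{state['seen']}: {message}")

rt = ToyRuntime()
rt.register("greeter", greeter)
rt.send("greeter", "alice", "hi")
rt.send("greeter", "alice", "hi again")
rt.run()   # alice's count survives between the two messages
```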
The runtime of the Stateful Functions API is based on the stream processing capability of Apache Flink, and it extends Flink's powerful model for state management and fault tolerance. The major advantage of this framework is that state and computation are co-located on the same side of the network, which means you don't need a round trip per record to fetch state from an external storage system, nor a specific state management pattern for consistency. Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink's DataStream API and uses a lightweight version of process functions. "The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG," states the official blog.

Image source: Ververica blog

As the blog's figure shows, applications of stateful functions comprise multiple bundles of functions that are multiplexed into a single Flink application, enabling them to interact consistently and reliably with each other. This lets many small jobs share the same pool of resources and scale as needed. Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website to know more details.

OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more
Developers ask for an option to disable Docker Compose from automatically reading the .env file
Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack

Bhagyashree R
15 May 2019
4 min read
Yesterday, Atlassian Bitbucket, GitHub, and GitLab published a joint incident report in the wake of the Git ransomware attack on the three platforms earlier this month. The post sheds light on the details of the ransom event, the measures the platforms are taking to protect users, and the next steps for affected repo owners.

https://twitter.com/github/status/1128332167229202433

The Git ransom attack

On May 2, the security teams at Atlassian Bitbucket, GitHub, and GitLab started getting numerous reports from users about their accounts being compromised. The reports said that source code from their repositories, both private and public, was being wiped and replaced with the following ransom note:

"To recover your lost data and avoid leaking it: Send us 0.1 Bitcoin (BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and contact us by Email at admin@gitsbackup.com with your Git login and a Proof of Payment. If you are unsure if we have your data, contact us and we will send you a proof. Your code is downloaded and backed up on our servers. If we don't receive your payment in the next 10 Days, we will make your code public or use them otherwise."

The user accounts were compromised using legitimate user credentials, including passwords, app passwords, API keys, and personal access tokens. After getting access to the accounts, the attackers performed command-line Git commits that overwrote the source code in the repositories with the ransom note. To recover a repository whose latest copy you still have on your computer, you can force-push the local copy to the current HEAD with the 'git push origin HEAD:master --force' command; if not, you can clone the repository and use the git reflog or git fsck commands to find your last commit and change the HEAD.

What did the investigation reveal?

A basic GitHub search shows that 267 repositories were affected by the ransom attack. While investigating how the credential leakage happened, the security teams found a public third-party credential dump hosted by the same hosting provider where the attack had originated. The dump held credentials for nearly one-third of the attacked accounts. On finding this, the platforms took steps to invalidate the credentials by resetting or revoking them. Further investigation showed continuous scanning, from the same IP address as the attacker's, for publicly exposed .git/config and other environment files, which may hold sensitive information like credentials and personal access tokens. Similar scanning behavior was found from other IPs residing with the same hosting provider.

How can you protect your repositories from such attacks?

Strong and unique passwords: Users should use strong and unique passwords, as attackers can easily crack simple passwords through brute-force attacks.

Enabling multi-factor authentication (MFA): Users are recommended to use multi-factor authentication, which is supported on all three platforms. MFA provides better security by combining two or more independent credentials for authentication.

Understanding personal access tokens (PATs) and their risks: PATs serve as an alternative to passwords when you are using two-factor authentication. Users should ensure that these are not publicly accessible in repositories or on web servers, as in some situations these tokens may have read or write access to repositories. The report further recommends that users supply tokens as environment variables and avoid hardcoding them into their programs, as the small sketch below illustrates.
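As a minimal illustration of that recommendation (the variable name here is invented, not a standard one), a script can read its token from the environment at startup instead of embedding it in source code that may end up in a public repository.

```python
import os

# Read the token from the environment; never hardcode it in committed code.
# "GIT_ACCESS_TOKEN" is an illustrative name, not a standard variable.
token = os.environ.get("GIT_ACCESS_TOKEN")
if token is None:
    raise SystemExit("GIT_ACCESS_TOKEN is not set; refusing to continue")
```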
What the investigation revealed

A basic GitHub search shows that 267 repositories were affected by the ransom attack. While investigating how the credentials leaked, the security teams found a public third-party credential dump hosted by the same hosting provider from which the attack originated. The dump held credentials for nearly one-third of the attacked accounts. On discovering this, the platforms invalidated the affected credentials by resetting or revoking them.

Further investigation revealed continuous scanning, from the same IP address as the attacker, for publicly exposed .git/config and other environment files, which may contain sensitive information such as credentials and personal access tokens. Similar scanning behavior was observed from other IPs residing on the same hosting provider.

How you can protect your repositories from such attacks

Strong and unique passwords: Use strong and unique passwords, as attackers can easily crack simple ones through brute-force attacks.

Enabling multi-factor authentication (MFA): All three platforms support multi-factor authentication, which provides better security by combining two or more independent credentials for authentication.

Understanding personal access tokens (PATs) and their risks: PATs serve as an alternative to passwords when you are using two-factor authentication. Make sure they are not publicly accessible in repositories or on web servers, as in some situations these tokens may have read or write access to repositories. The report further recommends storing them in environment variables and avoiding hardcoding them into your programs.

Additionally, the three platforms offer other features that can help prevent such attacks from recurring. Bitbucket gives admins the ability to control user access through IP whitelisting on its Premium plan. GitHub performs token scanning on public repositories to check for known token formats and notifies the service providers if secrets are published to public GitHub repositories. GitLab 11.9 ships with a feature called Secret Detection, which scans repositories to find API keys and other information that should not be there.

To read the official announcement, check out the joint incident report on the GitLab blog.

GitHub announces beta version of GitHub Package Registry, its new package management service

GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories

Mozilla’s sponsored security audit finds a critical vulnerability in the tmux integration feature of iTerm2

Vincy Davis
10 Oct 2019
3 min read
Yesterday, Mozilla announced that a critical security vulnerability is present in the terminal multiplexer (tmux) integration feature of all versions of iTerm2, the GPL-licensed terminal emulator for macOS. The vulnerability was found in a security audit sponsored by the Mozilla Open Source Support Program (MOSS), which delivers security audits for open source technologies. Mozilla and iTerm2's developer George Nachman have together developed and released a patch for the vulnerability in iTerm2 version 3.3.6.

Read Also: MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more

According to the official blog post, MOSS sponsored the iTerm2 security audit because of the tool's popularity among developers and system administrators, and because iTerm2 processes untrusted data. Radically Open Security (ROS), the firm that conducted the audit, determined that the vulnerability had been present in iTerm2 for the last seven years.

An attacker can exploit this vulnerability (CVE-2019-9535) by producing malicious output to the terminal, either through commands run on the targeted user's computer or remotely, and thereby execute arbitrary commands with the privileges of the targeted user. Tom Ritter of Mozilla says, “Example attack vectors for this would be connecting to an attacker-controlled SSH server or commands like curl http://attacker.com and tail -f /var/log/apache2/referer_log. We expect the community will find many more creative examples.”

Nachman says that this is a serious vulnerability because “in some circumstances, it could allow an attacker to execute commands on your machine when you view a file or otherwise receive input they have crafted in iTerm2.” He strongly recommends that all users upgrade to the latest 3.3.6 version.

The CERT Coordination Center has pointed out that since the tmux integration cannot be disabled through configuration, a complete resolution to this vulnerability is not yet available.

Users have praised both Mozilla and the iTerm2 team for the security update. A user commented on Hacker News, “I checked for update, installed and relaunched... and found that all my tabs were exactly as they were before, including my tab that had an ssh tunnel running. The only thing that changed was that iTerm got more secure. Impressive work, Nachman.” Another user says, “Thank you, Mozilla. =)”

Visit the Mozilla blog for more details about the vulnerability.

Apple’s MacOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Apple iPadOS now available for download with Slide Over and Split View, Home Screen updates, new capabilities to Apple Pencil and more

Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

The US, UK, and Australian governments call Facebook’s end-to-end encryption plan a hindrance to investigating crimes

An unpatched security issue in the Kubernetes API is vulnerable to a “billion laughs” attack


Firefox 71 released with new developer tools features

Savia Lobo
04 Dec 2019
5 min read
Yesterday, the Firefox team announced the latest version of the browser, Firefox 71. This version includes a plethora of new developer tools features, such as a WebSocket message inspector, a console multi-line editor mode, log on events, and network panel full-text search. Many of these features were first made available in the Firefox Developer Edition and later improved based on feedback. Other highlights in Firefox 71 include new web platform features such as CSS subgrid, column-span, Promise.allSettled, and the Media Session API.

What’s new in Firefox 71?

Improvements in speed and reliability

In Firefox 71, the team took some help from the JavaScript team to improve the caching of scripts during startup, which makes both Firefox and DevTools start faster. “One Console test got an astonishing 40% improvement while times across every panel were boosted by 8-15%”, the official blog post mentions.

Links to scripts, for example from the event handler tooltip in the Inspector or the stack traces in the Console, now reliably take you to the expected line, and debugging sources loaded through eval() also works as expected.

WebSocket Message Inspector

In Firefox 71, the Network panel has a new Messages tab where you can observe all messages sent and received through a WebSocket connection:

Source: Mozilla Hacks

Sent frames have a green up-arrow icon, while received frames have a red down-arrow icon. You can click on an individual frame to view its formatted data. Know more about the WebSocket Message Inspector in the official post.
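Any page that opens a WebSocket connection will populate the Messages tab. A minimal sketch to try it out (the endpoint URL is a placeholder for whatever server you test against):

    // Open a connection and send a frame; both directions show up
    // in the Network panel's Messages tab.
    const socket = new WebSocket("wss://example.com/echo"); // placeholder endpoint

    socket.addEventListener("open", () => {
      // Appears as a sent frame (green up-arrow).
      socket.send(JSON.stringify({ type: "greeting", body: "hello" }));
    });

    socket.addEventListener("message", (event) => {
      // Appears as a received frame (red down-arrow).
      console.log("received:", event.data);
    });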
Console multi-line editor mode

Another developer tools feature in Firefox 71 is the new multi-line console. It combines the benefits of an IDE for authoring code with the workflow of repeatedly executing code in the context of the page. If you open the regular console, you’ll see a new icon at the end of the prompt row:

Source: Mozilla Hacks

Clicking it switches the console to multi-line mode:

Source: Mozilla Hacks

Here you can enter multiple lines of code, pressing Enter after each one, and then run the code using Ctrl + Enter. You can also move between statements using the next and previous arrows. The editor includes the regular IDE features you’d expect, such as open/close bracket pair highlighting and automatic indentation.

Inline variable preview in Debugger

The JavaScript Debugger now provides inline variable previewing, a useful timesaver when stepping through your code. In previous versions, you had to scroll through the scope panel to find variable values or hover over a variable in the source pane. Now, when execution pauses, you can view relevant variable and property values directly in the source:

Source: Mozilla Hacks

Thanks to Babel-powered source mapping, previews also work for variables that have been renamed or minified by build steps. Make sure to enable this power-feature by checking Map in the Scopes pane.

Log on Event Listeners

Event listener breakpoints have received a few updates in Firefox 71. Log on events lets you explore which event handlers are being fired, and in which order, without the need for pausing and stepping. If we choose to log keyboard events, for example, the code no longer pauses as each event is fired:

Source: Mozilla Hacks

Instead, we can switch to the console, and whenever we press a key we are given a log of where the related events were fired.

CSS improvements

New CSS features in Firefox 71 include subgrid, multicol column-span, clip-path: path(), and aspect ratio mapping.

Subgrid

Enabled in 71 after being supported behind a pref for a while, the subgrid value of grid-template-columns and grid-template-rows allows you to create a nested grid inside a grid item that uses the main grid’s tracks. This means that grid items inside the subgrid line up with the parent’s grid tracks, making various layout techniques much easier.
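A minimal sketch of the idea (the class names are illustrative): the nested element spans the outer grid and reuses its column tracks via subgrid, so its children align with the outer columns.

    .outer {
      display: grid;
      grid-template-columns: repeat(3, 1fr);
      gap: 1rem;
    }

    /* This grid item spans all three columns and becomes a grid itself,
       reusing the parent's column tracks instead of defining its own. */
    .nested {
      grid-column: 1 / -1;
      display: grid;
      grid-template-columns: subgrid;
    }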
Multicol: column-span

CSS multicol support has moved forward in a big way with the inclusion of the column-span property in Firefox 71. It allows you to make an element span across all the columns in a multicol container (generated using column-width or column-count).

Clip-path: path()

The path() value of the clip-path property is now enabled by default. It allows you to create a custom mask shape using a path() function, as opposed to a predefined shape like a circle or ellipse.

Aspect ratio mapping

Finally, the height and width HTML attributes on the <img> element are now mapped to an internal aspect-ratio property. This lets the browser calculate the image’s aspect ratio early on and correct its display size before the image has loaded, if CSS has been applied that would otherwise cause problems with the display size.

There are also a few minor JavaScript changes in this release, including Promise.allSettled(), the Media Session API, and WebGL multiview.
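Unlike Promise.all(), which rejects as soon as any input promise rejects, Promise.allSettled() waits for every promise to settle and reports each outcome. A quick sketch (the endpoint paths are placeholders):

    const urls = ["/api/a", "/api/b", "/api/c"]; // placeholder endpoints

    Promise.allSettled(urls.map((url) => fetch(url))).then((results) => {
      for (const result of results) {
        if (result.status === "fulfilled") {
          console.log("ok:", result.value.status); // result.value is the Response
        } else {
          console.log("failed:", result.reason);   // result.reason is the rejection error
        }
      }
    });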
A lot of users are excited about this release and are looking forward to trying it out.

https://twitter.com/IshSookun/status/1201897724943036417

https://twitter.com/awhite/status/1202163413021077504

To know more about this release in detail, read the official Firefox 71 announcement.

The new WebSocket Inspector will be released in Firefox 71

Firefox 70 released with better security, CSS, and JavaScript improvements

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70