
Tech News

Datadog releases DDSketch, a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees

Sugandha Lahoti
03 Sep 2019
4 min read
Datadog, the monitoring and analytics platform, has released DDSketch (Distributed Distribution Sketch), a fully-mergeable, relative-error quantile sketching algorithm with formal guarantees. It was presented at VLDB 2019 in August.

DDSketch is a fully-mergeable, relative-error quantile sketching algorithm

Per Wikipedia, quantiles are cut points dividing the range of a probability distribution into continuous intervals with equal probabilities, or dividing the observations in a sample in the same way. Quantiles are useful measures because they are less susceptible than means to long-tailed distributions and outliers. Calculating exact quantiles can be expensive in both storage and network bandwidth, so most monitoring systems compress the data into sketches and compute approximate quantiles. However, work on quantile sketches has primarily focused on bounding the rank error of the sketch while using little memory. Unfortunately, for data sets with heavy tails, rank-error guarantees can return values with large relative errors. Quantile sketches should also be mergeable, meaning that several combined sketches must be as accurate as a single sketch of the same data. DDSketch addresses both problems: it comes with formal guarantees, is fully mergeable, and offers relative-error sketching. The sketch is extremely fast as well as accurate and is currently used in production at Datadog.

How DDSketch works

As mentioned earlier, DDSketch has relative-error guarantees, which means it computes quantiles with a controlled relative error. For example, for a DDSketch with a relative accuracy guarantee of 1% and an expected quantile value of 100, the computed quantile value is guaranteed to be between 99 and 101; if the expected quantile value is 1000, the computed value is guaranteed to be between 990 and 1010. DDSketch works by mapping floating-point input values to bins and counting the number of values in each bin.
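This bucket-and-count scheme can be sketched in a few lines of Python. The following is an illustrative toy based on the paper's description, not Datadog's implementation or API: bin boundaries grow geometrically by a factor gamma = (1 + alpha) / (1 - alpha), which is what yields the relative-error guarantee.

```python
import math

class ToyDDSketch:
    """Illustrative toy version of DDSketch's core idea (positive values only)."""

    def __init__(self, alpha=0.01):
        self.alpha = alpha
        self.gamma = (1 + alpha) / (1 - alpha)  # ratio between bin boundaries
        self.bins = {}                          # bin index -> count
        self.count = 0

    def add(self, value):
        # bin i covers (gamma^(i-1), gamma^i], so i = ceil(log_gamma(value))
        i = math.ceil(math.log(value, self.gamma))
        self.bins[i] = self.bins.get(i, 0) + 1
        self.count += 1

    def quantile(self, q):
        # Walk bins in order until the cumulative count passes rank q*(n-1);
        # the returned representative value is within alpha of any value
        # in that bin, which gives the relative-error guarantee.
        if self.count == 0:
            raise ValueError("empty sketch")
        rank = q * (self.count - 1)
        seen = 0
        for i in sorted(self.bins):
            seen += self.bins[i]
            if seen > rank:
                return 2 * self.gamma ** i / (1 + self.gamma)

    def merge(self, other):
        # Merging sketches with the same gamma is just adding bin counts,
        # which is why DDSketch is fully mergeable.
        for i, c in other.bins.items():
            self.bins[i] = self.bins.get(i, 0) + c
        self.count += other.count
```

Feeding in the values 1 through 1000 and asking for the median returns a value within 1% of 500, and merging two such sketches gives the same answer as one sketch over all the data.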
The mapping to bins is handled by an IndexMapping, while the underlying structure that keeps track of bin counts is a Store. The memory size of the sketch depends on the range covered by the input values: the larger the range, the more bins are needed. As a rough estimate, when working on durations using the standard mapping and store with a relative accuracy of 2%, about 2.2 kB (297 bins) are needed to cover values between 1 millisecond and 1 minute, and about 6.8 kB (867 bins) to cover values between 1 nanosecond and 1 day.

DDSketch implementations and comparisons

Datadog has provided implementations of DDSketch in Java, Go, and Python; the Java implementation provides multiple versions of DDSketch. The team has also compared DDSketch against the Java implementations of HDR Histogram, the GKArray version of the GK sketch, and the Moments sketch.

HDR Histogram

HDR Histogram is the only other relative-error sketch in the literature. It has extremely fast insertion times (requiring only low-level binary operations), as the bucket sizes are optimized for insertion speed instead of size, and it is fully mergeable (though merging is very slow). The main downside, the researchers say, is that it can only handle a bounded (though very large) range, which might not be suitable for certain data sets. It also has no published guarantees, though the researchers agree that much of the analysis presented for DDSketch can be made to apply, with a slightly worse guarantee, to a version of HDR Histogram that more closely resembles DDSketch.

Moments sketch

The Moments sketch takes an entirely different approach by estimating the moments of the underlying distribution. It has notably fast merging times and is fully mergeable.
The guaranteed accuracy, however, is only for the average rank error, unlike other sketches which have guarantees for the worst-case error (whether rank or relative).

GK sketch

Compared to GK, the relative accuracy of DDSketch is comparable for dense data sets, while for heavy-tailed data sets the improvement in accuracy can be measured in orders of magnitude. The rank error is also comparable to, if not better than, that of GK. Additionally, DDSketch is much faster in both insertion and merging. For more technical coverage, please read the research paper.

In other related news, in late August Datadog announced that it has filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission relating to a proposed initial public offering of its Class A common stock. The firm listed a $100 million raise in its prospectus, a provisional number that will change when the company sets a price range for its equity.

Other news in Tech

Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues

Golang 1.13 module mirror, index, and checksum database are now production-ready

Why is Perl 6 considering a name change?

Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team at Microsoft announced the XLOOKUP function for Excel, a successor to VLOOKUP, often the first lookup function Excel users learn. XLOOKUP gives Excel users an easier way of finding information in their spreadsheets. Currently, the function is only available to Office 365 testers; the company will make it more broadly available later. XLOOKUP can look vertically as well as horizontally, so it replaces HLOOKUP too. XLOOKUP needs just 3 arguments to perform the most common exact lookup, whereas VLOOKUP required 4. The official post reads, "Let's consider its signature in the simplest form: XLOOKUP(lookup_value, lookup_array, return_array). lookup_value: What you are looking for. lookup_array: Where to find it. return_array: What to return."

XLOOKUP overcomes the limitations of VLOOKUP

Exact match is now the default

VLOOKUP defaulted to an approximate match of what the user was looking for rather than an exact match. With XLOOKUP, users get the exact match.

Data can be drawn on both sides

VLOOKUP can only draw on data to the right-hand side of the reference column, so users have to rearrange their data to use the function. With XLOOKUP, users can draw on data both to the left and to the right, and it combines VLOOKUP and HLOOKUP into a single function.

Column insertions/deletions

VLOOKUP's 3rd argument is a column number, so if you insert or delete a column you have to increment or decrement the column number inside the VLOOKUP. With XLOOKUP, users can insert or delete columns freely.

Search from the back is now possible

With VLOOKUP, users need to reverse the order of the data to find the last occurrence of a value, but with XLOOKUP it is easy to search the data from the back.

References cells systematically

With VLOOKUP, the 2nd argument, table_array, needs to be stretched from the lookup column to the results column.
That references more cells than necessary, which results in unneeded calculations and reduces the performance of your spreadsheets. XLOOKUP references only the cells it needs, avoiding such complications. In an email to CNBC, Joe McDaid, Excel's senior program manager, wrote that XLOOKUP is "more powerful than INDEX/MATCH and more approachable than VLOOKUP." To know more about this news, check out the official post.

What's new in application development this week?

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Twilio launched Verified By Twilio, which will show customers who is calling them and why
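To make the behavioral differences above concrete, here is a rough Python model of XLOOKUP's default semantics (an illustration of the described behavior, not Excel code): exact match by default, a return column that is independent of the lookup column's position, and an optional search from the back.

```python
def xlookup(lookup_value, lookup_array, return_array, search_from_back=False):
    """Rough Python model of XLOOKUP's exact-match behavior (illustrative only).

    Unlike VLOOKUP: the match is exact by default, the return column may sit
    on either side of the lookup column (they are independent sequences
    here), and the search can run from the last row backwards.
    """
    indices = range(len(lookup_array))
    if search_from_back:
        indices = reversed(indices)
    for i in indices:
        if lookup_array[i] == lookup_value:
            return return_array[i]
    raise LookupError(f"{lookup_value!r} not found")
```

With `names = ["pen", "book", "pen"]` and `prices = [1.50, 7.00, 2.00]`, `xlookup("pen", names, prices)` returns the first match and `xlookup("pen", names, prices, search_from_back=True)` returns the last, mirroring XLOOKUP's search-from-the-back option.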

Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is reportedly working on a new messaging app called Threads that would let users share their photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. This means users can selectively share content with their friends without revealing to others the list of close friends with whom the content is shared. The app currently does not display real-time location, but it might notify friends that a user is "on the move," as per the report by The Verge.

How does Threads work?

As per the report by The Verge, the Threads app appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the 'Close friends' feature for Instagram Stories, where users can create a list of close friends and make their stories visible only to them. With Threads, users who have opted in to 'automatic sharing' of updates will be able to regularly show their status updates and real-time information in the main feed to their close friends. The auto-sharing of statuses will be done using the phone's sensors. Messages from friends appear in a central feed, with a green dot indicating which friends are currently active/online. If a friend has recently posted a story on Instagram, you will be able to see it from the Threads app as well. It also features a camera, which can be used to capture photos and videos and send them to close friends. While Threads is currently being tested internally at Facebook, there is no clarity about its launch.

Direct's revamped version or Snapchat's potential competitor?

With Threads, if Instagram manages to create a niche around 'close friends', it might shift a significant proportion of Snapchat's users to its platform. In 2017, the team had experimented with Direct, a standalone camera messaging app which had many filters similar to Snapchat's.
But in May this year, the company announced that it will no longer support Direct. Threads looks like Facebook's second attempt to compete with Snapchat. https://twitter.com/MattNavarra/status/1128875881462677504

The Threads app's focus on strengthening 'close friends' relationships might encourage more sharing of personal data, including even location and battery life. This begs the question: is our content really safe? Just three months ago, Instagram was in the news for exposing the personal data of millions of influencers online. The exposed data included contact information of Instagram influencers, brands, and celebrities. https://twitter.com/hak1mlukha/status/1130532898359185409

According to Instagram's current Terms of Use, it does not take ownership of the information shared on it. But here's the catch: it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content, as per the user's privacy settings. In essence, the platform has a right to use the content we post.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset

Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests

Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules

NetNewsWire 5.0 releases with Dark mode, smart feed article list, three-pane design and much more!

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team behind NetNewsWire released NetNewsWire 5.0, a free and open-source RSS reader for Mac. NetNewsWire lets users read articles from their favorite blogs and news sites and keeps track of what they have already read. Users need not switch from page to page to find new articles; instead, NetNewsWire provides them with a list of new articles. NetNewsWire started in 2002 as Brent Simmons' project, which was sold in 2005 and again in 2011. Simmons finally re-acquired NetNewsWire from Black Pixel last year and relaunched it as version 5 this year. When Simmons restarted the project it was named "Evergreen", but it became NetNewsWire again in 2018. This release of NetNewsWire 5.0 includes JSON Feed support, syncing via Feedbin, Dark Mode, a "Today" smart feed, starred articles, and more.

Key features included in NetNewsWire 5.0

Three-pane design

NetNewsWire 5.0 features a common three-pane design: the user's feeds and folders are in the leftmost column, the article list for each feed sits in the middle column, and the selected article is shown in the right column.

Dark mode

NetNewsWire 5 comes with a light and a dark mode, ensuring it fits well with macOS's dark mode support.

New buttons

The buttons follow a design similar to the rest of the Mac. This version features buttons for creating a new folder, sending an article to Safari, or marking an article as unread.

Smart feed article list

The smart feed article list shows the article title, the feed's icon, a short description from the article, the time the article was published, and the publisher's name. The "Today" smart feed lists articles published in the last 24 hours, rather than articles published since midnight on the current date.
Unread articles

Unread articles in a feed are marked with a bright blue dot, and users can double-click an article in the article list to open it directly in Safari.

Keyboard shortcuts

Users can mark all articles in a given feed as read by pressing CMD + K, jump between their smart feeds with CMD + 1/2/3, jump to the browser by hitting CMD + right arrow, and page through an article by hitting the spacebar.

What is expected in the future?

Support for more services: NetNewsWire supports only its own local RSS service and Feedbin, and currently the local RSS service doesn't sync to any other service. Support for more services is expected in the future.

Read-it-later support: Apps like Reeder and Fiery Feeds (on iOS) have lately been working on their own read-it-later features, and NetNewsWire 5 doesn't support that kind of feature yet.

iOS version: The team is currently working on an iOS version of NetNewsWire.

Users seem excited about this release overall. A user commented on Hacker News, "This looks very good, I'm just waiting for Feedly compatibility." To know more about this news, check out the official post.

What's new in application development this week?

Twilio launched Verified By Twilio, which will show customers who is calling them and why

Emacs 26.3 comes with a GPG key for GNU ELPA package signature checks and more!

Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks

Reddit experienced an outage due to an issue in its hosting provider, AWS

Vincy Davis
02 Sep 2019
2 min read
On Saturday, the Reddit status account notified users on Twitter that the site was suffering an outage due to an "elevated level of errors," starting at 06:09 PDT. During this time, users across the U.S. East Coast and Northern Europe could not load the Reddit page. BleepingComputer reported that users from Europe (Finland, Spain, Italy, Portugal), Canada, the Philippines, and Australia had trouble loading the website. Downdetector.com registered a peak of almost 15,000 Reddit down reports on the day.

Twitter was flooded with user messages questioning the status of Reddit. https://twitter.com/Majesti20934027/status/1167788217216765952 https://twitter.com/whoisanthracite/status/1167786716121509888 https://twitter.com/Blade67470/status/1167788111998390272

At 07:53 PDT, Reddit tweeted that it had identified the issue behind the outage. https://twitter.com/redditstatus/status/1167812712262295552 Meanwhile, users trying to open any page on Reddit received messages saying "Sorry, we couldn't load posts for this page." or "Sorry, for some reason Reddit can't be reached."

Read also: Reddit's 2018 Transparency report includes copyright removals, restorations, and more!

The outage was finally resolved at 12:36 PDT on the same day; Reddit tweeted, "Resolved: This incident has been resolved." No further details have been posted by Reddit or AWS. The Reddit status page reported that the Amazon AWS issue affected seven Reddit.com components: Desktop Web, Mobile Web, Native Mobile Apps, Vote Processing, Comment Processing, Spam Processing, and Modmail. A user commented on BleepingComputer's post, "Looks like Elastic Block Store (EBS) and Relational Database Service (RDS) (and Workspaces, whatever that is) took a hit for US-EAST-1 at that time. From the status updates, maybe due to a big hardware failure. Perhaps Reddit has realized there is value in keeping a redundant stack running in a western region.
They could have instantly mitigated the outage by flipping traffic with Route 53 to the healthy stack in this case."

Other recent outages

Google services, ProtonMail, and ProtonVPN suffered an outage yesterday

Stack Overflow suffered an outage yesterday

EU's satellite navigation system, Galileo, suffers major outage; nears 100 hours of downtime

Golang 1.13 module mirror, index, and Checksum database are now production-ready

Savia Lobo
02 Sep 2019
4 min read
Last week, the Golang team announced that the Go module mirror, index, and checksum database are now production-ready, adding reliability and security to the Go ecosystem. For Go 1.13 module users, the go command will use the module mirror and checksum database by default.

New production-ready services for Go 1.13 modules

Module mirror

A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system. This allows the mirror to continue serving source code that is no longer available from its original location, speeding up downloads and protecting users from disappearing dependencies. The module mirror is served at proxy.golang.org, which the go command uses by default for module users as of Go 1.13. Users still running an earlier version of the go command can use the service by setting GOPROXY=https://proxy.golang.org in their local environment.

Read also: The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Module index

The module index is served by index.golang.org. It is a public feed of new module versions that become available through proxy.golang.org. The module index is useful for tool developers who want to keep their own cache of what's available in proxy.golang.org, or to keep up to date on some of the newest modules Go developers use.

Read also: Implementing Garbage collection algorithms in Golang [Tutorial]

Checksum database

Modules introduced the go.sum file, a list of SHA-256 hashes of the source code and go.mod files of each dependency as it was first downloaded. The go command can use these hashes to detect misbehavior by an origin server or proxy that serves different code for the same version. However, the go.sum file has a limitation: it works entirely on trust on first use. When a user adds a version of a never-before-seen dependency, the go command fetches the code and adds the corresponding lines to the go.sum file.
The problem is that those go.sum lines aren't checked against anyone else's, so they might differ from the go.sum lines that the go command just generated for another user. The checksum database ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against the global database to make sure the hashes match, ensuring that everyone is using the same code for a given version. The checksum database is served by sum.golang.org and is built on a transparent log (or "Merkle tree") of hashes backed by Trillian, a transparent, highly scalable, and cryptographically verifiable data store. The main advantage of a Merkle tree is that it is tamper-proof: its properties don't allow misbehavior to go undetected, making it more trustworthy. The go command checks inclusion proofs (that a specific record exists in the log) and consistency proofs (that the tree hasn't been tampered with) before adding new go.sum lines to a module's go.sum file. This allows the go command to safely use an otherwise untrusted proxy: because there is an auditable security layer sitting on top of it, a proxy or origin server can't intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught. "Even the author of a module can't move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected," the blog mentions. Developers are excited about the launch of the module mirror and checksum database and look forward to trying them out. https://twitter.com/hasdid/status/1167795923944124416 https://twitter.com/jedisct1/status/1167183027283353601 To know more about this news in detail, read the official blog post.

Other news in Programming

Why is Perl 6 considering a name change?
The Julia team shares its finalized release process with the community TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers and more
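To see why a Merkle-tree log like the checksum database is tamper-evident, consider this toy Python construction (a simplified illustration of the general technique, not the actual sum.golang.org record format): changing any single go.sum-style record changes the root hash, so a mirror cannot silently swap the code behind a published version.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records):
    """Root hash of a Merkle tree built over a non-empty list of byte records."""
    # Hash leaves and interior nodes with distinct prefixes (domain separation),
    # so a leaf can never be confused with an interior node.
    level = [sha256(b"leaf:" + r) for r in records]
    while len(level) > 1:
        if len(level) % 2:  # odd number of nodes: carry the last one upward
            level.append(level[-1])
        level = [sha256(b"node:" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Auditors who agree on the root hash agree on every record beneath it; inclusion and consistency proofs let a client check a single record against the root without downloading the whole log.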

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Fatema Patrawala
02 Sep 2019
5 min read
Last Friday, the Kubernetes team announced the release of etcd 3.4. etcd 3.4 focuses on stability, performance, and ease of operation, with features like pre-vote and non-voting members, and improvements to the storage backend and client balancer.

Key features and improvements in etcd v3.4

Better backend storage

etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. "read-only range request ... took too long to execute"). Previously, the storage backend's commit operation on pending writes blocked incoming read transactions even when there was no pending write; now, the commit does not block reads, which improves long-running read transaction performance. The team has also made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads; with this change, write throughput is increased by 70% and p99 write latency is reduced by 90% in the presence of long-running reads. The team also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved Raft voting process

The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol: data is replicated from the leader to followers, a follower forwards proposals to the leader, and the leader decides what to commit. The leader persists and replicates an entry once it has been agreed on by a quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress.
In its simplest form, a Raft leader steps down to follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts campaigning. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives such a higher-term message, it reverts to follower. This becomes more disruptive when there is a network partition: whenever the partitioned node regains connectivity, it can trigger a leader re-election. To address this issue, etcd's Raft introduces a new node state, pre-candidate, with the pre-vote feature. A pre-candidate first asks other servers whether it is up to date enough to get votes; only if it can get votes from the majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains connectivity with a quorum of its peers.

Introducing a new Raft non-voting member, the "Learner"

The challenge with membership reconfiguration is that it often leads to quorum-size changes, which are prone to cluster unavailability. Even when it does not alter the quorum, a cluster undergoing membership change is more likely to experience other underlying problems. To address these failure modes, etcd introduced a new node state, "Learner", which joins the cluster as a non-voting member until it catches up with the leader's log. The learner still receives all updates from the leader, but it does not count towards the quorum, which the leader uses to evaluate peer activeness. The learner serves only as a standby node until promoted. This relaxed quorum requirement provides better availability during membership reconfiguration and improves operational safety.
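The pre-vote rule described above boils down to a single predicate a voter evaluates before granting its pre-vote. The following is a simplified Python illustration of the idea, not etcd's actual Go code:

```python
from dataclasses import dataclass

@dataclass
class VoterState:
    """A voter's view of its own state (simplified)."""
    term: int
    last_log_index: int
    last_log_term: int
    heard_from_leader_recently: bool  # heartbeat within the election timeout

def grant_pre_vote(voter, candidate_term, cand_last_index, cand_last_term):
    """Grant a pre-vote only if no live leader is known and the candidate's
    log is at least as up to date as the voter's. The candidate has NOT
    bumped its term yet, so a flaky rejoining member that fails this check
    cannot force a healthy leader to step down."""
    if voter.heard_from_leader_recently:
        return False  # a live leader exists; ignore the disruptor
    if candidate_term < voter.term:
        return False  # stale candidate
    # Standard Raft "log up-to-date" comparison: higher last term wins,
    # ties are broken by the longer log.
    return (cand_last_term > voter.last_log_term or
            (cand_last_term == voter.last_log_term and
             cand_last_index >= voter.last_log_index))
```

Only after gathering pre-votes from a majority does the pre-candidate increment its term and run a real election, which is the extra phase that keeps partitioned members from disrupting a stable leader.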
Improvements to client balancer failover logic

etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster "appears" to work normally by presenting one logical cluster view across multiple servers. But this does not guarantee the liveness of the client, so the etcd client implements a set of intricate protocols to guarantee its correctness and high availability under faulty conditions. Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior, and a majority of development and debugging effort was devoted to fixing those breakages. As a result, the implementation became overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply round-robins to the next endpoint whenever it gets disconnected from the current one. To know more about this release, check out the changelog page on GitHub.

What's new in cloud and networking this week?

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!

The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models

Pivotal open sources kpack, a Kubernetes-native image build service

Cryptographic key of Facebook’s Free Basics app has been compromised

Fatema Patrawala
02 Sep 2019
5 min read
Last week, APK Mirror and Android Police owner Artem Russakovskii reported that a cryptographic key used by Facebook developers to digitally sign its Free Basics by Facebook app has been compromised, and that third-party apps are reusing the key. https://twitter.com/ArtemR/status/1159867541537169409

Russakovskii discovered the issue and reported it to Facebook earlier in August. Facebook then pulled the original app listing from the Play Store and replaced it with a new app signed with a new cryptographic key. Since then, the company has not publicly divulged the nature of the compromise, nor given its users any precise reason for the re-released app, placing them at risk if they still have the old version installed. Before the listing was removed, the original Free Basics by Facebook app had over five million downloads on the Play Store. Websites like APK Mirror host Android apps for download for several reasons: to circumvent censorship, to let users download updates before they are widely rolled out, to mitigate geographic restrictions, and to provide a historical archive for comparison and for rolling back updates, among others. Russakovskii writes, "In the last month, we've spotted third-party apps using a debug signing cryptographic key which matched the key used by Facebook for its Free Basics Android app." The APK Mirror team notified Facebook about the leaked key, and the company verified it, pledging to address the issue in a new version of the app. The company says it has prompted users to upgrade to the newer version of the app, but it did not provide any specific reason for the update.

Potential dangers of a compromised cryptographic key

According to Android Police, the security of Android app updates hinges on the secrecy of a given app's signing cryptographic key. It is how app updates are verified as authentic, and if it falls into the wrong hands, false updates could be distributed containing nefarious changes.
As a result, developers usually guard signing keys quite closely. That security is entirely dependent on developers keeping the app signing key secret: if it is publicly available, anyone can sign an app that claims to be an update, and consumers' phones will install it right over the top of the real app. Losing or leaking a signing key is therefore a big problem. If a signing key falls into the wrong hands, third parties can distribute maliciously modified versions of the app as updates outside the Play Store, and potentially trick sites like APK Mirror that rely on signature verification: someone could upload a fake app that looks like it was made by Facebook to a forum, or trick less wary APK distribution sites into publishing it on the strength of the verified app signature. To make things easier for developers, Google has started a service that lets developers store app signing keys on its servers instead. With "Google Play App Signing," as it is called, app keys can't be lost, and compromised cryptographic keys can be "upgraded" to new keys. Additionally, Android 9 Pie supports a new "key rotation" feature which securely verifies a lineage of signatures in case they need to change.

Facebook's lax approach to addressing the security issue

According to APK Mirror, the old app tells users to move to the new version, but no specific statement has been provided to customers. A Facebook spokesperson told APK Mirror that users were simply notified in the old app of the requirement to upgrade, and the APK Mirror team is unable to check the old app or the specific message sent to customers, as the Free Basics app doesn't appear to work outside specific markets.
Additionally, the new app listing on the Play Store makes no mention that the security of the old app has been compromised by the leaked signing cryptographic key, and the APK Mirror team did not find any disclosure about how this leak has impacted user security anywhere on Facebook's site or the internet.org site.

When asked for a statement, a Facebook spokesperson provided the following: “We were notified of a potential security issue that could have tricked people into installing a malicious update to their Free Basics app for Android if they chose to use untrusted sources. We have seen no evidence of abuse and have fixed the issue in the latest release of the app.”

What's new in security this week?

Retadup, a malicious worm infecting 850k Windows machines, self-destructs in a joint effort by Avast and the French police
A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes
GitHub now supports two-factor authentication with security keys using the WebAuthn API

Why is Perl 6 considering a name change?

Bhagyashree R
30 Aug 2019
4 min read
There have been several discussions around renaming Perl 6. Earlier this month, another such discussion started when Elizabeth Mattijsen, one of the Perl 6 core developers, submitted the issue titled "Perl" in the name "Perl 6" is confusing and irritating. She suggested changing its name to Camelia, which is also the name of Perl's mascot.

In the year 2000, the Perl team basically decided to break everything and came up with a whole new set of design principles. Their goal was to remove the "historical warts" from the language, including the confusion surrounding sigil usage for containers, the ambiguity between the select functions, and more. Based on these principles, Perl was redesigned into Perl 6. For Perl 6, Wall and his team envisioned making it a better object-oriented as well as a better functional programming language.

There are many differences between Perl 5 and Perl 6. For instance, in Perl 5 you need to choose things like a concurrency system and processing utilities, but in Perl 6 these features are part of the language itself. In an interview with the I Programmer website, when asked how the two languages differ, Moritz Lenz, a Perl and Python developer, said, “They are distinct languages from the same family of languages. On the surface, they look quite similar and they are designed using the same principles.”

Why developers want to rename Perl 6

Because of the aforementioned differences, many developers find the "Perl 6" name very confusing. The name does not convey the fact that it is a brand new language. Developers may instead think that it is the next version of the Perl language. Others may believe that it is faster, more stable, or better than the earlier Perl language. Also, many search engines will sometimes show results for Perl 5 instead of Perl 6.

“Having two programming languages that are sufficiently different to not be source compatible, but only differ in what many perceive to be a version number, is hurting the image of both Perl 5 and Perl 6 in the world. Since the word "Perl" is still perceived as "Perl 5" in the world, it only seems fair that "Perl 6" changes its name,” Mattijsen wrote in the submitted issue.

To avoid this confusion, Mattijsen suggests an alternative name: Camelia. Many developers agreed with her suggestion. A developer commented on the issue, “The choice of Camelia is simple: search for camelia and language already takes us to Perl 6 pages. We can also keep the logo. And it's 7 characters long, 6-ish. So while ofun and all the others have their merits, I prefer Camelia.”

In addition to Camelia, Raku is also a strong contender for the new name for Perl 6; it was suggested by Larry Wall, the creator of Perl. A developer supporting Raku said, “In particular, I think we need to discuss whether "Raku", the alternative name Larry proposed, is a viable possibility. It is substantially shorter than "Camelia" (and hits the 4-character sweet spot), it's slightly more searchable, has pleasant associations of "comfort" or "ease" in its original Japanese, in which language it even looks a little like our butterfly mascot.”

Some developers were not convinced by the idea of renaming the language and think it rather adds to the confusion. A developer added, “I don't see how Perl 5 is going to benefit from this. We're freeing the name, yes. They're free to reuse the versions now in however way they like, yes. Are they going to name the successor to 5.30 “Perl 6”? Of course not – that would cause more confusion, make them look stupid and make whatever spiritual successor of Perl 6 we could think of look obsolete. Would they go up to Perl 7 with the next major change? Perhaps, but they can do that anyway: they're another grown-up language that can make its own decisions :) I'm not convinced it would do anything to improve Perl 6's image either. Being Perl 6 is “standing on the shoulders of giants”. Perl is a strong brand. Many people have left it because of the version confusion, yes. But I don't imagine these people coming back to check out some new Camelia language that came out. They might, however, decide to give Perl 6 a shot if they start seeing some news about it – “oh, I was using Perl 15 years ago... is this still a thing? Is that new famous version finally being out and useful? I should check it out!””

You can read the submitted issue and the discussion on GitHub for more details.

What's new in programming this week

Introducing Nushell: A Rust-based shell
React.js: why you should learn the front end JavaScript library and how to get started
Julia v1.2 releases with support for argument splatting, Unicode 12, new star unary operator, and more

Twilio launched Verified By Twilio, which will show customers who is calling them and why

Amrata Joshi
30 Aug 2019
3 min read
This month at the Twilio SIGNAL 2019 conference, Twilio announced Verified By Twilio, which helps customers know caller details. Verified By Twilio will also help them know which calls are real and which are fake or spam. For this, the company is partnering with major call identification apps like CallApp, Hiya, Robokiller, and YouMail to reach more than 200 million consumers. Verified By Twilio is expected to be fully available by early 2020.

Verified By Twilio aims to show genuine callers

Due to privacy concerns, customers usually tend to reject a number of business calls daily, be they legitimate or illegitimate. As per Hiya's State of the Phone Call report, Americans answer just a little more than 50% of the calls they receive on their cell phones. As per a recent Consumer Reports survey, around 70% of consumers do not answer a call if the number flashes up as anonymous.

But if the customer knows in advance who is calling and why, there is a possibility that such business calls will not go unanswered. Verified By Twilio aims to let users know why they are getting a call even before they press the answer button. It also aims to verify, for each call, the business or organization that is calling.

The official press release reads, “For example, if an airline company is trying to contact a customer about a cancelled flight, as the call comes in, the consumer will see the name of the airline with a short note indicating why they are calling.
With that information, that person can make the decision about stepping out of a meeting or putting another call on hold to answer this critically important call.”

Jeff Lawson, co-founder and chief executive officer of Twilio, said in a statement, “At Twilio, we want to help consumers take back their phones, so that when their phone rings, they know it's a trusted, wanted call.” Lawson further added, “A lot of work is being done in the industry to stop unwanted calls and phone scams, and we want to ensure consumers continue to receive the wanted calls. Verified By Twilio is aimed at providing consumers with the context to know who's calling so they answer the important and wanted calls happening in their lives, such as from doctors, schools, and banks.”

How Twilio plans to verify businesses

Twilio is creating a repository for hosting verified information about businesses and organizations, as well as their associated brands, which will populate the screen as soon as a call comes in. With the programmability of the Twilio platform, businesses and organizations will be able to dynamically assign a purpose to each call to give better context. Twilio plans to charge nothing to businesses and organizations that want to join the private beta.

With Verified By Twilio, businesses and organizations might improve their overall engagement with their customers, as the chances of their calls getting answered would be higher, and in this way they would establish trust in traditional communications.

To know more about this news, check out the official post.

What's new in application development this week?

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

Emacs 26.3 comes with GPG key for GNU ELPA package signature check and more!

Amrata Joshi
30 Aug 2019
2 min read
Last week, the team behind Emacs, the customizable libre text editor, announced the first release candidate of Emacs 26.3. On Wednesday, the team announced the maintenance release itself, Emacs 26.3.

Key features in Emacs 26.3

New GPG key for GNU ELPA

Emacs 26.3 features a new GPG (GNU Privacy Guard) key for GNU ELPA package signature checking (GNU ELPA is the default package repository for GNU Emacs).

New option help-enable-completion-auto-load

This release also features a new option, 'help-enable-completion-auto-load', which allows users to disable the feature introduced in Emacs 26.1 that loads files during the completion of 'C-h f' and 'C-h v'.

Support for the new Japanese era name

This release now supports the new Japanese era name.

Some users expected more changes in this release; a user commented on Hacker News, "So ... only two relevant changes this time?" Others think there are editors that now suit them better than Emacs. Another user commented, “I don't want to start a flamewar, but I moved most things I was doing in Emacs to Textadept a while back because I found Textadept more convenient. That's not to say TA does everything you can do in Emacs, but it replaced all of the scripting I was doing with Emacs. You have the full power of Lua inside TA. Emacs always has a lag when I start it up, whereas TA is instant. I slowly built up functionality inside TA to the point that I realized I could replace everything I was doing in Emacs.”

To know more about this news, check out the mailing thread.

What's new in application development this week?

Google Chrome 76 now supports native lazy-loading
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
#Reactgate forces React leaders to confront community's toxic culture head on

Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters

Fatema Patrawala
30 Aug 2019
4 min read
On Tuesday, Adam Gaier, a student researcher, and David Ha, a staff research scientist on the Google research team, published a paper on Weight Agnostic Neural Networks (WANNs) that can perform tasks even without learning weight parameters.

In “Weight Agnostic Neural Networks”, the researchers present a first step towards searching for network architectures that can already perform tasks, even when they use a random shared weight. The team writes, “Our motivation in this work is to question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. By exploring such neural network architectures, we present agents that can already perform well in their environment without the need to learn weight parameters.”

The team looked at analogies of nature vs. nurture. They gave the example of certain precocial species in biology, which possess anti-predator behaviors from the moment of birth and can perform complex motor and sensory tasks without learning. Hence, the researchers constructed network architectures that can perform well without training. The team has also open-sourced the code to reproduce the WANN experiments for the broader research community.

Researchers explored a range of WANNs using a topology search algorithm

The team started with a population of minimal neural network architecture candidates, which have very few connections, and used a well-established topology search algorithm to evolve the architectures by adding single connections and single nodes one by one. Unlike traditional neural architecture search methods, where all of the weight parameters of new architectures need to be trained using a learning algorithm, the team took a simpler approach: at each iteration, all candidate architectures are first assigned a single shared weight value, and are then optimized to perform well over a wide range of shared weight values.
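The shared-weight evaluation described above can be sketched with a deliberately tiny example. This is not the paper's actual code: the fixed topology, toy regression task, and weight grid below are invented purely for illustration, but the core idea — scoring one architecture by its average error over a range of shared weights rather than over trained weights — is the same:

```python
import math
import random

def forward(x, w):
    # A tiny fixed topology: 2 inputs -> 1 hidden tanh node -> 1 tanh output.
    # Every connection uses the SAME shared weight w (the WANN idea).
    h = math.tanh(w * x[0] + w * x[1])
    return math.tanh(w * h)

def task_error(w, samples):
    # Hypothetical toy task: approximate tanh(x0 + x1). Mean squared error.
    err = 0.0
    for x0, x1 in samples:
        target = math.tanh(x0 + x1)
        err += (forward((x0, x1), w) - target) ** 2
    return err / len(samples)

random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]

# Score the architecture over a RANGE of shared weights, not one trained weight:
shared_weights = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
mean_error = sum(task_error(w, samples) for w in shared_weights) / len(shared_weights)
print(f"mean error across shared weights: {mean_error:.3f}")
```

An architecture search would then prefer candidate topologies whose mean error across the whole weight grid is low, so that good behavior is encoded in the wiring rather than in any particular weight setting.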
In addition to exploring a range of weight agnostic neural networks, the researchers also looked for network architectures that are only as complex as they need to be. They accomplished this by optimizing for both the performance of the networks and their complexity simultaneously, using techniques drawn from multi-objective optimization.

Source: Google AI blog. Overview of Weight Agnostic Neural Network Search and corresponding operators for searching the space of network topologies.

Training the WANN architectures

The researchers believe that, unlike traditional networks, WANNs can be easily trained by finding the best single shared weight parameter that maximizes performance. They demonstrated this with a swing-up cartpole task using constant weights.

Source: Google AI blog. A WANN performing a cartpole swing-up task at various different weight parameters, and with fine-tuned weights.

As the figure shows, WANNs can perform tasks using a range of shared weight parameters. However, the performance is not comparable to a network that learns weights for each individual connection, as is normally done in network training. To improve performance, the researchers took the WANN architecture and the best shared weight, then fine-tuned the weights of each individual connection using a reinforcement learning algorithm, much as a normal neural network is trained.

Creating an ensemble of multiple distinct models from the WANN architecture

The researchers also found that by using copies of the same WANN architecture, where each copy is assigned a distinct weight value, they could create an ensemble of multiple distinct models for the same task. According to them, this ensemble generally achieves better performance than a single model. They illustrated this with an MNIST classifier.

Source: Google AI blog

The team concludes that a conventional network with random initialization will achieve ~10% accuracy on MNIST, while this particular network architecture, using random weights, achieves an accuracy of over 80% on MNIST. When an ensemble of WANNs is used, the accuracy increases to over 90%.

The researchers hope that this work will serve as a stepping stone to discovering novel fundamental neural network components, such as the convolutional networks of deep learning.

To know more about this research, check out the official Google AI blog.

What's new in machine learning this week?

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
NVIDIA's latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale
Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe

Is Apple's ‘Independent Repair Provider Program’ a bid to avoid the ‘Right To Repair’ bill?

Vincy Davis
30 Aug 2019
4 min read
Yesterday, Apple announced a new 'Independent Repair Provider Program' which will offer customers additional options for common out-of-warranty iPhone repairs. It will also provide independent repair businesses with genuine Apple parts, tools, training, repair manuals, and diagnostics. Customers can now approach these independent repair shops to fix their devices instead of being restricted to Apple Authorized Service Providers (AASPs). The program is only available in the U.S. for now, but will soon expand to other countries.

To qualify for the Independent Repair Provider Program, an independent repair business needs to have at least one Apple-certified technician to perform the iPhone repairs. In the press release, Apple states that only “qualifying repair businesses will receive Apple-genuine parts, tools, training, repair manuals and diagnostics at the same cost as AASPs.” Apple's certification program is simple, and an indie business can enroll in it free of cost.

Apple's chief operating officer, Jeff Williams, says, “When a repair is needed, a customer should have confidence that the repair is done right. We believe the safest and most reliable repair is one handled by a trained technician using genuine parts that have been properly engineered and rigorously tested.”

In the past year, Apple has run a “successful pilot” with 20 independent repair businesses which supplied genuine parts to customers in North America, Europe, and Asia.

Read also: Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020

Is this Apple's way to avoid the 'Right To Repair' bill?

Apple's sudden shift to a new trajectory comes as a surprise after reports that Apple was trying hard to kill the 'Right To Repair' bill in California. If passed, the bill would give customers the right to fix or modify their devices without any effect on their warranty. Apple representatives tried to protest the bill by stoking fears of battery explosions for consumers who attempt to repair their iPhones. Currently, the bill has been pushed to 2020, allegedly due to successful lobbying of Californian lawmakers by Apple.

Many people believe that Apple is going to use this 'Independent Repair Provider Program' to support its side of the right-to-repair debate. A user on Hacker News says, “Pretty straightforward attempt to stave off right-to-repair laws... and coming after years of attempts to destroy independent repair businesses. Very hard to see this as a good faith effort by Apple.”

Another user comments, “I feel like this is an end-run attempt to avoid right-to-repair legislation. While I hope for the best from this program, it seems to little to late and in direct opposition of prior arguments they've made concerning third-party repairs and parts distribution.”

Apple's iPhone sales have declined in the past two fiscal quarters. Kyle Wiens, chief executive of repair guide company iFixit and a longtime advocate for right-to-repair laws, said, “This is Apple realizing that the market for repair is larger than Apple could ever handle themselves, and providing independent technicians with genuine parts is a great step.” But, he said, “what this clearly shows is that if right-to-repair legislation passed tomorrow, Apple could instantly comply.” Another critical point not highlighted by Apple in the press release, he says, is that the program does not give customers the opportunity to repair their own phones. Apple also reserves the right to reject any application, without comment.

https://twitter.com/kwiens/status/1167076331953090561

Others believe this is a step in the right direction.

https://twitter.com/LaurenGoode/status/1167156636789592064

A Redditor comments, “Sounds like Apple is choosing a great halfway point between letting anyone access parts and only letting established shops/businesses from offering repairs. Hopefully we'll see more independent repair stores offering repairs officially if as it sounds it's free to apply and get certified!”

Another reason Apple probably felt the need to change its longstanding policy is the viable and easy-to-repair alternatives cropping up in the smartphone market. For instance, this week Fairphone, a Dutch company, launched a sustainable smartphone called 'Fairphone 3', which aims to minimize electronic waste. Fairphone 3 contains 7 modules, which have been designed to support easy repairs. It also boasts the tech specifications of a modern 2019 smartphone.

For more details on the Independent Repair Provider Program, head over to the official press release by Apple. Interested businesses can also check out the Independent Repair Provider Program official website.

Other interesting news in Tech

The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
“Rust is the future of systems programming, C is the new Assembly”: Intel principal engineer, Josh Triplett
#Reactgate forces React leaders to confront community's toxic culture head on

Retadup, a malicious worm infecting 850k Windows machines, self-destructs in a joint effort by Avast and the French police

Savia Lobo
30 Aug 2019
4 min read
A malicious worm, Retadup, affected 850k Windows machines throughout Latin America. The objective of the Retadup worm is to obtain persistence on victims' computers, to spread itself far and wide, and to install additional malware payloads on infected machines.

Source: Avast.io

The Avast antivirus team started closely monitoring the activities of the Retadup worm in March 2019. Jan Vojtěšek, a malware analyst at Avast who led the research into Retadup, said, "The general functionality of this payload is pretty much what we have come to expect from common malicious stealthy miners." “In the vast majority of cases, the installed payload is a piece of malware mining cryptocurrency on the malware authors' behalf. However, in some cases, we have also observed Retadup distributing the Stop ransomware and the Arkei password stealer,” Vojtěšek writes.

A few days ago, Vojtěšek shared a report informing users that Avast researchers, the French National Gendarmerie, and the FBI have together disinfected Retadup by making the threat self-destruct. When the Avast team analyzed the Retadup worm closely, they identified a design flaw in the Command-and-Control (C&C) protocol that “would have allowed us to remove the malware from its victims' computers had we taken over its C&C server,” Vojtěšek writes.

As Retadup's C&C infrastructure was mostly located in France, Vojtěšek's team decided to contact the Cybercrime Fighting Center (C3N) of the French National Gendarmerie (one of the two national police forces of France) at the end of March. The team shared their findings with the Gendarmerie, proposing a disinfection scenario that involved taking over a C&C server and abusing the C&C design flaw in order to neutralize Retadup. In July 2019, the Gendarmerie received the green light to legally proceed with the disinfection. To do this, they replaced the malicious C&C server with a prepared disinfection server that made connected instances of Retadup self-destruct.
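The protocol weakness described above can be pictured with a deliberately simplified sketch. This is not Retadup's actual protocol; the command names and structure below are invented for illustration. The essential flaw is that the bot executes whatever command the C&C server returns, with no authentication of the command's origin, so whoever controls the server controls the bots — including a defender serving a self-destruct command:

```python
# Hypothetical sketch of an unauthenticated C&C protocol: the bot blindly
# trusts whatever command the server returns (no signature, no origin check).
def bot_poll(fetch_command):
    cmd = fetch_command()
    if cmd.get("action") == "mine":
        return "mining cryptocurrency for the operators"
    if cmd.get("action") == "selfdestruct":
        return "uninstalled itself"  # what the disinfection server triggered
    return "idle"

def malicious_server():
    return {"action": "mine"}

def disinfection_server():
    # After the takeover, the very same protocol neutralizes the bot.
    return {"action": "selfdestruct"}

print(bot_poll(malicious_server))     # mining cryptocurrency for the operators
print(bot_poll(disinfection_server))  # uninstalled itself
```

Had the commands been cryptographically signed by the malware authors, taking over the server would not have been enough, which is why this design flaw was the lever for the cleanup.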
“In the very first second of its activity, several thousand bots connected to it in order to fetch commands from the server. The disinfection server responded to them and disinfected them, abusing the C&C protocol design flaw,” the report states.

The Gendarmerie also alerted the FBI to the worm, as some parts of the C&C infrastructure were located in the US. The FBI took them down successfully, and on July 8 the malware authors no longer had any control over the malware bots, Vojtěšek said. “Since it was the C&C server's responsibility to give mining jobs to the bots, none of the bots received any new mining jobs to execute after this takedown. This meant that they could no longer drain the computing power of their victims and that the malware authors no longer received any monetary gain from mining,” the report explains.

The Avast report highlights, “Over 85% of Retadup's victims also had no third-party antivirus software installed. Some also had it disabled, which left them completely vulnerable to the worm and allowed them to unwittingly spread the infection further.”

Retadup has many different variants of its core, which is written in either AutoIt or AutoHotkey. Both cases contain two files: the clean scripting-language interpreter and the malicious script. “In AutoHotkey variants of Retadup, the malicious script is distributed as source code, while in AutoIt variants, the script is first compiled and then distributed. Fortunately, since the compiled AutoIt bytecode is very high-level, it is not that hard to decompile it into a more readable form,” the report states.

Users and researchers are congratulating both the Avast team and the Gendarmerie for successfully disinfecting Retadup.

https://twitter.com/nunohaien/status/1166636067279257600

To know more about Retadup in detail, read Avast's complete report.

Other interesting news in Security

New Bluetooth vulnerability, KNOB attack can manipulate the data transferred between two paired devices
A year-old Webmin backdoor revealed at DEF CON 2019 allowed unauthenticated attackers to execute commands with root privileges on server
A security issue in the net/http library of the Go language affects all versions and all components of Kubernetes

Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role

Bhagyashree R
30 Aug 2019
3 min read
Yesterday, Chris Beard, the CEO of Mozilla, announced that he will step down from his role by the end of this year. After a tenure of more than fifteen years at Mozilla, Beard's immediate plan is to take a break and spend more time with his family.

https://twitter.com/cbeard/status/1167091991487729664

Chris Beard's journey at Mozilla started back in 2004, just before Firefox 1.0 was released. Since then, he has been deeply involved in almost every part of the business, including product, marketing, innovation, communications, community, and user engagement. In 2013, Beard worked as an executive-in-residence at the venture capital firm Greylock Partners, gaining a deeper perspective on innovation and entrepreneurship. During this time he remained an advisor to Mozilla's chair, Mitchell Baker.

Chris Beard's appointment as CEO came during a very "tumultuous time" for Mozilla. In 2013, when Gary Kovacs stepped down as Mozilla's CEO, the company went looking extensively for a new CEO. In March 2014, the company appointed its CTO Brendan Eich, the creator of JavaScript, as CEO. Just a few weeks into the role, Eich had to resign from his position after it was revealed that he had donated $1,000 to California Proposition 8, which called for the banning of same-sex marriage in California. Then, in April 2014, Chris Beard was appointed interim CEO at Mozilla, and he was confirmed in the position on July 28.

Throughout his tenure as a "Mozillian", Chris Beard has made countless contributions to the company. Listing his achievements, Mozilla's chair, Mitchell Baker, wrote in a thank-you post, “This includes reinvigorating our flagship web browser Firefox to be once again a best-in-class product. It includes recharging our focus on meeting the online security and privacy needs facing people today. And it includes expanding our product offerings beyond the browser to include a suite of privacy and security-focused products and services from Facebook Container and Enhanced Tracking Protection to Firefox Monitor.”

Read also: Firefox now comes with a Facebook Container extension to prevent Facebook from tracking user's web activity

Mozilla is now seeking a successor for Beard to lead the company. Mitchell Baker has agreed to step into an interim CEO role if the search continues beyond this year. Meanwhile, Chris Beard will continue to be an advisor to the board of directors and to Baker. “And I will stay engaged for the long-term as an advisor to both Mitchell and the Board, as I've done before,” he wrote.

Many of Beard's co-workers thanked him for his contributions to Mozilla:

https://twitter.com/kaykas/status/1167094792230076424
https://twitter.com/digitarald/status/1167107776734085120
https://twitter.com/hoosteeno/status/1167099338226429952

You can read Beard's announcement on Mozilla's blog.

What's new in web development this week

Mozilla proposes WebAssembly Interface Types to enable language interoperability
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Mozilla's MDN Web Docs gets new React-powered frontend, which is now in Beta