Tech News


Microsoft Edge mobile browser now shows warnings against fake news using NewsGuard

Bhagyashree R
24 Jan 2019
3 min read
Microsoft's Edge mobile browser is now flagging untrustworthy news sites with the help of a plugin named NewsGuard. Microsoft partnered with NewsGuard in August 2018 under its Defending Democracy Program. NewsGuard was first offered as a downloadable plugin, but Microsoft has now started shipping the functionality built into the mobile version of Edge. Currently it is an opt-in feature, which you can enable from the Settings menu.

NewsGuard was founded by journalists Steven Brill and Gordon Crovitz. It evaluates a news site against nine specific criteria, including its use of deceptive headlines and its transparency about ownership and financing, and gives users a color-coded rating of green or red. Its business model is licensing the product to tech companies that aim to fight fake news.

According to The Guardian, NewsGuard was warning users who visited Mail Online: "Proceed with caution: this website generally fails to maintain basic standards of accuracy and accountability." Brill says that NewsGuard takes complete responsibility for its verdicts and that all complaints should be directed at his company rather than Microsoft: "They can blame us. And we're happy to be blamed. Unlike the platforms we're happy to be accountable. We want people to game our system. We are totally transparent. We are not an algorithm."

A spokesperson for Mail Online told The Guardian: "We have only very recently become aware of the NewsGuard startup and are in discussions with them to have this egregiously erroneous classification resolved as soon as possible."

Though NewsGuard says its verdicts are made by experienced journalists, users have pointed out some issues. One Hacker News user said that the chief concern with NewsGuard is that it flags unreliable content at the site level instead of the article level. Explaining the consequences this could have, he wrote: "It's obvious why that's necessary, but the result is a complete failure to deal with any source where quality varies widely. Fox's written reporting is sometimes quite good, but Glenn Beck's old videos are still posted under the same domain. The result is that NewsGuard happily puts a big green check-mark above a video declaring that the US is the only country in the world with birthright citizenship."

An opt-in feature is not a big deal in itself, but it could become the default over time, and users may come to trust or avoid a website based solely on its green or red icon. Another Hacker News user warned this could lead to "truth as a service": instead of applying your own critical thinking, you simply take what the machine says. A study by Gallup and the Knight Foundation supports this concern. It surveyed 2,000 adults in the U.S., showing them articles with and without the ratings, and found that readers are more likely to trust articles that display the green icon in the address bar.

Read the full story at The Guardian website.

Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge

TensorFlow 1.13.0-rc0 releases!

Natasha Mathur
24 Jan 2019
3 min read
The TensorFlow team released the first release candidate of TensorFlow 1.13.0 yesterday. TensorFlow 1.13.0-rc0 includes major bug fixes, improvements, and other changes. Let's have a look at the major highlights.

Major improvements
- TensorFlow Lite has been moved from contrib to core. This means that Python modules are now under tf.lite and the source code is now under tensorflow/lite instead of tensorflow/contrib/lite.
- TensorFlow GPU binaries are now built against CUDA 10.
- NCCL has been moved to core.

Behavioral and other changes
- Conversion of Python floating types to uint32/64 (matching the behavior of other integer types) in tf.constant is now disallowed.
- The documentation of the rounding mode used in quantize_and_dequantize_v2 has been updated.
- The performance of GPU cumsum/cumprod has been increased by up to 300x.
- Support has been added for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
- An experimental Java API has been added for injecting TensorFlow Lite delegates, and the TensorFlow Lite Java API now supports strings.
- tf.spectral has been merged into tf.signal for TensorFlow 2.0.

Bug fixes and deprecations
- tensorflow::port::InitMain() must now be called before using the TensorFlow library; programs that fail to do this are not portable to all platforms.
- saved_model.loader.load has been deprecated and replaced by saved_model.load; saved_model.main_op has also been deprecated in favor of its V2 replacement.
- tf.QUANTIZED_DTYPES has been deprecated and renamed tf.dtypes.QUANTIZED_DTYPES.
- sklearn imports have been updated for deprecated packages.
- The confusion_matrix op is now exported as tf.math.confusion_matrix instead of tf.train.confusion_matrix.
- An ignore_unknown argument has been added to parse_values, suppressing ValueError for unknown hyperparameter types.
- A tf.linalg.matvec convenience function has been added.
- tf.data.Dataset.make_one_shot_iterator() has been deprecated in V1, with tf.compat.v1.data.make_one_shot_iterator() added in its place.
- tf.data.Dataset.make_initializable_iterator() has been deprecated in V1 and removed from V2; tf.compat.v1.data.make_initializable_iterator() has been added.
- The XRTCompile op can now return the ProgramShape resulting from the XLA compilation as a second return argument.
- XLA HLO graphs can now be rendered as SVG/HTML.

For more information, check out the complete TensorFlow 1.13.0-rc0 release notes.

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
TensorFlow 1.11.0 releases
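Several of the renames above follow one pattern: symbols move out of contrib and legacy namespaces into their 1.13 locations. As a minimal sketch (the RENAMES table below is hand-collected from the notes above, and the helper is illustrative, not part of TensorFlow), a mapping from old symbol paths to new ones might look like:

```python
# Hand-collected from the 1.13.0-rc0 notes above; illustrative only.
RENAMES = {
    "tf.contrib.lite": "tf.lite",
    "tf.QUANTIZED_DTYPES": "tf.dtypes.QUANTIZED_DTYPES",
    "tf.train.confusion_matrix": "tf.math.confusion_matrix",
    "tf.spectral": "tf.signal",
}

def new_path(old: str) -> str:
    """Map a pre-1.13 symbol path to its new location (identity if unmapped)."""
    for old_prefix, new_prefix in RENAMES.items():
        # Match either the whole prefix or a dotted sub-path under it.
        if old == old_prefix or old.startswith(old_prefix + "."):
            return new_prefix + old[len(old_prefix):]
    return old
```

For example, new_path("tf.contrib.lite.Interpreter") yields "tf.lite.Interpreter", matching the contrib-to-core move described above.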

Go 1.11.5 and Go 1.10.8 released!

Savia Lobo
24 Jan 2019
2 min read
Today, the Go team announced the release of Go 1.11.5 and Go 1.10.8. These versions address a recently reported security issue, and the team recommends that all users update to one of them; users who are unsure which to choose should pick Go 1.11.5.

The DoS vulnerability in the crypto/elliptic implementations of the P-521 and P-384 elliptic curves may let an attacker craft inputs that consume excessive amounts of CPU. These inputs might be delivered via TLS handshakes, X.509 certificates, JWT tokens, ECDH shares, or ECDSA signatures. In some cases, if an ECDH private key is reused more than once, the attack can also lead to key recovery.

Due to an issue in the release tooling, go1.11.5.linux-amd64.tar.gz and go1.10.8.linux-amd64.tar.gz include two unnecessary directories in the root of the archive, "gocache" and "tmp". The team says these are harmless and safe to remove, and has provided commands to extract only the necessary "go" directory from the archives. These commands create a Go tree in /usr/local/go:

tar -C /usr/local -xzf go1.11.5.linux-amd64.tar.gz go
tar -C /usr/local -xzf go1.10.8.linux-amd64.tar.gz go

To know more about these releases in detail, visit Go's official mailing thread.

Go Programming Control Flow
Introduction to Creational Patterns using Go Programming
Essential Tools for Go Programming
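The class of fix involved here amounts to validating untrusted curve inputs before doing expensive arithmetic on them. A toy illustration of that idea in Python, using a small made-up curve rather than P-521 or Go's actual implementation: a point received from a peer should satisfy the curve equation before it is used.

```python
def on_curve(x: int, y: int, a: int, b: int, p: int) -> bool:
    """Check that (x, y) satisfies y^2 = x^3 + a*x + b (mod p).

    Toy sketch of input validation for short Weierstrass curves;
    real implementations such as Go's crypto/elliptic do this and more.
    """
    return (y * y - (x * x * x + a * x + b)) % p == 0

# Toy curve y^2 = x^3 + 7 over GF(17): (1, 5) lies on it,
# since 5^2 = 25 = 8 (mod 17) and 1^3 + 7 = 8.
```

Rejecting off-curve points up front is one way a library avoids burning CPU (or leaking key material) on maliciously crafted inputs.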

A brief list of draft bills in US legislation for protecting consumer data privacy

Savia Lobo
24 Jan 2019
3 min read
US lawmakers have begun drafting privacy regulations and are also encouraging enforcement agencies to build a privacy framework that companies can easily follow.

Last week, U.S. Senator Marco Rubio introduced a bill titled the American Data Dissemination (ADD) Act to create federal standards of privacy protection for large companies like Google, Amazon, and Facebook. The bill largely focuses on data collection and disclosure, however, and experts worry that it would ignore the way companies use customers' data.

Also last week, U.S. Senators John Kennedy and Amy Klobuchar introduced the Social Media Privacy and Consumer Rights Act, which gives consumers more control over their personal data. The legislation aims to improve transparency, strengthen consumers' recourse options during a data breach, and ensure companies comply with privacy policies that protect consumers.

Another bill, sponsored by Reps. Dutch Ruppersberger, Jim Himes, Will Hurd, and Mike Conaway, was introduced last week to combat the theft of U.S. technologies by state actors, including China, and to reduce risks to "critical supply chains." Ruppersberger said lawmakers had long suspected Beijing is using its telecom companies to spy on Americans, and that China is responsible for up to $600 billion in theft of U.S. trade secrets.

Some reintroduced bills

Securing Energy Infrastructure Act
A bill titled the Securing Energy Infrastructure Act was proposed by Sens. Jim Risch and Angus King. Reintroduced last Thursday, it would push the government to explore new ways to secure the electric grid against cyber attacks. The bill unanimously passed the Senate in December but was never put to a vote in the House.

Telephone Robocall Abuse Criminal Enforcement and Deterrence Act
On 17th January, Sens. John Thune, R-S.D., and Ed Markey, D-Mass., renewed their call to increase punishments for people running robocall scams. The Telephone Robocall Abuse Criminal Enforcement and Deterrence (TRACED) Act would give the Federal Communications Commission more legal leeway to pursue and prosecute robocallers. Under the bill, telecom companies would also need to adopt tools to sift out robocalls. Thune said, "The TRACED Act holds those people who participate in robocall scams and intentionally violate telemarketing laws accountable and does more to proactively protect consumers who are potential victims of these bad actors."

Federal CIO Authorization Act
The Federal CIO Authorization Act, which Reps. Will Hurd and Robin Kelly reintroduced on Jan. 4, passed the House unanimously on Tuesday. The bill would elevate the federal chief information officer within the White House chain of command and designate both the federal CIO and the federal chief information security officer as presidentially appointed positions. The measure still lacks a Senate counterpart.

Lawmakers have also sent letters to companies including Verizon, T-Mobile, Sprint, and AT&T asking for information on the companies' data-sharing partnerships with third-party aggregators; the companies have until Jan 30 to respond. Reps. Greg Walden, Cathy McMorris Rodgers, Robert Latta, and Brett Guthrie wrote, "We are deeply troubled because it is not the first time we have received reports and information about the sharing of mobile users' location information involving a number of parties who may have misused personally identifiable information."

To know more about these bills in detail, visit the Nextgov website.

Russia opens civil cases against Facebook and Twitter over local data laws
Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available
Senator Ron Wyden's data privacy law draft can punish tech companies that misuse user data

Debian 9.7 released with fix for RCE flaw

Melisha Dsouza
24 Jan 2019
1 min read
On 23rd January, Debian announced the release of Debian 9.7, the seventh update of the stable Debian 9 distribution. The release comes right after a remote code execution vulnerability was discovered in APT, the high-level package manager used by Debian, Ubuntu, and other related Linux distributions, which allowed a man-in-the-middle attacker to execute arbitrary code during package installation. Debian 9.7 includes a security update for this flaw: the Debian GNU/Linux 9.7 (codename "Stretch") release contains a new version of the APT package manager that is no longer vulnerable to the attack.

The team states that there is no need to download new ISO images to update existing installations; however, the Debian Project will release live and install-only ISO images for all supported architectures of Debian GNU/Linux 9.7 "Stretch", which will be available for download in a few days.

Head over to Debian's official website for more information on this announcement.

Kali Linux 2018 for testing and maintaining Windows security – Wolf Halton and Bo Weaver [Interview]
Black Hat hackers used IPMI cards to launch JungleSec Ransomware, affects most of the Linux servers
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!
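The underlying defense APT relies on is checking that downloaded packages match the digests published in signed repository metadata; the vulnerability allowed that check to be subverted via crafted HTTP responses. A minimal sketch of the digest check itself, in illustrative Python rather than APT's actual code:

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Accept a downloaded package only if its SHA-256 digest matches
    the value published in the (signed) repository metadata."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

The check is only as good as the path by which expected_sha256 arrives, which is why a bug in APT's transport handling was serious despite the signatures.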

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end

Bhagyashree R
23 Jan 2019
4 min read
Chromium developers recently shared the updates they are planning in Manifest V3, one of which limits the blocking version of the webRequest API in favor of a new alternative, the declarativeNetRequest API. After learning of this update, many ad blocker maintainers and developers felt that the introduction of the declarativeNetRequest API could mean the end of many existing ad blockers. One user on the Chromium bug tracker said: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin ("uBO") and uMatrix, can no longer exist."

What is a manifest version?
A manifest version is a mechanism through which certain capabilities can be restricted to a certain class of extensions. These restrictions are specified as either a minimum version or a maximum version.

What is Chromium's stated reason for this update?
The webRequest API permits extensions to intercept requests and modify, redirect, or block them. The basic flow of handling a request with this API is: Chrome receives the request, asks the extension, and then gets the result. In Manifest V3, the use of this API will be limited in its blocking form; the non-blocking form, which permits extensions to observe network requests but not modify, redirect, or block them, will not be discouraged. The limitations to be placed on the webRequest API have not yet been listed.

Manifest V3 will instead treat the declarativeNetRequest API as the primary content-blocking API in extensions. This API allows extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension, which lets Chrome handle a request synchronously.
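To make the contrast concrete: a declarative rule is data, not code, describing the match condition and action up front, so the browser never calls back into the extension per request. A rough Python sketch of the idea (the rule shape is modeled loosely on the draft proposal; the field names and matcher here are illustrative, not the final Chrome interface):

```python
# A declarative rule: plain data the browser can evaluate synchronously,
# without ever invoking extension code.
rule = {
    "id": 1,
    "action": {"type": "block"},
    "condition": {
        "urlFilter": "ads.example.com",          # hypothetical tracker host
        "resourceTypes": ["script", "image"],
    },
}

def browser_decides(rule: dict, url: str, resource_type: str) -> str:
    """Toy evaluator standing in for the browser's built-in rule matcher."""
    cond = rule["condition"]
    if cond["urlFilter"] in url and resource_type in cond["resourceTypes"]:
        return rule["action"]["type"]
    return "allow"
```

With webRequest, the equivalent decision is made by extension code on every request; with the declarative approach, the extension only ships the rule table, which is also why developers worry about rule-count limits and expressiveness.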
As per the design doc shared by the team, this API is more performant and provides better privacy guarantees to users.

What are ad blocker developers and maintainers saying?
After learning about this update, many developers were concerned that the change would end up crippling all ad blockers. "Beside causing uBO and uMatrix to no longer be able to exist, it's really concerning that the proposed declarativeNetRequest API will make it impossible to come up with new and novel filtering engine designs, as the declarativeNetRequest API is no more than the implementation of one specific filtering engine, and a rather limited one (the 30,000 limit is not sufficient to enforce the famous EasyList alone)," commented an ad blocker developer. He also stated that with the declarativeNetRequest API, developers will not be able to implement features such as blocking media elements larger than a set size, or disabling JavaScript execution through the injection of CSP directives.

Users also feel this is similar to Safari's content-blocking API, which puts a limit on the number of rules. One developer stated on the Chromium issue tab, "Safari has introduced a similar API, which I guess inspires this. My personal experience is that extensions written in that API is usable, but far inferior to the full power of uBlock Origin. I don't want to see this API to be the sole future."

You can check out the issue reported on the Chromium bug tracker, and join the discussion or raise your concerns on the Google group: Manifest V3: Web Request Changes.

Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
DragonFly BSD 5.4.1 released with new system compiler in GCC 8 and more

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular

Richard Gall
23 Jan 2019
4 min read
Ionic today released Ionic Framework 4.0, a complete rebuild of the popular JavaScript framework for developing mobile and desktop apps. Although Ionic has, up until now, been built using Angular components, this new version has instead been built using Web Components. This is significant, as it changes the whole ball game for the project: Ionic Framework is now an app development framework that can be used alongside any front-end framework, not just Angular.

The shift away from Angular makes a lot of sense for the project. It now has the chance to grow adoption beyond the estimated five million developers around the world already using the framework. While in the past Ionic could only be used by Angular developers, it now opens up new options for development teams; rather than exacerbating a talent gap in many organizations, it could instead help ease it. However, although it looks like Ionic is taking a significant step away from Angular, it's important to note that, at the moment, Ionic Framework 4.0 is only out on general availability for Angular; it is still only in alpha for Vue.js and React.

Ionic Framework 4.0 and open web standards
Although the move to Web Components is the stand-out change in Ionic Framework 4.0, it's also worth noting that the release has been developed in accordance with open web standards. This has been done, according to the team, to help organizations develop design systems (something the Ionic team wrote about just a few days ago): essentially, a set of guidelines and components that can be reused across multiple platforms and products to maintain consistency across various user-experience touchpoints.

Why did the team make these changes to Ionic Framework 4.0?
According to Max Lynch, Ionic Framework co-founder and CEO, the changes present in Ionic Framework 4.0 should help organizations achieve brand consistency quickly, and give development teams the option of using Ionic with their JavaScript framework of choice. Lynch explains:

"When we look at what's happening in the world of front-end development, we see two major industry shifts... First, there's a recognition that the proliferation of proprietary components has slowed down development and created design inconsistencies that hurt users and brands alike. More and more enterprises are recognizing the need to adopt a design system: a single design spec, or library of reusable components, that can be shared across a team or company. Second, with the constantly evolving development ecosystem, we recognized the need to make Ionic compatible with whatever framework developers wanted to use—now and in the future. Rebuilding our Framework on Web Components was a way to address both of these challenges and future-proof our technology in a truly unique way."

What does Ionic Framework 4.0 tell us about the future of web and app development?
Ionic Framework 4.0 is a really interesting release, as it tells us a lot about where web and app development is today. It confirms, for example, that Angular's popularity is waning. It also suggests that Web Components are going to be the building blocks of the web for years to come, regardless of how frameworks evolve. As Lynch writes in a blog post introducing Ionic Framework 4.0, "in our minds, it was clear Web Components would be the way UI libraries, like Ionic, would be distributed in the future. So, we took a big bet and started porting all 100 of our components over." Ionic Framework 4.0 also suggests that Progressive Web Apps are here to stay. Lynch writes in the same post that "for Ionic to reach performance standards set by Google, new approaches for asynchronous loading and delivery were needed."
To do this, he explains, the team "spent a year building out a web component pipeline using Stencil to generate Ionic’s components, ensuring they were tightly packed, lazy loaded, and delivered in smart collections consisting of components you’re actually using." The time taken to ensure that the framework could meet those standards - essentially, that it could support high performance PWAs - underscores that this will be one of the key use cases for Ionic in the years to come.  

Facebook AI research introduces enhanced LASER library that allows zero-shot transfer across 93 languages

Amrata Joshi
23 Jan 2019
4 min read
Yesterday, the Facebook AI research team announced that they have expanded and enhanced their LASER (Language-Agnostic SEntence Representations) toolkit to work with more than 90 languages, written in 28 different alphabets, accelerating the transfer of natural language processing (NLP) applications to many more languages. The team is now open-sourcing LASER as its first publicly shared exploration of multilingual sentence representations. Currently, 93 languages have been incorporated into LASER, which achieves its results by embedding all languages together in a single shared space. The team is also making the multilingual encoder and PyTorch code freely available, and providing a multilingual test set for more than 100 languages. The Facebook post reads, "The 93 languages incorporated into LASER include languages with subject-verb-object (SVO) order (e.g., English), SOV order (e.g., Bengali and Turkic), VSO order (e.g., Tagalog and Berber), and even VOS order (e.g., Malagasy)."

Features of LASER
- Enables zero-shot transfer of NLP models from one language, such as English, to scores of others, including languages where training data is limited.
- Handles low-resource languages and dialects.
- Provides accuracy for 13 of the 14 languages in the XNLI corpus, and delivers results in cross-lingual document classification (the MLDoc corpus).
- Its sentence embeddings are strong at parallel corpus mining, establishing a new state of the art in the shared task of BUCC 2018 (the workshop on Building and Using Comparable Corpora) for three of its four language pairs.
- Provides fast performance, processing up to 2,000 sentences per second on GPU.
- The sentence encoder is implemented in PyTorch with minimal external dependencies.
- Supports the use of multiple languages in one sentence.
- Performance improves as new languages are added and the system learns to recognize the characteristics of language families.

Sentence embeddings
LASER maps a sentence in any language to a point in a high-dimensional space, such that the same sentence in any language ends up in the same neighborhood. This representation can be regarded as a universal language in a semantic vector space. The Facebook post reads, "We have observed that the distance in that space correlates very well to the semantic closeness of the sentences." The sentence embeddings are used to initialize the decoder LSTM through a linear transformation, and are also concatenated to its input embeddings at every time step.

The encoder/decoder approach
The approach behind this project is based on neural machine translation: an encoder/decoder approach, also known as sequence-to-sequence processing. LASER uses one shared encoder for all input languages and a shared decoder for generating the output language, representing the input sentence with a 1,024-dimension fixed-size vector. The decoder is instructed which language to generate; since the encoder receives no explicit signal indicating the input language, this design encourages it to learn language-independent representations. The team trained their systems on 223 million sentences of public parallel data, aligned with either English or Spanish. By using a shared BPE vocabulary trained on the concatenation of all languages, low-resource languages are able to benefit from high-resource languages of the same family.

Zero-shot, cross-lingual natural language inference
LASER achieves excellent results in cross-lingual natural language inference (NLI). The team considers the zero-shot setting: they train the NLI classifier on English and then apply it to all target languages with no fine-tuning or target-language resources. The distances between all sentence pairs are calculated and the closest ones are selected.
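The "distance in that space" check can be illustrated with plain cosine similarity over embedding vectors. A self-contained sketch using toy vectors (LASER's real mining operates on 1,024-dimension embeddings and uses the FAISS library for the nearest-neighbor search):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def closest(query, candidates):
    """Index of the candidate embedding most similar to the query."""
    return max(range(len(candidates)),
               key=lambda i: cosine_similarity(query, candidates[i]))
```

If two sentences are translations of each other, a well-trained multilingual encoder should place their vectors close together, so closest() would pair them up.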
For more precision, the margin between the closest sentence and the other nearest neighbors is considered; this search is performed using Facebook's FAISS library. The team outperformed the state of the art on the shared BUCC task by a large margin, improving the F1 score from 85.5 to 96.2 for German/English, from 81.5 to 93.9 for French/English, from 81.3 to 93.3 for Russian/English, and from 77.5 to 92.3 for Chinese/English.

To know more about LASER, check out the official post by Facebook.

Trick or Treat – New Facebook Community Actions for users to create petitions and connect with public officials
Russia opens civil cases against Facebook and Twitter over local data laws
FTC officials plan to impose a fine of over $22.5 billion on Facebook for privacy violations, Washington Post reports

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography

Bhagyashree R
23 Jan 2019
3 min read
Researchers from the University of California and the University of Cambridge have come up with Constant-Time WebAssembly (CT-Wasm), detailed in their December paper "CT-Wasm: Type-Driven Secure Cryptography for the Web Ecosystem." It is a type-driven, strict extension to WebAssembly that aims to address the state of cryptography in the web ecosystem. CT-Wasm gives developers a principled direction for improving the quality and auditability of web-platform cryptography libraries while maintaining the convenience that has made JavaScript successful.

Why was CT-Wasm introduced?
A lot of work has gone into implementing client- and server-side cryptography in JavaScript, but some widespread security concerns remain, which CT-Wasm tries to address:

Side channels: When implementing a cryptographic algorithm, functional correctness is not the only concern; it is also important to ensure information-flow properties that take side channels into account. For instance, an attacker can use the duration of the computation as a side channel, comparing different executions to find out which program paths were used and working backward to determine information about secret keys and messages. Additionally, modern JavaScript runtimes are extremely complex software systems that include just-in-time (JIT) compilation and garbage collection (GC) techniques, which can inherently expose timing side channels.

In-browser cryptography: Another concern is in-browser cryptography, the implementation of cryptographic algorithms using JavaScript in a user's browser.

Unskilled cryptographers: Much JavaScript cryptography is implemented by unskilled cryptographers who do not generally guard against even the most basic timing side channels.

How does it address these concerns?
Recently, all browsers have added support for WebAssembly (Wasm), a bytecode language. As a low-level bytecode language, Wasm already provides a firmer foundation for cryptography than JavaScript: its "close-to-the-metal" instructions provide more confidence in its timing characteristics than JavaScript's unpredictable optimizations, and it has a strong, static type system and a principled design. It uses a formal small-step semantics, and a well-typed Wasm program enjoys standard progress and preservation properties.

CT-Wasm extends Wasm into a verifiably secure cryptographic language by augmenting its type system and semantics with cryptographically meaningful types, combining the convenience of in-browser JavaScript crypto with the security of a low-level, formally specified language. Using CT-Wasm, developers can distinguish secret data, such as keys and messages, from public data. Having made that distinction, they can impose secure information-flow and constant-time programming disciplines on code that handles secret data, ensuring that well-typed CT-Wasm code cannot leak such data. CT-Wasm also allows developers to incorporate third-party cryptographic libraries as they do with JavaScript, while ensuring, by construction, that these libraries do not leak any secret information.

For more details, read the paper: CT-Wasm: Type-Driven Secure Cryptography for the Web Ecosystem.

The elements of WebAssembly – Wat and Wasm, explained [Tutorial]
Now you can run nginx on Wasmjit on all POSIX systems
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
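The timing side channel described above is easiest to see in string comparison: a naive equality check returns as soon as two bytes differ, leaking how much of a secret an attacker has guessed correctly. The constant-time discipline that CT-Wasm enforces at the type level can be sketched in Python (illustrative only; CT-Wasm itself operates on Wasm bytecode, and Python's standard library offers hmac.compare_digest for this in practice):

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without early exit: the loop always visits
    every byte, so timing does not reveal where they first differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate differences instead of returning early
    return diff == 0
```

CT-Wasm's contribution is making this discipline checkable: code typed as handling secret data is rejected unless it follows such constant-time patterns.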

GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more

Amrata Joshi
23 Jan 2019
5 min read
Yesterday, the team at GitLab released GitLab 11.7, an application for the DevOps lifecycle that helps developer teams work together efficiently and secure their code. GitLab 11.7 comes with features like multi-level child epics, API integration with Kubernetes, cross-project pipelines, and more.

What's new in GitLab 11.7

Managing releases with GitLab 11.7

This version eliminates the need to manually collect the source code, build output, or metadata associated with a released version of the source code. GitLab 11.7 brings Releases to GitLab Core, which lets users take release snapshots that include the source code and related artifacts.

Multi-level child epics for work breakdown structures

This release adds multi-level child epics to GitLab portfolio management, which allow users to create multi-level work breakdown structures and manage complex projects and work plans. This structure builds a direct connection between planning and actionable issues: users can now have an epic containing both issues and epics.

Streamlining JavaScript development with NPM registries

This release also delivers NPM registries in GitLab Premium, which provide a standard, secure way to share and version-control NPM packages across projects. Users can then share a package-naming convention for utilizing libraries in any Node.js project via NPM.

Remediating vulnerabilities

GitLab 11.7 helps users remediate vulnerabilities in their apps and suggests solutions for Node.js projects managed with Yarn. Users can download a patch file, apply it to their repo using the git apply command, and push the changes back to their repository; the security dashboard then confirms whether the vulnerability is gone. This process is easy and reduces the time required to deploy a fix.

API integration with Kubernetes

This release adds API support to the Kubernetes integration.
All the actions currently available in the GUI, such as listing, adding, and deleting Kubernetes clusters, are now accessible through the API. Developers can use this feature to fold cluster creation into their workflow.

Cross-project pipelines

With this release, it is now possible to expand upstream or downstream cross-project pipelines from the pipeline view, letting users view pipelines across projects.

Search filter box for issue board navigation

This release adds a search filter that makes navigation much easier. Users can simply type a few characters in the search filter box to narrow down to the issue board they are interested in.

Project list redesign

The project list UI has been redesigned in GitLab 11.7, with a focus on readability and a summary of each project's activity.

Import issues via CSV

This release makes transitions easier. Users can now import issues into GitLab while continuing to manage their existing work. This feature works with Jira or any other issue-tracking system that can generate a CSV export.

Support for catch-all email mailboxes

This release supports sub-addressing and catch-all email mailboxes with a new email format that allows more email servers to be used with GitLab, including Microsoft Exchange and Google Groups.

Include CI/CD files from other projects and templates

Users can now include snippets of configuration from other projects and from predefined templates. This release also includes snippets for specific jobs, like sast or dependency_scanning, so users can include them instead of copying and pasting the current definitions.

GitLab Runner 11.7

The team at GitLab also released GitLab Runner 11.7 yesterday. It is an open source project used to run CI/CD jobs and send the results back to GitLab.

Major improvements

In GitLab 11.7, the performance of viewing merge requests has been improved by caching syntax-highlighted discussion diffs.
Push performance has been improved by skipping pre-commit validations that have already passed on other branches, and redundant counts in snippets search have been removed. This release also ships with Mattermost 5.6, an open source Slack alternative that includes interactive message dialogs, new admin tools, Ukrainian language support, and more.

Users are generally happy with the GitLab 11.7 release. One user, who has been using GitLab for quite some time, is waiting for a particular MR [0]. They commented on Hacker News, "I'm impatiently waiting for this MR [0] that will allow dependant containers to also talk to each other. It's the last missing piece for my ideal CI setup." To which GitLab's product manager for Verify (CI) replied, "Thanks for bringing this up I hadn't seen your contribution! I think this is a great idea. I know the technical team has been overwhelmed with community contributions as of late - which is a good problem to have but one that we're still solving. I'm going to try and shepherd this one along myself."

Some users think that if GitLab can pull off the NPM registry well, this might prove to be the beginning of a universal package-management server built into GitLab. One of the comments reads, "Gitlab API is amazingly simple and flexible, can be used efficiently from the terminal to list CI jobs, your issues, edit them."

Users are also comparing GitLab with GitHub, with some favoring GitHub. One user commented, "GitLab's current homepage hides their actual site (the repositories) and makes it hard as a developer to actually get started compared to Github." Another user countered, "We've started using Gitlab where I work and it's so much better than GitHub."

Some users are also facing issues with memory consumption. One of the comments reads, "I like GitLab but noticed my Docker container running it is steadily requiring more memory to run smoothly. It's sitting at 12GB right now, which is a little too high for my taste. I wish there were ways to reduce this."

Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
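As a rough sketch of the Kubernetes API support described above, a client might build its request URLs like this. The endpoint path below is an assumption based on GitLab's usual /api/v4 REST conventions, not something quoted from the release notes, so check the official API documentation before relying on it:

```typescript
// Hypothetical helper for the project-level cluster API.
// The /api/v4/projects/:id/clusters path is assumed from GitLab's
// usual REST conventions; verify it against the official API docs.
function clusterListUrl(baseUrl: string, projectId: number): string {
  const base = baseUrl.replace(/\/+$/, ""); // strip trailing slashes
  return `${base}/api/v4/projects/${projectId}/clusters`;
}
```

A request to such a URL would then be issued with any HTTP client, passing the usual PRIVATE-TOKEN header for authentication.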
TypeScript 3.3 RC is here!

Prasad Ramesh
23 Jan 2019
2 min read
Today, Microsoft announced the availability of the TypeScript 3.3 RC in a blog post. This version does not contain any major or breaking changes.

Better behavior when calling union types

Given a union type A | B, TypeScript lets users access the properties common to both A and B — the intersection of their members. A property can be read from a union type only if it is known to be present in every member of the union. Calling a value of union type used to work only when every member had exactly one signature with identical parameters. That restriction was too strict and produced errors in reasonable code. In TypeScript 3.3, the following code from the blog post now works:

```typescript
type Fruit = "apple" | "orange";
type Color = "red" | "orange";

type FruitEater = (fruit: Fruit) => number;     // eats and ranks the fruit
type ColorConsumer = (color: Color) => string;  // consumes and describes the colors

declare let f: FruitEater | ColorConsumer;

f("orange"); // It works! Returns a 'number | string'.
f("apple");  // error - Argument of type '"apple"' is not assignable to parameter of type '"orange"'.
f("red");    // error - Argument of type '"red"' is not assignable to parameter of type '"orange"'.
```

The parameters of the above signatures are 'intersected' to create a new signature. Once the impossible intersections are discarded, what remains is "orange" & "orange", which is just "orange". That is not to say there are no restrictions: the new behavior is active only when at most one type in the union has multiple overloads and a generic signature. The forEach method will now be callable on unions of array types, though there may be some issues under noImplicitAny.

The --build mode's --watch flag also leverages incremental file watching in TypeScript 3.3, which can result in significantly faster builds with --build --watch; reportedly, build times were reduced by over 50%.
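The forEach change can be illustrated with a short sketch (behavior as described in the announcement; the callback parameter takes the union of the element types):

```typescript
// Before 3.3, calling forEach on string[] | number[] was an error
// because the two forEach signatures were considered incompatible.
// In 3.3 this compiles, and x has type string | number.
const sample: string[] | number[] = ["a", "b", "c"];

const seen: (string | number)[] = [];
sample.forEach(x => {
  seen.push(x); // x: string | number
});
```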
Future of ESLint support in TypeScript
Announcing 'TypeScript Roadmap' for January 2019 – June 2019
Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript

Blizzard set to demo Google's DeepMind AI in StarCraft 2

Natasha Mathur
23 Jan 2019
3 min read
Blizzard, an American video game development company, is all set to demonstrate the progress made by Google's DeepMind AI at StarCraft II, a real-time strategy video game, tomorrow. "The StarCraft games have emerged as a 'grand challenge' for the AI community as they're the perfect environment for benchmarking progress against problems such as planning, dealing with uncertainty and spatial reasoning," says the Blizzard team.

Blizzard partnered with DeepMind during BlizzCon 2016, where they announced that they were opening up the research platform for StarCraft II so that everyone in the StarCraft II community could contribute to advancing AI research. Since then, much progress has been made on the AI research front in StarCraft II. Only two months ago, Oriol Vinyals, Research Scientist at Google DeepMind, shared details of the progress the AI had made in StarCraft II, states the Blizzard team. Vinyals described how the AI, or agent, had learned to perform basic macro-focused strategies along with defensive moves against cheesy and aggressive tactics such as "cannon rushes."

Blizzard also posted an update during BlizzCon 2018, stating that DeepMind had been working hard at training its agent to better understand and learn StarCraft II. "Once it started to grasp the basic rules of the game, it started exhibiting amusing behaviour such as immediately worker rushing its opponent, which actually had a success rate of 50% against the 'Insane' difficulty standard StarCraft II AI," mentioned the Blizzard team.

It has almost become a trend for DeepMind to measure the capabilities of its advanced AI against human opponents in video games. It made headlines in 2016 when its AlphaGo program successfully defeated world champion Lee Sedol in a five-game match. AlphaGo had also previously defeated the professional Go player Fan Hui in 2015, who was a three-time European champion of the game at the time. More recently, in December 2018, DeepMind researchers published a full evaluation of AlphaZero in the journal Science, confirming that it is capable of mastering chess, shogi, and Go from scratch. Other examples of AI making its way into advanced game learning include OpenAI Five, a team of AI algorithms that beat a team of amateur human players at Dota 2, the popular battle-arena game, back in June 2018, and went on to beat semi-professional players that August.

The demonstration of the DeepMind AI in StarCraft II is set for tomorrow at 10 AM Pacific Time. Check out StarCraft's Twitch channel or DeepMind's YouTube channel to learn about other recent developments.

Deepmind's AlphaFold is successful in predicting the 3D structure of a protein making major inroads of AI use in healthcare
Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet
DeepMind open sources TRFL, a new library of reinforcement learning building blocks

Google may pull out Google news from Europe: Bloomberg Report

Melisha Dsouza
23 Jan 2019
3 min read
According to a report published by Bloomberg yesterday, Google may pull its Google News service from Europe. The decision hinges on a controversial copyright law that European regulators are in the process of finalizing. The law would give publishers the right to demand money from Google and other web platforms when fragments of their articles show up in news search results or are shared by users. Moreover, these rules would also require Google and Facebook to actively prevent music, videos, and other copyrighted content from appearing on their platforms unless the rights holders grant them a license.

On the basis of "a close reading of the rules," Jennifer Bernal, Google's public policy manager for Europe, the Middle East, and Africa, says that Google News might quit the continent if regulators succeed in implementing the law.

What does this move mean for Google and other publishers?

Google states that its news service does not earn the company any direct revenue, so pulling Google News out of Europe wouldn't mean much to the tech giant. News publishers, however, would be affected to a certain extent, because publishers earn money through advertisements on search results. Passing the law would mean that Google would have to choose which publishers to license. Bloomberg points out that since bigger publishers offer a broader range of popular content, smaller competitors are likely to lose out on licenses and, eventually, on revenue.

This is not the first time Google has found itself at such a crossroads. Bloomberg recalls a similar incident in 2014, when Google shut its news service in Spain after a law was passed requiring Spanish publications to charge aggregators for displaying excerpts of stories. While Google remained financially unaffected by that move, small publishers lost about 13 percent of their web traffic, according to a 2017 study released by the Spanish Association of Publishers of Periodical Publications.

While the proposal was scheduled to be finalized on Monday, lawmakers failed to come to an agreement and the legislation has stalled for now. You can head over to Bloomberg for more insights on this news.

Google faces pressure from Chinese, Tibetan, and human rights groups to cancel its censored search engine, Project DragonFly
A new privacy bill was introduced for creating federal standards for privacy protection aimed at big tech firms like Facebook, Google and Amazon
Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!
US Department of Homeland security releases an ‘emergency directive’ to combat DNS tampering

Savia Lobo
23 Jan 2019
2 min read
Yesterday, the Department of Homeland Security issued an emergency directive titled "Mitigate DNS Infrastructure Tampering," ordering federal agencies to comply with it in order to secure the login credentials for their internet domain records.

The DHS directive comes on the heels of research published by FireEye early this month. The company shared that it had identified widespread DNS hijacking affecting multiple domains belonging to government, telecommunications, and internet-infrastructure entities across the Middle East and other regions. FireEye analysts also believe an Iranian-based group to be the source behind these attacks.

https://twitter.com/gregotto/status/1087800274511634434

The directive briefly explains how attackers compromise user credentials and alter DNS records, enabling them to direct user traffic to their own systems for manipulation or inspection. It lays out four actions to mitigate risks from undiscovered tampering, enable agencies to prevent illegitimate DNS activity for their domains, and detect unauthorized certificates:

Audit DNS Records
Change DNS Account Passwords
Add Multi-Factor Authentication to DNS Accounts
Monitor Certificate Transparency Logs

Agencies have 10 business days to implement these instructions. According to CyberScoop, "The directive makes clear that agencies will ultimately be held accountable for their domain-name security policies, regardless of where they maintain their DNS accounts."

The CISA (Cybersecurity and Infrastructure Security Agency) will also provide technical assistance to agencies that report anomalous DNS records, and will review submissions from agencies that are unable to implement MFA on DNS accounts within the timeline and get back to them. CISA will additionally assist agencies via its Cyber Hygiene service and will provide further guidance through an Emergency Directive coordination call following the issuance of this directive. "By February 8, 2019, CISA will provide a report to the Secretary of Homeland Security and the Director of the Office of Management and Budget (OMB) identifying agency status and outstanding issues," the directive states.

To know more about this news in detail, visit DHS' official website.

China Telecom misdirected internet traffic, says Oracle report
How to attack an infrastructure using VoIP exploitation [Tutorial]
FireEye's Global DNS Hijacking Campaign suspects Iranian-based group as the prime source
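The first of the directive's actions, auditing DNS records, amounts to comparing each domain's currently served records against a known-good snapshot. A minimal sketch of that idea (the record shapes and names here are hypothetical, not taken from the directive):

```typescript
// Hypothetical audit step: given a known-good snapshot of records and
// the values currently being served, report every name whose current
// value differs from, or is missing relative to, the snapshot.
type RecordSet = Record<string, string>; // record name -> expected value

function auditDnsRecords(expected: RecordSet, actual: RecordSet): string[] {
  return Object.keys(expected).filter(name => actual[name] !== expected[name]);
}
```

Anything such a check flags would then feed the directive's remaining steps: rotating DNS account passwords, enabling MFA, and watching Certificate Transparency logs for unexpected certificates.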

Pharo 7.0 released with 64-bit support, a new build process and more

Prasad Ramesh
23 Jan 2019
2 min read
The release of Pharo 7.0 was announced in a blog post yesterday. The seventh major release of the object-oriented programming language is its most important yet. We look at the major features in Pharo 7.0.

Pharo 7.0 comes in 64-bit versions for Linux and OSX, with improved performance and stability; the 64-bit versions are stable on both platforms. For Windows, the 64-bit version is still at the preview stage.

Pharo 7.0 includes a new version of the PharoLauncher, a very useful tool for managing the distributions you are working with.

The new release ships with a completely new build process that supports a full bootstrap from sources, enabling images to be produced directly from the bootstrap.

Iceberg, the git client for Pharo, has also been significantly improved and is now the default code-management tool.

Calypso replaces Nautilus as the system browser in Pharo 7.0, bringing improved remote working and enhanced browsing capabilities.

IoT is now an important part of Pharo: installing PharoThings provides an impressive set of tools for developing applications on small devices.

The unified foreign function interface (UnifiedFFI), which is used for interfacing with the outside world in Pharo, has been significantly improved for compatibility with 64-bit Windows.

The Pharo blog post says: "Pharo 7.0's new infrastructure and process set the stage for a new generation of versions. The visibility of GitHub combined with the powerful tools that have been validated with more than one year of beta testing is massively paying off."

About 2142 issues have been closed in this release, with more than 75 people contributing to the success of Pharo 7.0's main image. These were the highlights of the new features in Pharo; for more details, you can view the release notes.

Future of ESLint support in TypeScript
Rust 1.32 released with a print debugger and other changes
Elixir 1.8 released with new features and infrastructure improvements