
Tech News


Google Cloud security launches three new services for better threat detection and protection in enterprises

Melisha Dsouza
08 Mar 2019
2 min read
This week, Google Cloud Security announced a host of new services to give customers advanced security functionality that is easy to deploy and use. These include the Web Risk API, Cloud Armor, and Cloud HSM keys.

#1 Web Risk API

The Web Risk API has been released in beta to help keep users safe on the web. It includes data on more than a million unsafe URLs, and billions of URLs are examined each day to keep that data up to date. Client applications can use a simple API call to check URLs against Google's lists of unsafe web resources (a rough sketch of such a call is shown below). The list includes social engineering sites, deceptive sites, and sites that host malware or unwanted software.

#2 Cloud Armor

Cloud Armor is a Distributed Denial of Service (DDoS) defense and Web Application Firewall (WAF) service for Google Cloud Platform (GCP), based on the technologies used to protect services like Search, Gmail, and YouTube. Cloud Armor is now generally available, offering L3/L4 DDoS defense as well as IP allow/deny capabilities for applications or services behind the Cloud HTTP(S) Load Balancer. Users can permit or block incoming traffic based on IP addresses or ranges using allow lists and deny lists, and can customize their defenses and mitigate multivector attacks through Cloud Armor's flexible rules language.

#3 HSM keys to protect data in the cloud

Cloud HSM is now generally available. It allows customers to protect encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, without the operational overhead of managing, scaling, and patching an HSM cluster. The Cloud HSM service is fully integrated with Cloud Key Management Service (KMS), allowing users to create and use customer-managed encryption keys (CMEK) that are generated and protected by a FIPS 140-2 Level 3 hardware device.

You can head over to Google Cloud Platform's official blog to learn more about these releases.

Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
Build Hadoop clusters using Google Cloud Platform [Tutorial]
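The Web Risk API section above mentions checking URLs with a simple API call. The following Python sketch shows roughly what such a lookup could look like against the beta REST endpoint; the endpoint path, query parameters, and response shape are assumptions drawn from Google's public documentation of the beta, so verify them before relying on this.

```python
# Minimal sketch of a Web Risk URL lookup (assumed v1beta1 endpoint and
# parameters -- check the current Web Risk documentation before use).
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
ENDPOINT = "https://webrisk.googleapis.com/v1beta1/uris:search"

def is_unsafe(uri: str) -> bool:
    """Return True if the Web Risk API reports a matching threat for the URI."""
    params = {
        "key": API_KEY,
        "uri": uri,
        # Threat lists to check against; sent as a repeated query parameter.
        "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
    }
    resp = requests.get(ENDPOINT, params=params, timeout=10)
    resp.raise_for_status()
    # An empty JSON object means no threat entry matched the URI.
    return bool(resp.json().get("threat"))

if __name__ == "__main__":
    print(is_unsafe("http://testsafebrowsing.appspot.com/s/malware.html"))
```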


LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more

Natasha Mathur
08 Mar 2019
2 min read
The LXD team released version 3.11 of LXD, its open source container management extension for Linux Containers (LXC), earlier this week. LXD 3.11 brings new features, minor improvements, and bug fixes. LXD, the "Linux Daemon" system container manager, provides users with an experience similar to virtual machines. It is written in Go and builds on the existing LXC features to build and manage Linux containers.

New Features in LXD 3.11

Configurable snapshot expiry at creation time: LXD 3.11 allows users to set an expiry at snapshot creation time. Previously, it was a hassle to manually create snapshots and then edit them to modify their expiry. At the API level, an exact expiry timestamp can be set, and setting it to null makes a persistent snapshot regardless of any configured auto-expiry (a rough sketch of such an API call is shown below).

Progress reporting for publish operations: Progress information is now displayed to the user in LXD 3.11 when running lxc publish against a container or snapshot, similar to image transfers and container migrations.

Improvements

Minor improvements have been made to how the Candid authentication feature is handled by the CLI in LXD 3.11.

Per-remote authentication cookies: Every remote now has its own "cookie jar", and LXD's behavior is now always identical when adding remotes. In prior releases, a shared "cookie jar" was used for all remotes, which could lead to inconsistent behavior.

Candid preferred over TLS for new remotes: In LXD 3.11, when using lxc remote add to add a new remote, Candid will be preferred over TLS authentication if the remote supports Candid. The authentication type can always be overridden using --auth-type.

Remote list can now show the Candid domain: The remote list can now indicate which Candid domain is used in LXD 3.11.

Bug Fixes

- A goroutine leak has been fixed in ExecContainer.
- The "client: fix goroutine leak in ExecContainer" change has been reverted.
- rest-api.md formatting has been updated.
- Translations from Weblate have been updated.
- Error handling in execIfAliases has been improved.
- Duplicate scheduled snapshots have been fixed.
- A failing backup import has been fixed.
- The test case that covers the image sync scenario for the joined node has been updated.

For a complete list of changes, check out the official LXD 3.11 release notes.

LXD 3.8 released with automated container snapshots, ZFS compression support and more!
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
An update on Bcachefs- the "next generation Linux filesystem"
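As a rough illustration of the API-level snapshot expiry described above, here is a Python sketch that talks to the local LXD REST API over its UNIX socket. The endpoint path, the expires_at field, and the socket location are assumptions drawn from LXD's REST API documentation rather than anything verified against 3.11, and the container name is hypothetical.

```python
# Illustrative sketch: create an LXD snapshot with (or without) an expiry
# via the REST API. Endpoint, field names, and socket path are assumptions;
# consult the LXD API documentation for your version.
import json
from typing import Optional

import requests_unixsocket  # pip install requests-unixsocket

LXD = "http+unix://%2Fvar%2Flib%2Flxd%2Funix.socket"

def create_snapshot(container: str, name: str, expires_at: Optional[str]) -> dict:
    """Create a snapshot; pass expires_at=None for a persistent snapshot."""
    session = requests_unixsocket.Session()
    resp = session.post(
        f"{LXD}/1.0/containers/{container}/snapshots",
        data=json.dumps({"name": name, "expires_at": expires_at}),
    )
    resp.raise_for_status()
    return resp.json()

# Snapshot that expires at a fixed time (hypothetical container "web01"):
print(create_snapshot("web01", "snap0", "2019-04-01T00:00:00Z"))
# Persistent snapshot, ignoring any configured auto-expiry:
print(create_snapshot("web01", "snap1", None))
```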


Apache Maven Javadoc Plugin version 3.1.0 released

Sugandha Lahoti
08 Mar 2019
2 min read
On Monday, the Apache Maven team announced the release of the Apache Maven Javadoc Plugin, version 3.1.0. The Javadoc Plugin uses the Javadoc tool to generate javadocs for a specified project, taking the parameter values it needs from the plugin configuration specified in the POM. The plugin can also be used to package the generated javadocs into a jar file for distribution.

What's new in Maven Javadoc Plugin version 3.1.0?

New features include support for aggregated reports at each level in the multi-module hierarchy. The dependency has also been upgraded to parent POM 32.

Changes made to the repository:
- The aggregate goal doesn't respect managed dependencies
- detectLinks may pass invalid URLs to javadoc(1)
- Invalid 'expires' attribute
- <link> entries that do not redirect are ignored, and those that point to a resource requiring an Accept header may be ignored

Other improvements:
- The plugin adds an 'aggregated-no-fork' goal
- The command line dump reveals proxy user/password in case of errors
- The plugin ignores module-info.java on earlier Java versions
- additionalparam documentation has been cleaned up
- element-list links from Java 10 dependencies are now supported
- Reports can now be generated in the Spanish locale
- The default value for removeUnknownThrows has been changed to true
- Proxy configuration now works properly for both HTTP and HTTPS
- Patterns are used for defaultJavadocApiLinks
- Typos have been fixed in the User Guide
- Groups parameter is not compatible with Surefire

Other fixes:
- Duplicate lines in the javadoc have been fixed
- JavadocOptionsXpp3Reader doesn't deserialize the placement element
- <additionalOption> input isn't escaped for double backslashes
- The links option is ignored in offline mode, even for local links in an argument file in tag
- Maven Javadoc Plugin can't get dependency from a third-party Maven repository

These are just a select few updates. For more details, head over to the mailing list archives.

Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
Twitter adopts Apache Kafka as their Pub/Sub System
Apache Spark 2.4.0 released


Mozilla considers blocking DarkMatter after Reuters reported its link with a secret hacking operation, Project Raven

Bhagyashree R
07 Mar 2019
3 min read
Back in January this year, Reuters reported in an investigative piece that DarkMatter was providing staff for a secret hacking operation called Project Raven. After reading this report, Mozilla is now considering whether to block DarkMatter from serving as one of its internet security providers.

The unit working for Project Raven was mostly made up of former US intelligence officials, who were allegedly conducting privacy-threatening operations for the UAE government. The team behind the project worked in a converted mansion in Abu Dhabi, which they called "the Villa". These operations included hacking the accounts of human rights activists, journalists, and officials from rival governments.

On February 25, DarkMatter CEO Karim Sabbagh, in a letter addressed to Mozilla, denied all the allegations reported by Reuters and insisted the company has nothing to do with Project Raven. Sabbagh wrote in the letter, "We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality."

Mozilla's response to the Reuters report

In an interview last week, Mozilla executives said that the Reuters report has raised concerns inside the company about DarkMatter misusing its authority to certify websites as safe. Mozilla is yet to decide whether it should deny DarkMatter this authority. Selena Deckelmann, a senior director of engineering for Mozilla, said, "We don't currently have technical evidence of misuse (by DarkMatter) but the reporting is strong evidence that misuse is likely to occur in the future if it hasn't already."

Deckelmann further shared that Mozilla is also concerned about the certifications DarkMatter has already granted, and may strip some or all of the 400 certifications that DarkMatter has granted to websites under a limited authority since 2017. Marshall Erwin, director of trust and security for Mozilla, said that DarkMatter could use its authority for "offensive cybersecurity purposes rather than the intended purpose of creating a more secure, trusted web."

A website is designated as secure if it is certified by an external authorized organization called a certificate authority (CA). The certifying organization is also responsible for securing the connection between an approved website and its users. To gain this authority, organizations need to apply to individual browser makers like Mozilla and Apple. DarkMatter has been pressing Mozilla for full authority to grant certifications since 2017. Granting it full authority would allow it to issue certificates to hackers impersonating real websites, including banks.

https://twitter.com/GossiTheDog/status/1103596200891244545

To know more about this news in detail, read the full story on Reuters' official website.

Mozilla Firefox will soon support 'letterboxing', an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey


Google’s pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis

Bhagyashree R
07 Mar 2019
6 min read
Every year Google conducts a pay equity analysis to determine whether employees are being paid equally for similar jobs regardless of their gender or race. Yesterday, it shared a blog post based on this analysis, according to which men were being paid less than women in one of its job levels. Google says that with this annual analysis it aims to make the modeled compensation amounts, and any changes to them made by managers, equitable across gender and race. Any significant discrepancies found in any job groups are then addressed by upwards adjustments across the group.

What Google's pay equity analysis revealed

Google's pay plan consists of three pay elements, namely salary, bonus, and equity refresh; the analysis did not address these three elements individually. Compensation is based on an employee's job role, performance, and location. Additionally, managers are allocated an extra pool of money, called a discretionary budget, which they can use to increase the pay elements of individual employees.

The analysis highlighted that the compensation discrepancy among Level 4 engineers happened because managers were allocating more discretionary funds to women than to men. "First, the 2018 analysis flagged one particularly large job code (Level 4 Software Engineer) for adjustments. Within this job code, men were flagged for adjustments because they received less discretionary funds than women," wrote Lauren Barbato, Lead Analyst for Pay Equity.

As an adjustment for this discrepancy, Google provided $9.7 million to a total of 10,677 employees, though it is not clear how many of those employees were men. A major portion of this adjustment fund (49 percent) was spent on discrepancies in offers to new hires. Google notes that the number of adjustments has risen dramatically compared to 2017, when only 228 employees were affected; it says this is because 2018's analysis covered up to 91 percent of Googlers, including new hires. "Secondly, this year we undertook a new hire analysis to look for any discrepancies in offers to new employees—this accounted for 49 percent of the total dollars spent on adjustments," reads the blog post.

This report comes at a time when Google is facing lawsuits for gender pay discrimination

Google and other tech giants have been under increasing public pressure to address gender bias in the workplace. Back in 2017, Google was sued by three ex-female employees who said the company was discriminating against female employees by systematically paying them less than men for similar jobs. One of the plaintiffs in the lawsuit, Kelly Ellis, mentioned that she was hired as a Level 3 employee despite having four years of experience. The Level 3 category is for new software engineers who have just graduated. She further revealed that a few weeks later a male engineer who, like Ellis, had graduated four years earlier was hired as a Level 4 employee. According to the suit, other men with the same level of qualifications, or even less, were also hired as Level 4 employees. Needless to say, she was surprised when she read about Google underpaying men and quickly correcting their compensation to maintain "equity". She tweeted:

https://twitter.com/justkelly_ok/status/1102686415643525120

Dylan Salisbury, a software development manager at Google, shared in a tweet that even though he raised a concern about Google's gender pay discrimination, it was not really taken into account:

https://twitter.com/dyfy/status/1102963758471643137

Jim Finberg, the lawyer representing the female employees, says this report in fact contradicts expert analysis of the company's own payroll data. Under the lawsuit, the plaintiffs are demanding justice for about 8,300 current and former employees. Finberg said in an email to WIRED, "It is very disappointing that, instead of addressing the real gender pay inequities adverse to women, Google has decided to increase the compensation of 8,000 male software engineers."

Along with the lawsuit by the three ex-female employees, in the same year Google was sued by the Department of Labor because it refused to share compensation data needed for an anti-discrimination audit. Later, the department found "systemic compensation disparities against women pretty much across the entire workforce."

Google presumably shared this report because its finding was counter-intuitive. But sharing select findings led to misinterpretations, suggesting either that Google doesn't have a pay equity problem or that, if it did, it has overcorrected it. Since the company itself admits in the post that this analysis is not adequate, it could have waited until the more comprehensive review is complete. Or it could have shared the full report or the raw data for the public to see, as sharing only part of the results does not show the context. The pay equity analysis only compares employees in the same job category, so it does not show the bigger picture; the results do not reflect race or gender differences in the hiring and promotion processes.

Further, in the blog post, Lauren Barbato mentions that in addition to this pay equity analysis, Google will be performing a more comprehensive review of leveling, performance ratings, and promotion processes. It will also analyze how employees are leveled when they are hired. "Our first step is a leveling equity analysis to assess how employees are leveled when they are hired, and whether we can improve how we level," says the blog post.

One Google employee shared in a tweet that women were being paid more discretionary funds because they were systematically being under-promoted:

https://twitter.com/ireneista/status/1102655232540962817

Amid all this backlash, some people were supportive of Google's decision. A Redditor commented, "So Google found out it was underpaying more men and hence gave men a higher raise pool? Sounds like they found a problem and took steps to fix it."

To read Google's announcement, check out their blog post.

Google refuses to remove the controversial Saudi government app that allows men to track women
Google's Project Zero reveals a "High severity" copy-on-write security flaw found in macOS kernel
Google finally ends Forced arbitration for all its employees


DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

Natasha Mathur
07 Mar 2019
3 min read
DeepMind researchers published a paper last week, titled 'Degenerate Feedback Loops in Recommender Systems'. In the paper, the researchers provide a new theoretical analysis examining the role of user dynamics and the behavior of recommender systems, which helps disentangle the echo chamber from the filter bubble effect.

https://twitter.com/DeepMindAI/status/1101514121563041792

Recommender systems aim to provide users with personalized product and information offerings. These systems take into consideration the user's personal characteristics and past behavior to generate a list of items personalized to the user's tastes. Although very successful, these systems raise concerns that they might lead to a self-reinforcing pattern of narrowing exposure and a shift in users' interests. These problems are often called the "echo chamber" and the "filter bubble".

In the paper, the researchers define the echo chamber as a user's interest being positively or negatively reinforced by repeated exposure to a certain category of items. For the "filter bubble", the researchers use the definition introduced by Pariser (2011), which states that recommender systems select limited content to serve users online.

The researchers consider a recommender system that interacts with a user over time. At every time step, the recommender system serves a number of items (or categories of items such as news articles, videos, or consumer products) to a user from a finite or countably infinite set of items. The goal of the recommender system is to provide the items a user might be interested in.

The interaction between the recommender system and the user

The paper also considers the fact that the user's interaction with the recommender system can change their interest in different items for the next interaction. Additionally, to further analyze the echo chamber and filter bubble effects in recommender systems, the researchers track when the user's interest changes to an extreme.

Furthermore, the researchers used a dynamical-systems framework to model the user's interest, treating the interest extremes of the user as the degeneracy points of the system. For the recommender system, the researchers discussed how three independent factors in system design influence the degeneracy speed: model accuracy, the amount of exploration, and the growth rate of the candidate pool. According to the researchers, continuous random exploration, along with linearly growing the candidate pool, is the best defense against system degeneracy (a toy illustration of this effect follows below).

Although the analysis is quite effective, it still has two main limitations. The first is that user interests are hidden variables that are not observed directly, so a good measure of user interest is needed in practice to reliably study the degeneration process. The second is that the researchers assume items and users are independent of each other; extending the theoretical analysis to possibly mutually dependent items and users is left for future work.

For more information, check out the official research paper.

Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
Blizzard set to demo Google's DeepMind AI in StarCraft 2
Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
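As a purely illustrative toy, and not the model from the DeepMind paper, the following Python sketch shows how a greedy recommender that always serves the currently most-preferred category concentrates a user's interest on a single category, while random exploration slows that degeneracy:

```python
# Toy illustration of interest degeneracy (not the paper's model):
# each served item reinforces the user's interest in its category.
import random

def simulate(explore_prob, steps=500, categories=5, seed=0):
    """Return the normalized interest distribution after `steps` recommendations."""
    rng = random.Random(seed)
    interest = [1.0] * categories
    for _ in range(steps):
        if rng.random() < explore_prob:
            served = rng.randrange(categories)                          # random exploration
        else:
            served = max(range(categories), key=interest.__getitem__)   # greedy recommendation
        interest[served] += 1.0                                         # exposure reinforces interest
    total = sum(interest)
    return [round(w / total, 3) for w in interest]

print("greedy only:      ", simulate(explore_prob=0.0))
print("with exploration: ", simulate(explore_prob=0.3))
```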

Donald Trump called Apple CEO Tim Cook 'Tim Apple'

Richard Gall
07 Mar 2019
3 min read
Conventional wisdom suggests that politicians are eager to please business leaders, particularly those at the forefront of the tech industry. But Donald Trump isn't, as most of the world probably knows by now, a normal politician - that's why we shouldn't be surprised that yesterday (March 6) he called Apple CEO Tim Cook 'Tim Apple'. The 'incident' took place at an American Workforce Policy Advisory Board meeting, a group that is working together to plan the future policy direction for the U.S. to ensure economic stability and growth.

https://twitter.com/sokane1/status/1103421841505505280

The discussion, attended by a small group of media representatives, began well for Trump and Tim Cook, with Trump paying tribute to the Apple CEO. Trump, remembering Tim Cook's name, described him as "a friend of mine." He went on to say: "he's a friend because he does a great job. I mean, we want to get things done. Employs so many people. Brought a lot of money back into our country because of the new tax law, and he's spending that money very wisely. And just done an incredible job."

Trump's mistake came later, when he once again returned to Cook and Apple, paying tribute to Apple's success and Cook's decision to invest and grow in the U.S. while also apparently attempting to steal just a little bit of credit for himself. Trump said: "People like Tim — you're expanding all over and doing things that I really wanted you to do right from the beginning. I used to say, 'Tim, you gotta start doing it here,' and you really have you've really put a big investment in our country."

Then came the killer line: "We appreciate it, Tim Apple."

Lesser people might have been mortified at being misnamed by the President of the United States, but you'd think Tim Cook has a strong enough sense of self thanks to, well, being the CEO of Apple. If you watch the video, he barely misses a beat.

Trump has a history of getting names wrong. Just over a year ago, the President called Lockheed Martin CEO Marillyn Hewson 'Marillyn Lockheed'. There's clearly a pattern emerging to Donald President's mistakes...

https://twitter.com/dave_brown24/status/976882203459227649

What the White House says Donald Trump said

A briefing statement of the meeting was published on the White House website shortly after the event finished. Here, Trump's mistake is reinterpreted as half-formed and unfinished sentences, thanks to the clever use of an em dash: "We appreciate it very much, Tim — Apple." Even if you concede that this is what happened, it's still strange for a President to have such a relaxed approach to sentence formation at such an important event. But if you return to the video, there's no pause in Trump's sentence that suggests an em dash is appropriate.

Read next: Experts respond to Trump's move on signing an executive order to establish the American AI Initiative

What was the American Workforce Board meeting actually about?

A Trump gaffe is always going to steal headlines. However, there was a serious agenda to the meeting insofar as it was an opportunity to discuss the future of the U.S. economy and how to develop a workforce that can power it. A focal point was how Trump could balance his harsh anti-immigration rhetoric and actions with the importance of immigration to the U.S. workforce. "We have to bring people in," Trump said. "We want them to be people based on merit, and we want them to come in legally."


Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser

Bhagyashree R
07 Mar 2019
2 min read
Yesterday, ZDNet shared that Mozilla will be adding a new anti-fingerprinting technique called letterboxing to Firefox 67, which is set to release in May this year. Letterboxing is part of the Tor Uplift project that started back in 2016 and is currently available to Firefox Nightly users.

As part of the Tor Uplift project, the team is slowly bringing the privacy-focused features of the Tor Browser to Firefox. For instance, Firefox 55 came with support for a Tor Browser feature called First-Party Isolation (FPI). This feature prevents ad trackers from using cookies to track user activity by separating cookies on a per-domain basis.

What is letterboxing and why is it needed?

The dimensions of a browser window can act as a big source of finger-printable data for advertising networks. These networks can use browser window sizes to create user profiles and track users as they resize their browser and move across new URLs and browser tabs. To maintain users' online privacy, it is important to protect this window dimension data continuously, even as users resize or maximize their window or enter fullscreen.

Letterboxing masks the real dimensions of the browser window by keeping the window width and height at multiples of 200px and 100px during resize operations, and then adding a gray space at the top, bottom, left, or right of the current page (a rough sketch of the rounding idea is shown below). Advertising code tracking window resize events reads the rounded dimensions and sends them to its server, and only then does Firefox remove the gray spaces. In this way, the advertising code is tricked into reading incorrect window dimensions.

Here is a demo of letterboxing showing how exactly it works: https://www.youtube.com/watch?&v=TQxuuFTgz7M

The letterboxing feature is not enabled by default. To enable it, go to the 'about:config' page in the browser, enter "privacy.resistFingerprinting" in the search box, and toggle the browser's anti-fingerprinting features to "true".

To know more about letterboxing in detail, check out ZDNet's website.

Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
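As a rough illustration of the rounding idea described above, and not Firefox's actual implementation, this small Python sketch snaps a window size down to multiples of 200x100 pixels and reports the leftover margins that the gray letterbox bars would cover:

```python
# Illustrative sketch of letterboxing-style dimension rounding
# (not Mozilla's implementation).
def letterbox(width, height, step_w=200, step_h=100):
    """Round the reported window size down to the nearest step and return
    the rounded size plus the margins hidden by gray letterbox bars."""
    reported_w = max(step_w, (width // step_w) * step_w)
    reported_h = max(step_h, (height // step_h) * step_h)
    return {
        "reported": (reported_w, reported_h),
        "margins": (width - reported_w, height - reported_h),
    }

# A 1366x768 window would be reported as 1200x700, leaving 166x68 pixels
# of gray letterbox space.
print(letterbox(1366, 768))
```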


Google releases a fix for the zero day vulnerability in its Chrome browser while it was under active attack

Melisha Dsouza
07 Mar 2019
3 min read
Yesterday, Google announced that a patch for Chrome released last week was actually a fix for an active zero-day discovered by its security team. The bug, tagged as CVE-2019-5786, was originally discovered by Clement Lecigne of Google's Threat Analysis Group on Wednesday, February 27th, and is currently under active attack.

The threat advisory states that the vulnerability involves a memory mismanagement bug in a part of the Chrome browser called FileReader, a programming tool that allows web developers to pop up menus and dialogs asking a user to choose from a list of local files, for example to upload a file or add an attachment to their webmail. Attackers can use this vulnerability to achieve remote code execution (RCE).

ZDNet states that the bug is a type of memory error that happens when an app tries to access memory after it has been freed/deleted from Chrome's allocated memory. If this type of memory access operation is mishandled, it can lead to the execution of malicious code. Chaouki Bekrar, CEO of exploit vendor Zerodium, tweeted that the vulnerability allegedly allows malicious code to escape Chrome's security sandbox and run commands on the underlying OS.

https://twitter.com/cBekrar/status/1103138159133569024

Without divulging further information on the bug, Google says: "Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven't yet fixed."

Further, Forbes reports that Satnam Narang, a senior research engineer at Tenable, has said that it is a "Use-After-Free (UAF) vulnerability in FileReader, an application programming interface (API) included in browsers to allow web applications to read the contents of files stored on a user's computer." Catalin Cimpanu, a security reporter at ZDNet, suggests that there are malicious PDF files in the wild being used to exploit this vulnerability. "The PDF documents would contact a remote domain with information on the users' device --such as IP address, OS version, Chrome version, and the path of the PDF file on the user's computer", he added.

The fix for this zero-day

Users are advised to update Chrome across all platforms.

https://twitter.com/justinschuh/status/1103087046661267456

Check out the new version of Chrome for Android and the patch for Chrome OS. Mac, Windows, and Linux users are advised to manually initiate the download if it is yet to be pushed to their device. Head over to chrome://settings/help to check the current version of Chrome on your system. The URL will also do an update check at the same time, just in case any recent auto-updates have failed.

Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
Google's new Chrome extension 'Password CheckUp' checks if your username or password has been exposed to a third party breach
Hacker duo hijacks thousands of Chromecasts and Google smart TVs to play PewDiePie ad, reveals bug in Google's Chromecast devices!


OpenAI and Google introduce ‘Activation atlases’, a technique for visualizing neuron interaction in AI systems

Melisha Dsouza
07 Mar 2019
3 min read
OpenAI and Google have introduced a new technique called "activation atlases" for visualizing the interactions between neurons. The technique aims to provide a better understanding of the internal decision-making processes of AI systems and to identify their weaknesses and failures. Activation atlases build on 'feature visualization', a technique for understanding what the hidden layers of neural networks can represent, and in turn make machine learning more accessible and interpretable.

"Because essential details of these systems are learned during the automated training process, understanding how a network goes about its given task can sometimes remain a bit of a mystery", says Google. Activation atlases answer the question of what an image classification neural network actually "sees" when provided with an image, giving users an insight into the hidden layers of a network. OpenAI states, "With activation atlases, humans can discover unanticipated issues in neural networks--for example, places where the network is relying on spurious correlations to classify images, or where re-using a feature between two classes lead to strange bugs. Humans can even use this understanding to 'attack' the model, modifying images to fool it."

How activation atlases work

Activation atlases are built from a convolutional image classification network, InceptionV1, trained on the ImageNet dataset. This network progressively evaluates image data through about ten layers. Every layer is made up of hundreds of neurons, and every neuron activates to varying degrees on different types of image patches. An activation atlas is built by collecting the internal activations from each of these layers of the neural network across many images. These activations, represented by a complex set of high-dimensional vectors, are projected into useful 2D layouts via UMAP. The activation vectors then need to be aggregated into a more manageable number. To do this, a grid is drawn over the 2D layout; for every cell in the grid, all the activations that lie within the boundaries of that cell are averaged. 'Feature visualization' is then used to create the final representation. (A rough sketch of the projection and averaging steps appears below.)

An example of an activation atlas

Here is an activation atlas for just one layer in a neural network:
An overview of an activation atlas for one of the many layers within Inception v1 (Source: Google AI blog)
Detectors for different types of leaves and plants (Source: Google AI blog)
Detectors for water, lakes and sandbars (Source: Google AI blog)

Researchers hope that this paper will provide users with a new way to peer into convolutional vision networks. This, in turn, will enable them to see the inner workings of complicated AI systems in a simplified way.

Introducing Open AI's Reptile: The latest scalable meta-learning Algorithm on the block
AI Village shares its perspective on OpenAI's decision to release a limited version of GPT-2
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists
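The aggregation steps described above (collect activations, project them to 2D with UMAP, average within grid cells) can be sketched roughly in Python. This is an illustrative outline using the umap-learn package, with random vectors standing in for real network activations; it is not the authors' code.

```python
# Rough sketch of the atlas aggregation described above: project
# high-dimensional activations to 2D with UMAP, then average the vectors
# that fall into each grid cell. Random data stands in for real activations.
import numpy as np
import umap  # pip install umap-learn

activations = np.random.rand(10_000, 512)      # stand-in for layer activations
embedding = umap.UMAP(n_components=2).fit_transform(activations)

grid_size = 20
# Normalize the 2D layout to [0, 1) and assign each point to a grid cell.
span = np.ptp(embedding, axis=0) + 1e-9
norm = (embedding - embedding.min(axis=0)) / span
cells = np.minimum((norm * grid_size).astype(int), grid_size - 1)

averaged = {}
for gx in range(grid_size):
    for gy in range(grid_size):
        mask = (cells[:, 0] == gx) & (cells[:, 1] == gy)
        if mask.any():
            # Each averaged vector would then be rendered via feature visualization.
            averaged[(gx, gy)] = activations[mask].mean(axis=0)

print(f"{len(averaged)} grid cells with averaged activations")
```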

Waymo to sell its 3D perimeter LIDAR sensors to companies outside of self-driving

Natasha Mathur
07 Mar 2019
3 min read
Waymo, formerly Google's self-driving car project, announced yesterday that it is making one of its 3D LIDAR (light detection and ranging) sensors, called Laser Bear Honeycomb, available to select partners. "Offering this LIDAR to partners helps spur the growth of applications outside of self-driving cars and also propels our business forward", states the Waymo team on Medium.

Waymo started developing its own set of sensors in 2011, including three different types of LIDARs. LIDAR is a remote sensing method that measures distance using pulses of laser light. Waymo uses a medium-range LIDAR located on top of the car, and has also developed a short-range and a long-range LIDAR. The Waymo team states that these custom LIDARs are what enabled Waymo to put its self-driving cars on the road. Now, Waymo is set on expanding the use of these sensors outside of self-driving, into areas such as robotics, security, and agricultural technology.

The Waymo team states that the Laser Bear Honeycomb is a best-in-class perimeter sensor. It is the same short-range sensor used around the bumper of Waymo's self-driving vehicles.

Key features of the Laser Bear Honeycomb LIDAR

The Laser Bear Honeycomb comes with an outstanding set of features, including a wide field of view, multiple returns per pulse, and a minimum range of zero.

Wide field of view
Most 3D LIDARs have a vertical field of view (FOV) of just 30°. The Laser Bear Honeycomb has a vertical FOV of 95°, along with a 360° horizontal FOV. This means that one Honeycomb can do the job of three other 3D sensors.

Multiple returns per pulse
When it sends out a pulse of light, the Laser Bear Honeycomb can see up to four different objects in the laser beam's line of sight. For instance, it can spot both the foliage in front of a tree branch and the tree branch itself, giving a more detailed view of the environment and uncovering objects that might otherwise be missed.

Minimum range of zero
The Laser Bear Honeycomb has a minimum range of zero, meaning it can immediately track objects that are right in front of the sensor. It also offers other capabilities such as near-object detection and avoidance.

For more information, check out the official Waymo blog post.

Alphabet's Waymo to launch the world's first commercial self driving cars next month
Anthony Levandowski announces Pronto AI and makes a coast-to-coast self-driving trip
Aurora, a self-driving startup, secures $530 million in funding from Amazon, Sequoia, and T.Rowe Price among others


Microsoft open sources the Windows Calculator code on GitHub

Amrata Joshi
07 Mar 2019
3 min read
Over the past couple of years, Microsoft has been supporting open source projects; it even joined the Open Invention Network. Last year, Microsoft made its Windows 3.0 File Manager code generally available. Yesterday, the team at Microsoft announced that it is releasing its Windows Calculator program as an open source project on GitHub under the MIT License. Microsoft is making the source code, build system, unit tests, and product roadmap available to the community.

It will be interesting for developers to explore how different parts of the Calculator app work and to get to know the Calculator logic. Microsoft is also encouraging developers to participate in the project by bringing new perspectives to the Calculator code. The company highlighted that developers can contribute by participating in discussions, fixing or reporting issues, prototyping new features, and addressing design flaws. By reviewing the Calculator code, developers can explore the latest Microsoft technologies like XAML, the Universal Windows Platform, and Azure Pipelines. They can also learn about Microsoft's full development lifecycle and can even reuse the code to build their own projects. Microsoft will also be contributing custom controls and API extensions used in Calculator to projects like the Windows UI Library and the Windows Community Toolkit. The official announcement reads, "Our goal is to build an even better user experience in partnership with the community."

With the recent updates from Microsoft, it seems that the company is becoming more and more developer friendly. Just two days ago, the company updated its App Developer Agreement; as per the new policy, developers will now get up to a 95% share.

According to a few users, Microsoft might collect user information via this new project, and the telemetry section of the GitHub post states as much. The post reads, "This project collects usage data and sends it to Microsoft to help improve our products and services. Read our privacy statement to learn more. Telemetry is disabled in development builds by default, and can be enabled with the SEND_TELEMETRY build flag."

One user commented on Hacker News, "Well it must include your IP address too, and they know the time and date it was received. And then it gets bundled with the rest of the data they collected. I don't even want them knowing when I'm using my computer. What gets measured gets managed."

A few users have different perspectives. Another comment reads, "Separately, I question whether anyone looking at the telemetry on the backend. In my experience, developers add this stuff because they think it will be useful, then it never or rarely gets looked at. A telemetry event here, a telemetry event there, pretty soon you're talking real bandwidth."

Check out Microsoft's blog post for more details on this news.

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military
Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more


Microsoft Store updates its app developer agreement, to give developers up to 95% of app revenue

Amrata Joshi
07 Mar 2019
3 min read
Last year, Microsoft announced its new revenue split figures at Build 2018. The new policy was expected to roll out by the end of 2018; however, it was only two days ago that the Microsoft Store team updated its App Developer Agreement (ADA), the revenue-sharing agreement. Consumer app developers will now earn up to a 95 percent cut of the revenue on app sales, excluding games, and an 85 percent cut on the low end.

The 95 percent share is earned only when a customer uses a deep link (tracked by a CID, or Connection ID) to purchase the app. If customers are directed by Microsoft to the app through a collection or "any other owned Microsoft properties (tracked by an OCID)," developers will get an 85 percent share. This fee structure applies to purchases on Windows Mixed Reality, Windows phone, Windows 10 PCs, and Surface Hub, and excludes purchases made on Xbox consoles. If there is no CID or OCID attributed to the purchase, for instance in the case of a web search, developers will get the 95 percent share. (A small illustrative helper for the split is shown below.)

A few Hacker News users have appreciated the new revenue split policy, saying the company has made a fair move. One user commented on Hacker News, "It seems like a reasonable shifting of costs. If you rely mostly on Microsoft for acquiring new customers, then Microsoft should get a little bit more of a cut, and if you rely mostly on your own marketing methods, then it should get less." Another comment reads, "It's an insanely good deal. MSFT has to be losing money on that."

According to a few others, there is also a benefit around organic search: app stores don't usually see much organic search, and this move might give the company a better idea of the organic search being done on its store, with the 5%-15% cut as an add-on. According to a few users, the deal is equally beneficial for Microsoft, as the company earns a cut as well. A comment reads, "Like all digital goods, the marginal cost of MSFT doing this is zero. I don't think they are losing money on this, in terms of pure margins, it's probably quite lucrative (though in absolute revenue, maybe not so much)." Another comment reads, "I actually think this is a brilliant insight on the side of Microsoft, by inverting this model they get a non-zero slice of a pie they previously did not have."

This may have an effect on how other tech companies and developers operate. Other companies may feel pressured by Microsoft's move, considering how much developer confidence the company has gained. To know more about this news, check out Microsoft's blog post.

Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military
Microsoft adds new features to Microsoft Office 365: Microsoft threat experts, priority notifications, Desktop App Assure, and more
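To make the split concrete, here is a small illustrative Python helper based on the attribution cases described in the article; the labels are simplified for the example and are not part of any Microsoft API.

```python
# Illustrative helper for the revenue split described above
# (simplified; not a Microsoft API).
def developer_payout(sale_amount, attribution):
    """Return the developer's share of a non-game app sale.

    attribution:
      "cid"  - customer used the developer's deep link (95% share)
      "ocid" - customer was directed by a Microsoft property (85% share)
      "none" - no CID/OCID attributed, e.g. organic web search (95% share)
    """
    share = {"cid": 0.95, "ocid": 0.85, "none": 0.95}[attribution]
    return round(sale_amount * share, 2)

print(developer_payout(9.99, "cid"))   # 9.49
print(developer_payout(9.99, "ocid"))  # 8.49
```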

GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch

Natasha Mathur
06 Mar 2019
2 min read
The GitHub team released a new Go library, Vulcanizer, that interacts with an Elasticsearch cluster, yesterday. Vulcanizer is not a full-fledged Elasticsearch client; rather, it aims to provide a high-level API to help with common tasks associated with operating an Elasticsearch cluster, such as querying the health status of the cluster, migrating data off of nodes, and updating cluster settings.

GitHub uses Elasticsearch as the core technology behind its search services. GitHub has already released the Elastomer library for Ruby, and it uses the Elastic library for Go by user olivere. However, the GitHub team wanted a high-level API that corresponded to the common operations on a cluster, such as disabling allocation or draining the shards from a node. They wanted a library focused on the administrative operations, one that could be easily used by their existing tooling.

Since Go's structure encourages the construction of composable software, the team decided it was a good fit for Elasticsearch: almost any operation can be carried out through Elasticsearch's HTTP interface, but you don't want to be writing that JSON by hand. Vulcanizer is well suited to getting the nodes of a cluster, updating the max recovery cluster settings, and safely adding or removing nodes from the exclude settings, making sure that shards don't unexpectedly allocate onto a node. Vulcanizer also helps build ChatOps tooling around Elasticsearch quickly for common tasks.

The GitHub team states that having all the Elasticsearch functionality in its own library, Vulcanizer, helps keep its internal applications slim and isolated.

For more information, check out the official GitHub Vulcanizer post.

GitHub increases its reward payout model for its bug bounty program
GitHub launches draft pull requests
GitHub Octoverse: top machine learning packages, languages, and projects of 2018


Dojo 5.0 releases with extended support for TypeScript 2.6.x to 3.2.x, condition polyfills, and more!

Bhagyashree R
06 Mar 2019
3 min read
Last week, the team behind the Dojo Toolkit announced the release of Dojo 5.0. This release comes with extended support for TypeScript versions 2.6.x to 3.2.x, conditional polyfills, better Build Time Rendering, and more. Dojo is a JavaScript toolkit that equips developers with everything they need to build a web app, such as language utilities and UI components.

New features and enhancements in Dojo 5.0

Conditional polyfills
This release provides a better user experience by introducing an out-of-the-box solution for building and loading polyfills in Dojo applications. A polyfill is a piece of code that implements a feature web browsers do not support natively. The Dojo build produces two platform bundles that are loaded only if two key conditions are fulfilled: first, the shim module is imported somewhere in the application, and second, the user's browser does not natively support the feature. This means serving less JavaScript, improving application performance without compromising on features.

Better Build Time Rendering (BTR)
This version comes with various stability and feature enhancements to BTR, such as Dojo Blocks, support for the StateHistory API, multiple-page HTML generation, and better error messaging. BTR has been supported in Dojo via the cli-build-app command since the initial 2.0.0 release. It renders an application to HTML during the build and inlines the critical CSS, enabling the application to effectively serve static HTML pages. This brings some of the advantages of server-side rendering (SSR), such as performance and SEO, while eliminating the complexity of running a server to support full SSR.

Dojo Blocks
Dojo Blocks is a new mechanism that allows you to execute code in Node.js as part of the build. A Dojo Block module can do things like read a group of markdown files, transform them into VNodes, and make them available to render in the application, all at build time. The results of a Block module can be written to a cache that can be used at runtime in the browser.

Simplifying testing with Assertion Templates
Dojo 5.0 comes with Assertion Templates, which make testing widgets easier. Previously, developers had to manually curate each 'expectedRender' result per test. Assertion Templates solve this problem by allowing developers to easily modify and layer outputs for the expected render.

To read the entire list of updates in Dojo 5.0, check out the official announcement.

Dojo 4.0 released with support for Progressive Web Apps, a redesigned Virtual DOM, and more!
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls