
Tech News

3711 Articles

DeepMasterPrints: ‘master key’ fingerprints made by a neural network can now fake fingerprints

Prasad Ramesh
15 Nov 2018
3 min read
New York University researchers have found a way to generate artificial fingerprints, using a neural network, that can be used to spoof fingerprint recognition systems. They presented their work in a paper titled DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via Latent Variable Evolution.

The vulnerability in fingerprint sensors

Fingerprint recognition systems are vulnerable to dictionary attacks based on MasterPrints. MasterPrints are like master keys that can match a large number of fingerprints. Previous work on MasterPrints was done at the feature level, but this work, dubbed DeepMasterPrints, achieves much higher attack accuracy and can generate complete images. The method demonstrated in the paper, Latent Variable Evolution (LVE), is based on training a Generative Adversarial Network (GAN) on a set of real fingerprint images. A stochastic search is then used to find latent input variables to the generator network that increase the rate of impostor matches assessed by a fingerprint recognizer.

Small fingerprint sensors pose a risk

Aditi Roy, one of the authors of the paper, exploited an observation: smartphones have small areas for fingerprint recording and recognition, so the whole fingerprint is not recorded at once; fingerprints are partially recorded and authenticated. Also, some features are more common across fingerprints than others. She demonstrated that MasterPrints can be obtained from real fingerprint images or synthesized. With this exploit, 23% of the subjects in the dataset used could be spoofed at a 0.1% false match rate, and the generated DeepMasterPrints spoofed 77% of the subjects at a 1% false match rate. This shows the danger of using small fingerprint sensors.

For a DeepMasterPrint, a synthetic fingerprint image needed to be created that can fool a fingerprint matcher. The condition was that the matcher should match that image to many different identities, in addition to recognizing the image as a fingerprint. The paper presents a method for creating DeepMasterPrints using a neural network that learns to generate fingerprint images. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is used to search the input space of the trained neural network, and the best fingerprint image is then selected.

Conclusion

Partial fingerprint images can be generated and used to launch dictionary attacks against a fingerprint verification system. A GAN is trained on a dataset of fingerprints, then LVE searches the latent variables of the generator network for a fingerprint image that maximizes the number of matches. The attack is only effective against a large pool of different identities, meaning attacks on specific individuals are less likely. Results on both inked images and sensor images show that the system is robust and independent of artifacts and datasets. For more details, read the research paper.

Tesla v9 to incorporate neural networks for autopilot
Alphabet’s Waymo to launch the world’s first commercial self driving cars next month
UK researchers have developed a new PyTorch framework for preserving privacy in deep learning
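To make the Latent Variable Evolution idea concrete, here is a minimal sketch of its search loop. This is not the authors' code: `generate` and `match_score` are toy stand-ins for the trained GAN generator and the fingerprint matcher, and a simplified (1+λ) evolution strategy stands in for the full CMA-ES the paper uses. The objective is the same, though: evolve one latent vector whose generated "print" matches as many enrolled identities as possible.

```python
import random

# Toy stand-ins for the paper's components (all names here are
# illustrative, not taken from the authors' implementation):
#  - generate(z): plays the role of the trained GAN generator,
#    mapping a latent vector z to a "fingerprint" feature vector.
#  - match_score(fp, identity): plays the role of the matcher,
#    returning a similarity score against one enrolled identity.

random.seed(0)
LATENT_DIM = 8
IDENTITIES = [[random.gauss(0, 1) for _ in range(LATENT_DIM)]
              for _ in range(50)]

def generate(z):
    # Identity mapping as a placeholder for the generator network.
    return z

def match_score(fp, identity):
    # Negative squared distance as a placeholder similarity score.
    return -sum((a - b) ** 2 for a, b in zip(fp, identity))

def fitness(z, threshold=-6.0):
    # LVE objective: how many enrolled identities does this single
    # synthetic print match above the acceptance threshold?
    fp = generate(z)
    return sum(match_score(fp, ident) > threshold for ident in IDENTITIES)

def evolve(generations=200, pop=20, sigma=0.3):
    # Simplified (1+lambda) evolution strategy in place of CMA-ES:
    # perturb the best latent vector and keep the best-matching child.
    best = [0.0] * LATENT_DIM
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop):
            child = [x + random.gauss(0, sigma) for x in best]
            f = fitness(child)
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit

masterprint, matched = evolve()
print(f"identities matched by one synthetic print: {matched}/{len(IDENTITIES)}")
```

Even with these toy stand-ins, the loop illustrates why the attack works: the search does not target one victim but drifts the latent vector toward a region whose output is "close enough" to many identities at once.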


What is Facebook hiding? New York Times reveals Facebook’s insidious crisis management strategy

Melisha Dsouza
15 Nov 2018
9 min read
Today has been one of the worst days in Facebook’s history. As if the plummeting stock that closed on Wednesday at just $144.22 was not enough, Facebook is now facing backlash over its leadership’s morals. Yesterday, the New York Times published a scathing expose on how Facebook wilfully downplayed its knowledge of the 2016 Russian meddling in US elections via its platform. It also alleges that over the course of two years, Facebook adopted a ‘delay, deny and deflect’ strategy under the shrewd leadership of Sheryl Sandberg and the disconnected-from-reality Facebook CEO, Mark Zuckerberg, to continually maneuver through the chain of scandals the company has been plagued with. In the following sections, we dissect the NYT article and also look at other related developments that have been triggered in the wake of this news.

Facebook, with over 2.2 billion users globally, has accumulated one of the largest-ever repositories of personal data, including user photos, messages and likes, that propelled the company into the Fortune 500. Its platform has been used to make or break political campaigns and advertising businesses, and to reshape daily life around the world. Constant questions have been raised about the security of the platform, thanks to the various controversies surrounding Facebook over more than two years. While Facebook’s responses to these scandals (“we should have done better”) have not convinced many, Facebook has never been considered ‘knowingly evil’ and continued to enjoy the benefit of the doubt. The Times article now changes that.

Crisis management at Facebook: delay, deny, deflect

The report by the New York Times is based on anonymous interviews with more than 50 people, including current and former Facebook executives and other employees, lawmakers and government officials, lobbyists and congressional staff members.

As Facebook has grown over the past few years, so has the hate speech, bullying and other toxic content on the platform. It hasn’t fully taken responsibility for what users post, turning a blind eye and carrying on as a platform, not a publisher. The report highlights the dilemma Facebook’s leadership faced while deciding on candidate Trump’s statement on Facebook in 2015 calling for a “total and complete shutdown” on Muslims entering the United States. After a lengthy discussion, Mr. Schrage (a prosecutor whom Ms. Sandberg had recruited) concluded that Mr. Trump’s language had “not violated Facebook’s rules”. Mr. Kaplan (Facebook’s Vice President of global public policy) argued that Mr. Trump was an important public figure, and that shutting down his account or removing the statement would be perceived as obstructing free speech, leading to a conservative backlash. Sandberg decided to allow the post on Facebook.

In the spring of 2016, Mr. Alex Stamos (Facebook’s former security chief) and his team discovered Russian hackers probing Facebook accounts of people connected to the presidential campaign, along with Facebook accounts linked to Russian hackers who messaged journalists to share information from the stolen emails. Mr. Stamos directed a team to scrutinize the extent of Russian activity on Facebook. By January 2017, it was clear that there was more to the Russian activity on Facebook. Mr. Kaplan believed that if Facebook implicated Russia further, Republicans would “accuse the company of siding with Democrats”, and that pulling down the Russians’ fake pages would offend regular Facebook users as having been deceived. To summarize their findings, Mr. Zuckerberg and Ms. Sandberg released a blog post on 6th September 2017. The post had little information on fake accounts or the organic posts created by Russian trolls that had gone viral on Facebook. You can head over to the New York Times to read in depth about what went on in the company after the scandals were reported.

What is also surprising is that instead of offering a clear explanation of the matters at hand, the company was more focused on taking a stab at those who made statements against Facebook. Take, for instance, Apple CEO Tim Cook, who criticized Facebook in an MSNBC interview and called Facebook a service that traffics “in your personal life.” According to the Times, Mark Zuckerberg reportedly told his employees to only use Android phones in light of this statement.

Over 70 human rights groups write to Zuckerberg

Fresh reports have now emerged that the Electronic Frontier Foundation, Human Rights Watch, and over 70 other groups have written an open letter to Mark Zuckerberg asking him to adopt a clearer “due process” system for content takedowns. “Civil society groups around the globe have criticized the way that Facebook’s Community Standards exhibit bias and are unevenly applied across different languages and cultural contexts,” the letter says. “Offering a remedy mechanism, as well as more transparency, will go a long way toward supporting user expression.”

Zuckerberg rejects video call for answers from five parliaments

“The fact that he has continually declined to give evidence, not just to my committee, but now to an unprecedented international grand committee, makes him look like he’s got something to hide.” - DCMS chair Damian Collins

On October 31st, Zuckerberg was invited to give evidence before a UK parliamentary committee on 27th November, with politicians from Canada co-signing the invitation. The committee wanted answers about the “platform’s malign use in world affairs and democratic process”. Zuckerberg rejected the request on November 2nd. In yet another attempt to obtain answers, MPs from Argentina, Australia, Canada, Ireland and the UK joined forces with the UK’s Digital, Culture, Media and Sport committee last week, requesting a video call with Mark Zuckerberg. However, in a letter to the DCMS, Facebook declined the request, stating: “Thank you for the invitation to appear before your Grand Committee. As we explained in our letter of November 2nd, Mr. Zuckerberg is not able to be in London on November 27th for your hearing and sends his apologies.” The letter does not explain why Zuckerberg is unavailable to speak to the committee via a video call. It summarizes a list of Facebook activities and related research that intersect with the topics of election interference, political ads, disinformation and security, but makes no mention of the company’s controversial actions and their after-effects.

Diverting scrutiny from the matter?

According to the NYT report, Facebook expanded its relationship with a Washington-based public relations consultancy with Republican ties in October 2017, after a year of external criticism over its handling of Russian interference on its social network. The firm last year wrote dozens of articles criticizing Facebook’s rivals Google and Apple while diverting focus from the impact of Russian interference on Facebook. It pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement, according to the New York Times. The PR team also reportedly pressed reporters to explore Soros’ financial connections with groups that protested Facebook at Congressional hearings in July.

How are employees and users reacting?

According to the Wall Street Journal, only 52 percent of employees say that they’re optimistic about Facebook’s future, compared with 84 percent who were optimistic about working at Facebook in 2017. Just under 29,000 workers (of more than 33,000 in total) participated in the biannual pulse survey. In the most recent poll, conducted in October, the numbers have fallen, like the company’s tumbling stock, compared to last year’s survey. Just over half feel Facebook is making the world a better place, down 19 percentage points from last year. 70 percent said they were proud to work at Facebook, down from 87 percent, and overall favorability towards the company dropped from 73 to 70 percent since last October’s poll. Around 12 percent apparently plan to leave within a year.

Hacker News has comments from users stating that Facebook “needs to get its act together” and is “in need of serious reform”. Some also feel that “This Times piece should be taken seriously by FB, its shareholders, employees, and users. With good sourcing, this paints a very immature picture of the company, from leadership on down to the users”. Readers have pointed out that Facebook’s integrity is questionable and hope that “employees are doing what they can to preserve their own integrity with their friends/family/community, and that this push is strong enough to shape the development of the platform for the better, instead of towards further addictive, attention-grabbing, echo chamber construction.”

Facebook’s reply to the New York Times report

Today, Facebook published a post in response to the Times’ report, listing a number of alleged inaccuracies in the piece. Facebook asserts that it has been closely following the Russian investigation, and gives its reasons for not citing Russia’s name in the April 2017 white paper. The company has also addressed the backlash it faced over the “Muslim ban” statement by Trump, which was not taken down. The post expresses strong support for Mark and Sheryl in the fight against false news and information operations on Facebook, along with reasons for Sheryl championing sex trafficking legislation. Finally, in response to the controversy over advising employees to use only Android, the company clarified that it was because “it is the most popular operating system in the world”. On hiring the PR firm Definers, Facebook says: “We ended our contract with Definers last night. The New York Times is wrong to suggest that we ever asked Definers to pay for or write articles on Facebook’s behalf – or to spread misinformation.”

We can’t help but notice that, again, Facebook is defending itself against allegations without providing a proper explanation for why it finds itself in controversies time and again. It is also surprising that the contract with Definers came to an abrupt end just before the Times report went live. What Facebook has additionally done is emphasize improved security practices at the company, something it talks about every time it faces a controversy. It is time to stop delaying, denying and deflecting. Instead, atone, accept, and act responsibly.

Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Facebook General Matrix Multiplication (FBGEMM), high-performance kernel library, open sourced, to run deep learning models efficiently


Uber announces the 2019 Uber AI Residency

Amrata Joshi
15 Nov 2018
3 min read
On Tuesday, Uber announced the 2019 Uber AI Residency. The Uber AI Residency, established in 2018, is a 12-month training program for recent college and master’s graduates, professionals interested in reinforcing their AI skills, and others with quantitative skills who are interested in becoming AI researchers at Uber AI Labs or Uber Advanced Technologies Group (ATG). Artificial intelligence at Uber is a rapidly growing area across both research and applications, including self-driving vehicles. General AI and applied machine learning through Uber AI, and AI for self-driving cars through Uber ATG, are the major areas where AI is growing at Uber.

Uber AI

The teams at Uber AI are working towards providing and improving services in the fields of computer vision, conversational AI, and sensing and inference from sensor data. Uber AI Labs, part of the Uber AI organization, is composed of two main wings that reinforce each other: foundational core research and the Connections group, which focuses on translating research into applications for the company in collaboration with the platform and product teams.

AI Labs Core

The AI Labs Core team works on diverse topics spanning probabilistic programming, Bayesian inference, reinforcement learning, neuroevolution, safety, core deep learning research and artificial intelligence.

AI Labs Connections

AI Labs Connections transformed Bayesian optimization from a research field into a service for the company. It has collaborations with teams working on conversational AI, natural language processing, mapping, forecasting, fraud detection, Uber’s Marketplace, and many more.

Uber Advanced Technologies Group (ATG)

The self-driving vehicle is one of the most ambitious AI applications at Uber. AI helps in perceiving the surrounding environment using multiple sensors and predicting the motion and intent of actors in the near future. Other important components of the self-driving technology are creating high-definition maps and localizing self-driving vehicles, along with providing critical data about the vehicle’s environment.

The Residency program

The residency program selects Uber AI Residents across AI Labs in San Francisco and ATG in Toronto and San Francisco. Residents will be given the opportunity to pursue interests across academic and applied research and will meet with researchers at AI Labs and ATG. They will also get the chance to work with Uber product and engineering teams to converge on initial project directions. The 2018 residency class is currently working on foundational research projects in deep learning, probabilistic modeling, reinforcement learning, and computer vision. Their results have been submitted to top scientific venues, and their contributions directly impact Uber’s business in partnership with Uber’s technology teams.

Applicants can apply from December 10th, 2018 to January 13th, 2019, at 11:59 p.m. EST. Apply here. Read more about this news on the official page of Uber Engineering.

Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development
Uber’s Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?


Seven new Spectre and Meltdown attacks found

Savia Lobo
15 Nov 2018
3 min read
A group of researchers recently disclosed seven additional attacks in the Spectre and Meltdown families. These seven attacks are said to impact AMD, ARM, and Intel CPUs to varying extents. The researchers present these attacks in detail in their research paper, ‘A Systematic Evaluation of Transient Execution Attacks and Defenses’.

2 Meltdown and 5 Spectre variants found

The 7 newly found attacks include 2 new Meltdown variants, Meltdown-PK and Meltdown-BR, and 5 new Spectre mistraining strategies for Spectre-PHT and Spectre-BTB attacks. The researchers said that these 7 new attacks had been overlooked and not investigated so far. They successfully demonstrated all seven attacks with proof-of-concept code. However, experiments to confirm six other Meltdown attacks did not succeed.

The two new Meltdown attacks are:
- Meltdown-PK: bypasses memory protection keys on Intel CPUs
- Meltdown-BR: exploits an x86 bound instruction on Intel and AMD

The other Meltdown attacks, which the researchers tried and failed to exploit, targeted the following internal CPU operations:
- Meltdown-AC: tried to exploit memory alignment check exceptions
- Meltdown-DE: tried to exploit division (by zero) errors
- Meltdown-SM: tried to exploit the supervisor mode access prevention (SMAP) mechanism
- Meltdown-SS: tried to exploit out-of-limit segment accesses
- Meltdown-UD: tried to exploit invalid opcode exceptions
- Meltdown-XD: tried to exploit non-executable memory

(Source: A Systematic Evaluation of Transient Execution Attacks and Defenses)

To understand the Spectre-type attacks, the researchers proposed a categorization based on, first, the prediction mechanism exploited and, second, the mistraining mechanism. They propose to group all attacks that exploit the same microarchitectural element:
- Spectre-PHT: exploits the Pattern History Table (PHT)
- Spectre-BTB: exploits the Branch Target Buffer (BTB)
- Spectre-STL: exploits the CPU’s memory disambiguation prediction, specifically store-to-load forwarding (STLF)
- Spectre-RSB: exploits the Return Stack Buffer (RSB)

According to ZDNet, “Based on the experiments, the researchers found three new Spectre attacks that exploit the Pattern History Table (PHT) mechanism and two new Spectre attacks against the Branch Target Buffer (BTB)”:
- PHT-CA-OP
- PHT-CA-IP
- PHT-SA-OP
- BTB-SA-IP
- BTB-SA-OP

Defenses for these new Spectre and Meltdown attacks

The researchers categorize the defenses into three categories for Spectre-type attacks and two for Meltdown-type attacks.

For Spectre-type attacks, the defense categories are:
- Mitigating or reducing the accuracy of covert channels used to extract the secret data.
- Mitigating or aborting speculation if data is potentially accessible during transient execution.
- Ensuring that secret data cannot be reached.

For Meltdown-type attacks, the defense categories are:
- Ensuring that architecturally inaccessible data remains inaccessible at the microarchitectural level.
- Preventing the occurrence of faults.

The researchers write in the paper, “We have systematically evaluated all defenses, discovering that some transient execution attacks are not successfully mitigated by the rolled out patches and others are not mitigated because they have been overlooked. Hence, we need to think about future defenses carefully and plan to mitigate attacks and variants that are yet unknown”. To know more about these newly found attacks and the related experiments, head over to the research paper by Claudio Canella et al.

Intel announces 9th Gen Core CPUs with Spectre and Meltdown hardware protection amongst other upgrades
NetSpectre attack exploits data from CPU memory
SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets


Amazon announces Corretto, an open source, production-ready distribution of OpenJDK backed by AWS

Melisha Dsouza
15 Nov 2018
3 min read
Yesterday, at Devoxx Belgium, Amazon announced the preview of Amazon Corretto, a free distribution of OpenJDK that offers long-term support. With Corretto, users can develop and run Java applications on popular operating systems. The team mentioned that Corretto is multiplatform and production-ready, with long-term support that will include performance enhancements and security fixes. They also plan to make Corretto the default OpenJDK on Amazon Linux 2 in 2019. The preview currently supports Amazon Linux, Windows, macOS, and Docker, with additional support planned for general availability. Corretto is run internally by Amazon on thousands of production services and is certified as compatible with the Java SE standard.

Features and benefits of Corretto

1. Amazon Corretto lets a user run the same environment in the cloud, on premises, and on their local machine. During the preview, Corretto allows users to develop and run Java applications on popular operating systems like Amazon Linux 2, Windows, and macOS.
2. Users can upgrade versions only when they feel the need to do so.
3. Since it is certified to meet the Java SE standard, Corretto can be used as a drop-in replacement for many Java SE distributions.
4. Corretto is available free of cost, and there are no additional paid features or restrictions.
5. Corretto is backed by Amazon, and the patches and improvements in Corretto enable Amazon to address high-scale, real-world service concerns. Corretto can meet heavy performance and scalability demands.
6. Customers will get long-term support, with quarterly updates including bug fixes and security patches. AWS will provide urgent fixes to customers outside of the quarterly schedule.

On Hacker News, users are discussing how the product’s documentation could be better formulated. Some users feel that “Amazon's JVM is quite complex”. Users are also talking about Oracle offering the same service at a price. One user has pointed out the differences between Oracle’s service and Amazon’s service. The most notable feature of this release apparently happens to be the LTS offered by Amazon. Head over to Amazon’s blog to read more about this release. You can also find the source code for Corretto on GitHub.

Amazon tried to sell its facial recognition technology to ICE in June, emails reveal
Amazon addresses employees’ dissent regarding the company’s law enforcement policies at an all-staff meeting, in a first
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiment largely negative


Introducing krispNet DNN, a deep learning model for real-time noise suppression

Bhagyashree R
15 Nov 2018
3 min read
Last month, 2Hz introduced an app called Krisp, which was featured on the Nvidia website. It uses deep learning for noise suppression and is powered by the krispNet Deep Neural Network. krispNet is trained to recognize and reduce background noise in real-time audio, yielding clear human speech. 2Hz is a company that builds AI-powered voice processing technologies to improve voice quality in communications.

What are the limitations of current noise suppression approaches?

Many edge devices, from phones and laptops to conferencing systems, come with noise suppression technologies. The latest mobile phones come equipped with multiple microphones, which help suppress environmental noise when we talk. Generally, the first mic is placed on the front bottom of the phone to directly capture the user’s voice. The second mic is placed as far as possible from the first mic. After the surrounding sounds are captured by both mics, the software effectively subtracts them from each other and yields an almost clean voice.

The limitations of the multiple-mic design:
- Since the multiple-mic design requires a certain form factor, its application is limited to certain use cases such as phones or headsets with sticky mics.
- These designs make the audio path complicated, requiring more hardware and code.
- Audio processing can only be done on the edge or device side, so the underlying algorithm cannot be very sophisticated due to the low power and compute budget.

Traditional Digital Signal Processing (DSP) algorithms also work well only in certain use cases. Their main drawback is that they do not scale to the variety and variability of noises that exist in our everyday environment. This is why 2Hz has come up with a deep learning solution that uses a single-microphone design, with all the post-processing handled by software. This allows hardware designs to be simpler and more efficient.

How can deep learning be used in noise suppression?

There are three steps involved in applying deep learning to noise suppression (source: Nvidia):
- Data collection: Build a dataset to train the network by combining distinct noises and clean voices to produce synthetic noisy speech.
- Training: Feed the synthetic noisy speech dataset to the DNN as input and the clean speech as output.
- Inference: Produce a mask which filters out the noise, giving you a clear human voice.

What are the advantages of krispNet DNN?
- krispNet is trained with a very large amount of distinct background noises and clean human voices. It is able to optimize itself to recognize background noise and separate it from human speech, leaving only the latter.
- At inference time, krispNet acts on real-time audio and removes background noise.
- krispNet DNN can also perform Packet Loss Concealment for audio, filling in missing voice chunks in voice calls and eliminating “chopping”.
- krispNet DNN can predict the higher frequencies of a human voice and produce much richer voice audio than the original lower-bitrate audio.

Read more about how deep learning can be used in noise suppression on the Nvidia blog.

Samsung opens its AI based Bixby voice assistant to third-party developers
Voice, natural language, and conversations: Are they the next web UI?
How Deep Neural Networks can improve Speech Recognition and generation
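The inference step of the pipeline described in this article can be sketched in a few lines: the trained network emits a per-frequency mask in [0, 1], which is multiplied element-wise with the noisy magnitude spectrum so that noise-dominated bins are attenuated. This is only an illustration: `predict_mask` is a made-up stand-in for the trained krispNet model (not its real API), and the spectrum values are toy numbers.

```python
def predict_mask(noisy_mag):
    # Placeholder for the DNN: pass bins that look "voice-like"
    # (high energy) and attenuate low-energy, noise-dominated bins.
    peak = max(noisy_mag)
    return [min(1.0, m / peak) for m in noisy_mag]

def apply_mask(noisy_mag, mask):
    # Element-wise filtering: estimated clean magnitude per frequency bin.
    return [m * w for m, w in zip(noisy_mag, mask)]

# Toy magnitude spectrum for one audio frame (arbitrary units):
# strong speech harmonics mixed with a low-level noise floor.
noisy_frame = [0.2, 3.0, 0.1, 2.5, 0.15, 0.1]

mask = predict_mask(noisy_frame)
clean_frame = apply_mask(noisy_frame, mask)
print(clean_frame)
```

In a real system the mask is predicted per short-time frame over a full spectrogram and the waveform is reconstructed afterwards; the point here is only that the network's output is a filter, not the clean audio itself.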

This fun Mozilla tool rates products on a ‘creepy meter’ to help you shop safely this holiday season

Sugandha Lahoti
15 Nov 2018
2 min read
Mozilla has come up with a fun ‘creepy product’ rater and guide to help people be aware of privacy issues and shop for safe products this holiday season. Their opening line: “Teddy bears that connect to the internet. Smart speakers that listen to commands. Great gifts—unless they spy on you. We created this guide to help you buy safe, secure products this holiday season.” (Source: Mozilla)

When you click on a product, you can see a description, a creepiness rater, a ‘how likely are you to buy it’ option, and different privacy-related questions and answers. “It is a super fun poke by Mozilla at the overwhelming majority of the technology industry who treat privacy as a nuisance at best and as a non-event at worst,” said a Hacker News user. It may be Mozilla’s way of illustrating its mission of being an advocate for privacy.

Read More: Is Mozilla the most progressive tech organization on the planet right now?

Some people also disagreed with Mozilla’s jibe. One wrote: “The page looks to be targeted at consumers, with the 'creepy' meter that changes as you scroll. However the PS4 and Xbox are considered 'A little creepy' and a sous vide cooker is listed as 'Somewhat creepy'. Despite the arguments made on the respective pages for why they are creepy (generally "Shares your information with 3rd parties for unexpected reasons") I don't think any consumer on the planet is going to consider any of those gifts even slightly creepy.”

Another Hacker News user said: “This list definitely feels very shallow and disconnected from any deeper reasoning about specific security practices, business models, whether a net connection is actually required or not, etc. It's a popularity poll at best, and the actionable advice is minimal. It's a bit disappointing coming from Mozilla, at least to the extent that it's a wasted opportunity on something that the public is growing more aware of.”

Most people agree that this is just a for-fun poll by Mozilla without any serious implications. Read more such Hacker News comments, and have a look at Mozilla’s guide.

Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs
Mozilla shares how AV1, the new open source royalty-free video codec, works
Mozilla pledges to match donations to Tor crowdfunding campaign up to $500,000

Open Invention Network expands its patent non-aggression coverage in Linux system

Natasha Mathur
15 Nov 2018
3 min read
Open Invention Network (OIN), a patent non-aggression community, announced an expansion of its patent non-aggression coverage last week by updating the freedom of action in its Linux System definition. Patents give organizations and individuals the right to an invention and the right to exclude others from making, using, offering for sale, or selling that invention.

This Linux System expansion enables "OIN to keep pace with open source innovation, promoting patent non-aggression in the core. As open source grows, we will continue to protect Linux and adjacent technologies through strategic software package additions to the Linux System," said Keith Bergelt, CEO of Open Invention Network.

The recent expansion comprises 151 new packages, bringing the total number of protected packages to 2,873. "While the majority of the new additions are widely used and found in most devices, the update includes a number of key open source innovations such as Kubernetes, Apache Cassandra and packages for Automotive Grade Linux," said Mirko Boehm, OIN's director for the Linux System definition.

Open Invention Network was created to develop a non-aggression pact between companies, especially within the field of the Linux System definition. OIN practices cross-licensing of patents for the Linux System on a royalty-free basis. This zone of cross-licensing is called OIN's Linux System, which comprises a list of fundamental Linux software packages. Patents owned by OIN are similarly licensed royalty-free to any organization that agrees not to assert its patents against the Linux System.

Open Invention Network focuses on changing the current patent system around core Linux and other open source technologies, which is being abused by many organizations and significantly deteriorating innovation. These non-aggression pacts, or defensive patent tools, help protect signatories against the aggressive use of patents. A report by Dr. E. Altsitsiadis for OpenForum Academy (OFA) stresses these issues with the current patent system: companies whose business model consists of buying up patents with the goal of taking anyone who infringes them to court have grown exponentially, and technology giants are engaged in massive legal battles. This ties up public resources in expensive lawsuits and poses a significant barrier to smaller innovators, who don't always have the capacity to cover these legal costs.

Just last month, Microsoft joined the Open Invention Network, making 60,000 of its patents accessible to fellow members, to embrace open source software and open source culture.

"With this update to the Linux System definition, OIN continues with its well-established process of carefully maintaining a balance between stability and innovative core open source technology," stated Boehm.

For more information, check out the official OIN press release.

Four IBM facial recognition patents in 2018 we found intriguing

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Four 2018 Facebook patents to battle fake news and improve news feed


KDevelop 5.3 released with new analyzer plugin and improved language support

Prasad Ramesh
15 Nov 2018
3 min read
20 years after KDevelop's first release, KDevelop 5.3 is now out, with features like a new analyzer plugin and improved support for several languages.

A new analyzer plugin in KDevelop 5.3

In version 5.1, KDevelop gained an Analyzer menu entry that provides a set of actions to work with analyzer-like plugins. Version 5.2 added a runtime analyzer called Heaptrack and a static analyzer called cppcheck. During the development of KDevelop 5.3, another analyzer plugin was added, and it ships with the current release: Clazy, a clang analyzer plugin specialized for Qt-using code. It can now be run from within KDevelop, displaying its issues inline by default. A KDevelop plugin for Clang-Tidy support is released independently for now and will ship as part of KDevelop starting with version 5.4.

Internal changes in KDevelop 5.3

KDevelop's own codebase has itself been a subject for these analyzers. A lot of code has been optimized and stabilized in places the analyzers pointed out, and the codebase has been modernized to newer standards of C++ and Qt 5 with their aid.

Improved support for C++

A lot of work was done in KDevelop 5.3 on stabilizing and improving KDevelop's clang-based language support for C++. Notable fixes include:

- Tooltips were included in the clang support, and a range check was fixed.
- The path to the built-in clang compiler headers can now be overridden.
- The clang built-in headers are now always used for the libclang version in use.
- Completion requests are grouped, and only the last one is handled.
- The template for class/function signatures in clang code completion is fixed.
- A workaround for constructor argument hints to find declarations.
- Improved argument hint code completion in clang.

Improved support for PHP

With the help of Heinz Wiesinger, PHP support sees many improvements in KDevelop 5.3:

- Much-improved support for PHP namespaces.
- Support for generators and generator delegation.
- The integrated documentation of PHP internals has been updated and expanded.
- Support for the context-sensitive lexer of PHP 7.
- The parser is now installed as a library so other projects can use it.
- Improved type detection of object properties.
- Support for the object type hint.
- Better support for ClassNameReferences.
- Improvements to expression syntax support, particularly around 'print'.
- Optional function parameters are allowed before non-optional ones.
- Support for the magic constants __DIR__ and __TRAIT__.

Improved Python language support

For Python, the focus was on fixing bugs that were introduced in the 5.2 series. A couple of improvements in 5.3:

- Environment profile variables are injected into the debug process environment.
- Improved support for 'with' statements.

There is also experimental, maintainer-seeking support for macOS and a port for Haiku. For more details, visit the KDevelop website.

Neuron: An all-inclusive data science extension for Visual Studio

The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.

Microsoft announces .NET standard 2.1


DuckDuckGo chooses to improve its products without sacrificing user privacy

Amrata Joshi
14 Nov 2018
3 min read
DuckDuckGo, an internet privacy company, empowers users to seamlessly take control of their personal information online, without any tradeoffs. DuckDuckGo doesn't store IP addresses and doesn't create unique cookies. It doesn't even collect or share any type of personal information.

The improvements

Lately, the company has made some improvements. If you ever search on DuckDuckGo, you might come across a "&atb=" URL parameter in the web address at the top of your browser. This parameter allows DuckDuckGo to anonymously A/B (split) test product changes. For example, users in the A group might get blue links and users in the B group red links; from this, the team at DuckDuckGo can measure how different link colors impact usage of DuckDuckGo.

The team at DuckDuckGo also measures the engagement of specific events on the page (e.g. whether a misspelling message, when displayed, is clicked). This lets them run experiments that test different misspelling messages and use the click-through rate (CTR) to determine each message's efficacy.

The requests made for improving DuckDuckGo are anonymous, and the information is used only for improving its products. Similar "atb.js" or "exti" requests are made by the browser extensions and mobile apps, which send only one such request a day. This yields an approximate count of the devices that accessed DuckDuckGo, without revealing anything about those devices or the searches users made. These requests are fully encrypted, so nobody but DuckDuckGo can see them, and no personal information is attached to them. So DuckDuckGo can never tell what individual people are doing, since everyone is anonymous.

The team has developed its systems from scratch instead of relying on third-party services. This is how it keeps its privacy promise of not collecting or leaking any personal information.

This move centered around anonymity might benefit the company greatly, as data breach incidents at various organizations have been making headlines lately. With daily searches crossing the 30 million mark, the company has already experienced 50% growth in the last year, and these improvements are the cherry on top. Could DuckDuckGo possibly pose a real threat to the leading search engine, Google?

Read more about this news on the official website of DuckDuckGo.

10 great tools to stay completely anonymous online

Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers

Google launches a Dataset Search Engine for finding Datasets on the Internet
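The anonymous split testing described above can be sketched in a few lines. This is a hypothetical illustration, not DuckDuckGo's actual code: the hashing scheme, function names, and token value are assumptions; only the ideas that an anonymous atb-style token deterministically maps a user to a test group, and that variants are compared by click-through rate, come from the article.

```python
import hashlib

def assign_bucket(atb_token: str, buckets=("A", "B")) -> str:
    """Deterministically map an anonymous atb-style token to a test group.

    The token carries no personal information; the same token always
    lands in the same group, so a user sees a consistent variant.
    """
    digest = hashlib.sha256(atb_token.encode()).digest()
    return buckets[digest[0] % len(buckets)]

def click_through_rate(impressions: int, clicks: int) -> float:
    """CTR = clicks / impressions, the metric used to compare variants."""
    return clicks / impressions if impressions else 0.0

# Compare two hypothetical misspelling-message variants by CTR.
group = assign_bucket("v123-4ab")          # "A" or "B", stable per token
ctr_a = click_through_rate(1000, 87)       # variant A: 8.7% CTR
ctr_b = click_through_rate(1000, 52)       # variant B: 5.2% CTR
```

Note that nothing in this scheme requires storing who the user is; the bucket is recomputed from the anonymous token on each request.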

Google makes major inroads into healthcare tech by absorbing DeepMind Health

Amrata Joshi
14 Nov 2018
3 min read
Yesterday, Google announced that it is absorbing DeepMind Health, a London-based AI lab. DeepMind was acquired by Google in 2014 for £400 million; one of the reasons DeepMind joined hands with Google was the opportunity to use Google's scale and experience in building billion-user products.

Google and DeepMind Health working together on Streams

The team at DeepMind introduced Streams in 2017. It was first rolled out at the Royal Free Hospital, where it is primarily used to identify and treat acute kidney injury (AKI). The app provides real-time alerts and information, pushing the right information to the right clinician at the right time, and brings together important medical information, like blood test results, in one place. It helps clinicians at partner hospitals spot serious issues while they are on the move. Streams was developed to help the UK's National Health Service (NHS).

The need for artificial intelligence in Streams

The team at DeepMind was keen on using AI because of its potential to revolutionize the understanding of diseases. AI could help find the root causes of diseases by revealing how they develop, which could in turn help scientists discover new ways of treatment. The team plans to work on a number of innovative research projects, such as using AI to spot eye disease in routine scans. DeepMind's goal is to make Streams an AI-powered assistant for nurses and doctors everywhere, by combining the best algorithms with intuitive design, all backed up by rigorous evidence.

The future of Streams

Acute kidney injury is responsible for 40,000 deaths in the UK every year. With Streams now powered by the combined teams of DeepMind Health and Google, that picture might change.

Antitrust and privacy concerns

Last year, the Royal Free NHS Foundation Trust in London went against data protection rules and gave 1.6 million patient records to DeepMind for a trial. Tension is now increasing among privacy advocates in the UK because Google is getting its hands on healthcare-related information, which could be misused in the future. Many have responded negatively to this news and are opposing it. As DeepMind had previously promised not to share personally identifiable health data with Google, this new move has many questioning DeepMind's intentions.

https://twitter.com/juliapowles/status/1062417183404445696

https://twitter.com/DeepMind_Health/status/1062389671576113155

https://twitter.com/TomValletti/status/1062457943382245378

Read more about this news on DeepMind's official blog post.

DeepMind open sources TRFL, a new library of reinforcement learning building blocks

Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native

Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users


Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Savia Lobo
14 Nov 2018
3 min read
Yesterday, on Microsoft's Patch Tuesday, the company released its monthly security patches, fixing 62 security flaws. These fixes included one for a zero-day vulnerability that was under active exploitation before the patches were made available. Microsoft also announced the re-release of Windows 10 version 1809 and Windows Server 2019.

Zero-day vulnerability CVE-2018-8589

Microsoft credited Kaspersky Lab researchers for discovering this zero-day, tracked as CVE-2018-8589, which impacts the Windows Win32k component. A Kaspersky spokesperson told ZDNet they discovered the zero-day being exploited by multiple cyber-espionage groups (APTs); it had been used to elevate privileges on 32-bit Windows 7 versions. This is the second Windows elevation-of-privilege zero-day patched by Microsoft that was discovered by Kaspersky researchers. Last month, Microsoft patched CVE-2018-8453, another zero-day, which had been used by a state-backed cyber-espionage group known as FruityArmor.

However, this month's Patch Tuesday does not patch a zero-day affecting the Windows Data Sharing Service (dssvc.dll), which was disclosed on Twitter at the end of October. According to ZDNet, "Microsoft has published this month a security advisory to instruct users on how to properly configure BitLocker when used together with solid-state drives (SSDs)."

Re-release of Windows 10 version 1809 and Windows Server 2019

As reported by Microsoft, the Windows 10 October 2018 update caused users to lose data after updating, which led the company to pause the rollout. Yesterday, however, Microsoft announced that it is re-releasing Windows 10 version 1809. John Cable, the director of Program Management for Windows Servicing and Delivery at Microsoft, said the data-destroying bug that triggered that unprecedented decision, as well as other quality issues that emerged during the unscheduled hiatus, "have been thoroughly investigated and resolved."

Microsoft also announced the re-release of Windows Server 2019, which was affected by the same issue. According to ZDNet, "The first step in the re-release is to restore the installation files to its Windows 10 Download page so that 'seekers' (the Microsoft term for advanced users who go out of their way to install a new Windows version) can use the ISO files to upgrade PCs running older Windows 10 versions."

Michael Fortin, Windows Corporate Vice President, offered some context in a blog post behind the recent issues and announced changes to the way the company approaches communications and transparency around its process. Per Fortin, "We obsess over these metrics as we strive to improve product quality, comparing current quality levels across a variety of metrics to historical trends and digging into any anomaly."

To know more about this in detail, visit Microsoft's official blog post.

A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report

Microsoft announces .NET standard 2.1

Microsoft releases ProcDump for Linux, a Linux version of the ProcDump Sysinternals tool


C# 8.0 to have async streams, recursive patterns and more

Prasad Ramesh
14 Nov 2018
4 min read
C# 8.0 will introduce some new features and will likely ship at the same time as .NET Core 3.0. Developers will be able to use the new features with Visual Studio 2019.

Nullable reference types in C# 8.0

This feature aims to help prevent the null reference exceptions that appear everywhere; they have riddled object-oriented programming for half a century now. It stops developers from using null in ordinary reference types like string, making those types non-nullable. The diagnostics are warnings, not errors, and existing code will produce new warnings, so developers will have to opt into the feature at the project, file, or source-line level. C# 8.0 will let you express your "nullable intent" and warn you when you don't follow it:

```csharp
string s = null;  // Warning: Assignment of null to non-nullable reference type
string? s = null; // Ok
```

Asynchronous streams with IAsyncEnumerable<T>

The async feature from C# 5.0 lets developers consume and produce asynchronous results in straightforward code, without callbacks. It isn't helpful, though, when developers want to consume or produce continuous streams of results, such as data from an IoT device or a cloud service. Async streams exist for this use case. C# 8.0 will come with IAsyncEnumerable<T>, an asynchronous version of the existing IEnumerable<T>. You can now await foreach over such a sequence to consume its elements, and yield return inside one to produce elements:

```csharp
async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result;
    }
}
```

Ranges and indices

A type Index is added, which can be used for indexing. An Index can be created from an int that counts from the beginning, or with a prefix ^ operator that counts from the end:

```csharp
Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"
```

C# 8.0 will also have a Range type consisting of two Indexes, one for the start and one for the end, which can be written with an x..y range expression.

Default implementations of interface members

Currently, once an interface is published, members can't be added anymore without breaking all its existing implementers. With the new release, a body for an interface member can be provided; if an implementer doesn't implement that member, the default implementation is used instead.

Allowing recursive patterns

C# 8.0 will allow patterns to contain other patterns:

```csharp
IEnumerable<string> GetEnrollees()
{
    foreach (var p in People)
    {
        if (p is Student { Graduated: false, Name: string name }) yield return name;
    }
}
```

The pattern in the above code checks that the Person is a Student, then applies the constant pattern false to their Graduated property to see if they're still enrolled, and applies the pattern string name to their Name property to get their name. Hence, if p is a Student who has not graduated and has a non-null name, that name is yield returned.

Switch expressions

Switch statements with patterns are a powerful feature in C# 7.0, but they can be cumbersome to write, so the next C# version will have switch expressions: a lightweight version of switch statements where all the cases are expressions.

Target-typed new-expressions

In many cases, when a new object is created, the type is already given from context. C# 8.0 will let you omit the type in those cases.

For more details, visit the Microsoft Blog.

ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck

Qml.Net: A new C# library for cross-platform .NET GUI development

Microsoft announces .NET standard 2.1
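The switch expressions described above can be sketched as follows. This is a hedged illustration based on the preview design, not code from the announcement; the method name is made up, and the exact syntax could still change before release:

```csharp
// A classic switch statement collapsed into a C# 8.0 switch expression:
// each case is an expression, and the whole switch yields a value directly.
static string Describe(int n) => n switch
{
    0 => "zero",                     // constant pattern
    var x when x < 0 => "negative",  // var pattern with a guard
    _ => "positive"                  // discard pattern as the default case
};

// Usage: Describe(0) evaluates to "zero", Describe(-5) to "negative".
```

Compared with a switch statement, there are no case/break keywords and no fall-through to reason about, which is what makes the form lightweight.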

Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

Bhagyashree R
14 Nov 2018
3 min read
Yesterday, the W3C GPU for the Web Community Group introduced a new graphics shading language for the WebGPU API called Web High Level Shading Language (WHLSL, pronounced "whistle"). The language extends HLSL to provide better security and safety.

Last year, the W3C GPU for the Web Community Group was formed by engineers from Apple, Mozilla, Microsoft, Google, and others. The group is working towards bringing a low-level 3D graphics API to the Web called WebGPU. WebGPU, just like other modern 3D graphics APIs, uses shaders: programs that take advantage of the specialized architecture of GPUs. For instance, apps designed for Metal use the Metal Shading Language, apps designed for Direct3D 12 use HLSL, and apps designed for Vulkan use SPIR-V or GLSL. That's why the WebKit team introduced WHLSL for the WebGPU API.

Here are some of the requirements WHLSL aims to fulfill:

A safe shader language

Irrespective of what an application does, a shader should only be allowed to read or write data from the Web page's domain. Without this safety guarantee, malicious websites could run a shader that reads pixels out of other parts of the screen, even from native apps.

A well-specified language

To ensure interoperability between browsers, a shading language for the Web must be precisely specified. Also, rendering teams often write shaders in their own custom in-house language and later cross-compile them to whichever language is necessary. That is why the shader language should have a reasonably small set of unambiguous grammar and type-checking rules that compiler writers can reference when emitting this language.

Translatable to other languages

As WebGPU is designed to work on top of Metal, Direct3D 12, and Vulkan, the shader language should be translatable to Metal Shading Language, HLSL (or DXIL), and SPIR-V. There should be a provision to represent the shaders in a form that is acceptable to APIs other than WebGPU.

A performant language

For good overall performance, the compiler needs to run quickly, and the programs it produces need to run efficiently on real GPUs.

Easy to read and write

The shader language should be easy for developers to read and write, and familiar to both GPU and CPU programmers. GPU programmers are important clients, as they have experience writing shaders. And as GPUs are now popularly used in fields beyond rendering, including machine learning, computer vision, and neural networks, CPU programmers are important clients too.

To learn more in detail about WHLSL, check out WebKit's post.

Working with shaders in C++ to create 3D games

Torch AR, a 3D design platform for prototyping mobile AR

Bokeh 1.0 released with a new scatter, patches with holes, and testing improvements


Monday’s Google outage was a BGP route leak: traffic redirected through Nigeria, China, and Russia

Natasha Mathur
14 Nov 2018
4 min read
Google faced a major outage on Monday this week as it went down for over an hour, taking a toll on Google Search and a majority of its other services, such as the Google Cloud Platform. The outage was apparently the result of Google losing control over the normal routes to its IP addresses: due to a BGP (Border Gateway Protocol) issue, traffic was misdirected through China Telecom, Nigeria, and Russia.

The issue began at 21:13 UTC when MainOne Cable Company, a carrier in Lagos, Nigeria, declared its own autonomous system 37282 as the right path to reach 212 IP prefixes that belong to Google, reported Ars Technica. Shortly after, China Telecom improperly accepted the route and further declared it worldwide, leading Transtelecom and other large service providers in Russia to follow the same route.

BGPmon, a networking and security company that assesses the route health of networks, tweeted on Monday that it "appears that Nigerian ISP AS37282 'MainOne Cable Company' leaked many @google prefixes to China Telecom, who then advertised it to AS20485 TRANSTELECOM (Russia). From there on others appear to have picked this up". BGPmon also tweeted that the redirection of IP addresses came in five distinct waves over a 74-minute period:

https://twitter.com/bgpmon/status/1062130855072546816

Another network intelligence company, ThousandEyes, tweeted that a "potential hijack" was underway. Per ThousandEyes, it had detected over 180 prefixes affected by this route leak, covering a wide range of Google services.

https://twitter.com/thousandeyes/status/1062102171506765825

This led to growing suspicion among many, as China Telecom, a Chinese state-owned telecommunication company, recently came under the spotlight for misrouting western carrier traffic through mainland China. On further analysis, however, ThousandEyes concluded that "the origin of this leak was the BGP peering relationship between MainOne, the Nigerian provider, and China Telecom".

MainOne is in a peering relationship with Google via IXPN in Lagos and has direct routes to Google, which leaked into China Telecom. These routes then got propagated from China Telecom, via Transtelecom, to NTT and other transit ISPs. "We also noticed that this leak was primarily propagated by business-grade transit providers and did not impact consumer ISP networks as much," reads the ThousandEyes blog.

BGPmon further tweeted that, apart from Google, Cloudflare also faced the same issue, as its IP addresses followed the same route as Google's.

https://twitter.com/bgpmon/status/1062145172773818368

However, Matthew Prince, CEO of Cloudflare, told Ars Technica that this routing issue was just an error and that the chance of it being a malicious hack was low. "If there was something nefarious afoot there would have been a lot more direct, and potentially less disruptive/detectable, ways to reroute traffic. This was a big, ugly screw up. Intentional route leaks we've seen to do things like steal cryptocurrency are typically far more targeted," said Prince.

"We're aware that a portion of Internet traffic was affected by the incorrect routing of IP addresses, and access to some Google services was impacted. The root cause of the issue was external to Google and there was no compromise of Google services," a Google representative told Ars Technica.

MainOne also posted an update on its site, saying that it faced a "technical glitch during a planned network update and access to some of the Google services was impacted. We promptly corrected the situation at our end and are doing all that is necessary to ensure it doesn't happen again. The error was accidental on our part; we were not aware that any Google services were compromised as a result". MainOne further addressed the issue on Twitter, saying that the problem occurred due to a misconfiguration in its BGP filters:

https://twitter.com/Mainoneservice/status/1062321496838885376

The main takeaway from this incident is that doing business on the Internet is still risky: there will be times when it leads to unpredictable and destabilizing events, and those events are not necessarily malicious hacks.

Basecamp 3 faces a read-only outage of nearly 5 hours

GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage

Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
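Leaked routes like these capture traffic because IP routing always forwards on the most specific matching prefix, so a wrongly advertised, more specific route wins over the legitimate, broader one. A minimal Python sketch of that longest-prefix-match rule, using made-up prefixes rather than the actual leaked Google routes:

```python
import ipaddress

# A router's simplified view: candidate routes as (prefix, origin label).
# The /24 below stands in for a leaked, more specific announcement that
# sits inside a legitimately announced /16.
routes = [
    (ipaddress.ip_network("8.8.0.0/16"), "legitimate-origin"),
    (ipaddress.ip_network("8.8.8.0/24"), "leaked-route"),  # more specific
]

def best_route(dst: str) -> str:
    """Return the origin of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matching = [(net, origin) for net, origin in routes if addr in net]
    return max(matching, key=lambda r: r[0].prefixlen)[1]

print(best_route("8.8.8.8"))  # -> "leaked-route": the /24 wins
print(best_route("8.8.4.4"))  # -> "legitimate-origin": only the /16 matches
```

This is why a single carrier's misconfigured BGP filters can redirect traffic for hundreds of prefixes: every router that accepts the more specific announcement starts preferring it automatically.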