
Tech News


GitHub Business Cloud is now FedRAMP authorized

Savia Lobo
25 Oct 2018
3 min read
Yesterday, GitHub announced that its Business Cloud is now FedRAMP (Federal Risk and Authorization Management Program) authorized. The authorization supports the US government's recent efforts to streamline the security review and authorization of certain software tools. Governments around the world use GitHub to build software, shape policy, and share information with constituents. With FedRAMP authorization, users can continue to use GitHub with the confidence that the platform meets the low-impact software-as-a-service (SaaS) security baseline set by the US federal government.

What does being FedRAMP authorized mean?

FedRAMP, a supporting body of the US General Services Administration (GSA), standardizes security assessment, authorization, and continuous monitoring of cloud products and services used by federal agencies. It offers a single authorization process, speeding up the government's adoption of cloud services so that agencies do not have to individually authorize each cloud service offering. The GSA team recognized an opportunity to fine-tune FedRAMP specifically for software-as-a-service (SaaS) providers, and GitHub provided feedback as the new FedRAMP Tailored framework was created. GitHub has completed the assessment phase, and its Business Cloud has secured FedRAMP Tailored authorization.

Enhancements for the GitHub community

GitHub now has thousands of active government users, following the GSA's initial commit in 2013; the New York Senate was the first government organization to post code to GitHub, in 2009. Agencies use GitHub to develop software, collaborate with the public on open source, publish data sets, solicit input on policies, and more. The Tailored framework lowers the barrier to entry for cloud software providers interested in securing FedRAMP authorization, and its controls help SaaS providers meet government security standards more efficiently. This makes it easier for federal, state, and local government agencies to use the development tools they need to do their best work.

With GitHub's FedRAMP Authorized service, agencies can:

Collaborate securely in the cloud
Foster innovation and continuous testing of new ideas
Modernize the way they build software

These services are not restricted to government agencies; everyone in the GitHub community can benefit from these security and privacy enhancements. To know more about FedRAMP in detail, visit GitHub or head over to FedRAMP's official website.

What we learnt from the GitHub Octoverse 2018 Report
GitHub down for over 7 hours due to failure in its data storage system
Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows


Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development

Amey Varangaonkar
25 Oct 2018
3 min read
Transportation network giant Uber has developed Michelangelo PyML, a Python-powered platform for rapid prototyping of machine learning models. The aim of the platform is to offer machine learning as a service that democratizes machine learning and makes it possible to scale AI models to meet business needs efficiently. Michelangelo PyML integrates with Michelangelo, the platform Uber built for large-scale machine learning in 2017, and makes it possible for Uber's data scientists and engineers to build intelligent Python-based models that run at scale for online as well as offline tasks.

Why Uber chose PyML for Michelangelo

Uber developed Michelangelo in September 2017 with a clear focus on high performance and scalability. It currently enables Uber's product teams to design, build, deploy, and maintain machine learning solutions at scale, and powers roughly 1 million predictions per second. That performance, however, came at the cost of flexibility. Users faced two critical issues:

Models could only be trained with algorithms natively supported by Michelangelo. To run unsupported algorithms, the platform had to be extended with additional training and deployment components, which was often inconvenient.
Users could not apply any feature transformations other than those offered by Michelangelo's DSL (Domain Specific Language).

Apart from these constraints, Uber also observed that data scientists usually prefer Python over other programming languages, given its rich suite of libraries and frameworks for analytics and machine learning. Many data scientists also gather and work with data locally using tools such as pandas, scikit-learn, and TensorFlow, rather than spending hours setting up Big Data tools such as Apache Spark and Hive.

How PyML improves Michelangelo

Based on these challenges, Uber decided to revamp the platform by integrating PyML to make it more flexible. PyML provides a concrete framework for data scientists to build and train machine learning models that can be deployed quickly, safely, and reliably across different environments. Because it places no restriction on the types of data they can use or the algorithms they can choose, it is an ideal fit for a platform like Michelangelo. By integrating Python-based models that can operate at scale with Michelangelo, Uber will now be able to handle online as well as offline queries and serve predictions easily. This could be a potential masterstroke by Uber as it tries to boost business and revenue growth after a slowdown over the last year.

Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?
Uber's Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Uber's Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop
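To give a flavor of the local, Python-first workflow the article describes, here is a minimal sketch using pandas and scikit-learn. This is not Uber's PyML API (which is not shown in the report); the dataset and column names are invented for illustration.

```python
# A minimal local prototyping sketch with pandas and scikit-learn.
# Illustrative only: NOT Uber's Michelangelo PyML API; data is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical trip data with a label indicating whether a rider churned.
df = pd.DataFrame({
    "trips_last_30d": [12, 3, 25, 1, 8, 17],
    "avg_rating":     [4.9, 3.2, 4.7, 2.8, 4.1, 4.8],
    "churned":        [0, 1, 0, 1, 0, 0],
})

X_train, X_test, y_train, y_test = train_test_split(
    df[["trips_last_30d", "avg_rating"]], df["churned"],
    test_size=0.33, random_state=42)

model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```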


Btrfs makes multiple performance improvements to be shipped in the next Linux Kernel release

Sugandha Lahoti
25 Oct 2018
2 min read
In preparation for the Linux 4.20 release, multiple performance improvements have been made to the Btrfs file-system. These changes are to be shipped in the next Linux kernel release. Btrfs is a modern copy-on-write filesystem for Linux. It offers a number of features not readily available in other in-tree Linux file-systems, such as fault tolerance, repair, and easy administration. However, its performance has been degrading for some time, partially because copy-on-write by default hurts some workloads. With the improvements queued for Linux 4.20, Btrfs should see multiple speed-ups: more files/sec in fsmark, better performance on multi-threaded workloads (filebench, dbench), fewer context switches, and overall better memory allocation characteristics across multiple benchmarks. Apart from general performance, there is also an improvement for the qgroups + balance workload.

Performance improvements

The blocking mode of path locking has been deprecated; only the spinning mode is now used. Blocking mode was eliminated because it resulted in unnecessary wakeups and updates to the path locks.
Improvements for the qgroups + balance workload include speeding up balancing with qgroups and skipping quota accounting on unchanged subtrees. The overall gain is about 30+% in runtime.
A small improvement has been made to the rb-tree code to avoid pointer chasing; an rb-tree with a cached first node is now used for several structures.
Btrfs now has better error reporting after processing block groups and whole devices, and it continues trimming block groups after an error is encountered.
There is also less interaction with transaction commit, which improves latency on slower storage (e.g. image files over NFS).

Cleanups in Btrfs

Unused struct members and variables are removed
Function return type cleanups are performed
Delayed refs code refactoring is done
Protection is provided against a deadlock that could be caused by a crafted image that tries to allocate from a tree that is already locked

These are just a select few updates. Read the full list of changes in a post by David Sterba.

Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel
KUnit: A new unit testing framework for Linux Kernel
bpftrace, a DTrace like tool for Linux now open source


PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!

Melisha Dsouza
25 Oct 2018
3 min read
Three years ago, the PipelineDB team published the very first release of PipelineDB as a fork of PostgreSQL. It received enormous support and feedback from thousands of organizations worldwide, including several Fortune 100 companies, and one of the most frequent requests was that the fork be released as an extension of PostgreSQL. Yesterday, the team released PipelineDB 1.0.0 as a PostgreSQL extension under the liberal Apache 2.0 license.

What is PipelineDB?

PipelineDB is designed for storing huge amounts of time-series data that needs to be continuously aggregated. It stores only the compact output of these continuous queries as incrementally updated table rows, which can be read with minimal query latency. It is used for analytics use cases that only require summary data, for instance real-time reporting dashboards. PipelineDB is especially beneficial in scenarios where queries are known in advance. These queries can be run continuously, making the data infrastructure that powers real-time analytics applications simpler, faster, and cheaper compared to the traditional "store first, query later" data processing model.

How does PipelineDB work?

PipelineDB uses SQL to write time-series events to a stream, which is structured like a table. A continuous view is then used to perform an aggregation over this stream. Even if billions of rows are written to the stream, a continuous view that aggregates by hour ensures that only one physical row per hour is actually persisted in the database. Once the continuous view has read new incoming events and the aggregate (for example, a distinct count) has been updated to reflect the new information, the raw events are discarded and not stored in PipelineDB. This enables:

Enormous levels of raw event throughput on modest hardware footprints
Extremely low read query latencies
Breaking the traditional dependence between data volumes ingested and data volumes stored

All of this lets the system sustain high performance indefinitely. PipelineDB also supports another type of continuous query called continuous transforms. Continuous transforms are stateless: they apply a transformation to a stream and write the result out to another stream.

Features of PipelineDB

PipelineDB 1.0.0 brings several changes relative to version 0.9.7. The main highlights are as follows:

Non-standard syntax has been removed
Configuration parameters are now qualified by pipelinedb
PostgreSQL pg_dump, pg_restore, and pg_upgrade tooling is now used instead of the PipelineDB variants
Certain functions and aggregates have been renamed to describe the problem they solve: "Top-K" now represents Filtered-Space-Saving, "Distributions" now refer to T-Digests, and "Frequency" now refers to Count-Min-Sketch
Bloom filters are introduced for set membership analysis
Distributions and percentiles analysis is now possible

What's more? Continuous queries can be chained together into arbitrarily complex topologies of continuous computation: each continuous query produces its own output stream of incremental updates, which can be consumed by another continuous query like any other stream. The team aims to follow up with automated partitioning for continuous views in an upcoming release. You can head over to the PipelineDB blog for more insights on this news.
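To make the "one physical row per hour" idea concrete, here is a minimal conceptual sketch in plain Python (not PipelineDB's SQL syntax): incoming events update an hourly aggregate and are then thrown away, so storage grows with the number of hours, not the number of events. A real continuous view computing a distinct count would work the same way, using a compact probabilistic structure instead of a plain counter.

```python
# Conceptual sketch of a continuous view: incrementally aggregate events per
# hour and discard the raw events. This mimics the idea described above; it
# is not PipelineDB code or syntax.
from collections import defaultdict
from datetime import datetime

hourly_counts = defaultdict(int)   # the only state that is ever "persisted"

def ingest(event_time: datetime, user_id: str) -> None:
    """Update the hourly aggregate, then drop the raw event."""
    bucket = event_time.replace(minute=0, second=0, microsecond=0)
    hourly_counts[bucket] += 1     # one row per hour, however many events arrive

# Billions of events could flow through ingest(); only len(hourly_counts)
# rows of summary data remain queryable afterwards.
ingest(datetime(2018, 10, 25, 9, 15), "user-1")
ingest(datetime(2018, 10, 25, 9, 47), "user-2")
ingest(datetime(2018, 10, 25, 10, 3), "user-1")
print(hourly_counts)
```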
Citus Data to donate 1% of its equity to non-profit PostgreSQL organizations
PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation
PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24


Motorola partners with iFixit to support users’ ‘Right to Repair’; provides OEM Fix Kits

Bhagyashree R
25 Oct 2018
3 min read
On Tuesday, iFixit announced its partnership with Motorola to supply customers with repair kits for Motorola's smartphones. Users can either send their device to Motorola for repairs or buy an iFixit battery or screen replacement kit (a Motorola OEM Fix Kit) that includes the tools required to undertake the task themselves. iFixit is a private company that sells repair parts and publishes free online repair guides for consumer electronics and gadgets on its website.

According to iFixit, this partnership is geared towards supporting the right to repair movement. Motorola is the first major smartphone manufacturer to supply repair kits to users, and with this partnership it is adopting an open attitude towards repair. In the official announcement, iFixit said: "For fixers like us, this partnership is representative of a broader movement in support of our Right to Repair. It's proof that OEM manufacturers and independent repair can co-exist. Big business and social responsibility, and innovation and sustainability don't need to be mutually exclusive. Motorola is setting an industry-leading example of a company that's looking forward—not just six months ahead to next quarter's margins, but decades ahead when devices are damned for the landfill."

iFixit's site lists 16 Motorola repair kits for devices including the Moto Z Force, Z Play, Droid Turbo 2, G5, and G4. Along with these Motorola OEM Fix Kits, you also get a free step-by-step guide. The kits range from $40 to $200 and include key replacement parts along with many of the specialty tools you'll need to complete the repair.

What is Right to Repair?

Recently, CBC reported that Apple has been overcharging for repairs and threatening third-party shops that fix Apple products for less. Apple only allows authorized Apple Store technicians to repair its devices. To confirm this practice, CBC News visited one of these authorized repair centers and recorded the entire conversation with the technician using a hidden camera. The video shows that Apple customers are often told malfunctioning computers are not worth fixing, even when minor repairs could solve the problem.

The news of Apple's practices has fueled the Right to Repair movement, which advocates that manufacturers should practice fair repair by providing repair documentation and supplying parts to consumers and independent repair shops. iFixit is one of the supporters of this movement. Earlier this month, it published an article on why it supports the Right to Repair, and it has also teamed up with the Repair Association, an advocate for consumers' right to repair and modify products from automobiles to IT equipment.

Read the official announcement on iFixit's website.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Is Facebook planning to spy on you through your mobile's microphones?
iPhone XS and Apple Watch details leaked hours before product launch


Mozilla pledges to match donations to Tor crowdfunding campaign up to $500,000

Melisha Dsouza
24 Oct 2018
2 min read
Today, the Tor Project launched its annual end-of-year crowdfunding campaign, 'Strength in Numbers', and it is receiving support from Firefox maker Mozilla. The Tor network disguises a user's identity by moving their traffic across different Tor servers and encrypting that traffic so it isn't traced back to them, thus "ensuring privacy and online freedom".

Started back in 2016, Tor's crowdfunding campaigns allow the community to realize the opportunity that Tor promises. Its vision of delivering significant advancements in the hidden services field aims to draw contributions from donors, further facilitating their participation in shaping the evolution of hidden services.

Tor announced that Mozilla will match donations up to a total of $500,000, which means a significant portion of the donations Tor receives during this campaign will automatically be doubled. This is not the first time that Mozilla, Tor's long-term ally, has supported the network: its partnership with Tor helped the organization raise over $400,000 from a similar campaign. Mozilla's support has been valuable to Tor, which began soliciting crowdfunded donations in 2015 to offset its reliance on government grants.

2018 has been a busy year for the Tor network, which has always aimed to take a stand against restrictive online practices and foster privacy and online freedom for its users. To that end, it built Tor Browser 8, based on Firefox's 2017 Quantum structure, and the Tor Browser for Android, to reach users in nations that have tightened restrictions on free expression and access to the open web. Mozilla has now given them a good head start to continue their work in 2019.

With community support, Tor plans to do the following in 2019:

Improve the capacity, modularization, and scalability of the Tor network
Make improvements and integrations into other privacy and circumvention tools easier and more reliable
Better test and design solutions around internet censorship
Strengthen the development of Tor Browser for Android
And much more!

You can head over to Tor's official blog to know more about this news.

Tor Project gets its first official mobile browser for Android, the privacy friendly Tor Browser
Tor Browser 8.0 powered by Firefox 60 ESR released

A multimillion-dollar ad fraud scheme that secretly tracked users affected millions of Android phones. This is how Google is tackling it.

Bhagyashree R
24 Oct 2018
4 min read
Yesterday, Google issued a response explaining how it is handling the huge ad fraud scheme that BuzzFeed News reported to it last week. According to the report, almost 125 Android apps and websites were affected by this ad fraud, and many of the affected apps are targeted at kids or teens.

What did the BuzzFeed News investigation reveal?

BuzzFeed News reported that application developers were being contacted by sketchy websites such as We Purchase Apps, offering to buy their mobile applications. After acquiring these apps, the buyers changed the applications' details on Google Play Store. These companies were part of a massive, sophisticated digital advertising fraud scheme involving more than 125 Android apps and websites connected to a network of front and shell companies in Cyprus, Malta, the British Virgin Islands, Croatia, Bulgaria, and elsewhere. The report also revealed that users of these apps were secretly tracked: "A significant portion of the millions of Android phone owners who downloaded these apps were secretly tracked as they scrolled and clicked inside the application."

Schemes like these target Android applications because of Android's huge user base and because Google Play Store has a less strict app review process than Apple's App Store. Android apps are bought for huge sums, injected with malicious code, repurposed without users' or Google's knowledge, and turned into engines of fraud.

How does the ad fraud scheme work?

As revealed by BuzzFeed News, the web-based traffic is generated by a botnet called TechSnab, a small-to-medium-sized botnet that has existed for a few years. These botnets create hidden browser windows that visit web pages to inflate ad revenue. The malware contains common IP-based cloaking, data obfuscation, and anti-analysis defenses. The botnets directed traffic to a network of websites created specifically for this operation and monetized with Google and many third-party ad exchanges. Based on analysis of historical ads.txt crawl data, inventory from these websites was widely available throughout the advertising ecosystem: as many as 150 exchanges, supply-side platforms (SSPs), or networks may have sold this inventory, and the botnet operators had hundreds of accounts across 88 different exchanges, based on accounts listed with DIRECT status in their ads.txt files.
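For context, ads.txt is a plain-text file that publishers host on their domain to declare which companies are authorized to sell their inventory; each line lists an ad system domain, the seller's account ID, and a DIRECT or RESELLER relationship. The sketch below shows how crawl data of that form could be filtered for DIRECT accounts; the sample lines and IDs are invented for illustration, not data from the report.

```python
# Minimal sketch: parse ads.txt-style lines and keep DIRECT seller accounts.
# Sample contents and account IDs below are placeholders, not real data.
sample_ads_txt = """
# ads.txt for example-publisher.com
google.com, pub-0000000000000000, DIRECT, abc123
someexchange.com, 12345, RESELLER
anotherexchange.com, 67890, DIRECT
"""

def direct_accounts(ads_txt: str):
    """Yield (ad_system, account_id) pairs declared with a DIRECT relationship."""
    for line in ads_txt.splitlines():
        line = line.split("#", 1)[0].strip()      # drop comments and blank lines
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3 and fields[2].upper() == "DIRECT":
            yield fields[0], fields[1]

print(list(direct_accounts(sample_ads_txt)))
# [('google.com', 'pub-0000000000000000'), ('anotherexchange.com', '67890')]
```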
How is Google tackling this ad fraud?

BuzzFeed News shared a list of apps and websites connected to the scheme with Google last week. Google investigated and found that dozens of apps used its mobile advertising network, and in its post yesterday it confirmed the presence of a botnet driving traffic to websites and apps in the scheme. A Google spokesperson told BuzzFeed News: "We take seriously our responsibility to protect users and provide a great experience on Google Play. Our developer policies prohibit ad fraud and service abuse on our platform, and if an app violates our policies, we take action."

In the past week, Google has removed apps involved in this ad fraud scheme, banning them from monetizing with Google. Additionally, it has blacklisted apps and websites that are outside its ad network to ensure that advertisers using Display & Video 360 do not buy any of this traffic.

Google is taking the following steps to curb this ad fraud scheme:

Its engineering and operations teams are taking systemic action to disrupt the threat, including the takedown of command-and-control infrastructure that powers the associated botnet.
Technical information related to the scheme is being shared with trusted partners across the ecosystem so that they can strengthen their security and minimize the impact of this threat.
Active infections associated with TechSnab, the botnet named in the investigation, have been reduced significantly with the help of the Google Chrome Cleanup tool, which prompted users to uninstall the malware.
According to Google's investigation, mobile apps were the most heavily impacted. Google checked for apps monetizing via AdMob and removed those engaged in this behavior from its ad network.

To know more about Google's steps against these ad fraud schemes, check out their official announcement. Also read the full investigation report shared by BuzzFeed News.

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices
Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system


Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race

Prasad Ramesh
24 Oct 2018
3 min read
Lyft created its Level 5 division last year with the aim of developing self-driving cars, and it is now acquiring London-based computer vision startup Blue Vision Labs in a bid to bring safe and reliable autonomous driving to the streets first. Self-driving cars are one of the most challenging areas of applied machine learning today. In just a year, the Level 5 division has grown into a team of 300 engineers and researchers. This is Level 5's first acquisition, and also its first step into the UK self-driving space.

Blue Vision uses computer vision to build large-scale robotics and augmented reality applications. It was founded in 2016 by graduates from the University of Oxford and Imperial College London, and today it consists of 40 skilled experts in computer vision and robotics. With Blue Vision Labs' technology, entire 3D city maps can be built using only cameras mounted on cars, and these maps make a car aware of its environment with high accuracy. In a Medium post, Luc Vincent, VP of Engineering at Lyft, says: "Blue Vision Labs is the first company able to build city-scale 3D maps from cell phone acquired imagery. This is truly amazing tech."

Vincent also hinted at bigger plans for the role Blue Vision Labs will play in the growth of Lyft's self-driving division. He said, "It also has applications well beyond self-driving. For example, we are keen to explore how we can leverage Blue Vision Labs' stack to more precisely pinpoint drivers' and riders' locations, and create new augmented reality interfaces that make transportation simpler and better for everyone."

In-vehicle advertising is a space that all tech titans serious about autonomous tech, like Alphabet's Waymo and Apple's secretive self-driving car project, are vying for, and Lyft seems to understand the value of being first to market in this area with this promising acquisition. Although there is no official statement about the acquisition details, sources told TechCrunch it is "around $72 million with $30 million on top of that based on hitting certain milestones."

This acquisition will drive Lyft's self-driving vision on the streets of the UK. The Lyft Level 5 team believes self-driving cars free up an extra seat and can help reduce problems like pollution and traffic. To know more about Lyft's self-driving efforts, visit the Lyft website.

nuScenes: The largest open-source dataset for self-driving vehicles by Scale and nuTonomy
This self-driving car can drive in its imagination using deep reinforcement learning
Tesla is building its own AI hardware for self-driving cars


Citus Data to donate 1% of its equity to non-profit PostgreSQL organizations

Sugandha Lahoti
24 Oct 2018
2 min read
Citus Data, which works on Postgres database technologies, announced that it will donate 1 percent of its equity to non-profit PostgreSQL organizations in the US and Europe. The aim is to support the growth, education, and future innovation of the open-source Postgres database in both regions. The company is also joining the Pledge 1% movement, which provides a platform where companies can pledge to give back to the community by donating 1% of equity, time, product, or profit.

Citus Data creates an extension to Postgres that transforms PostgreSQL into a distributed database.

Citus Data CEO Umur Cubukcu said, "You can contribute to open source in different ways. You can open source software you've created, you can maintain certain features and projects, and you can contribute to events with speakers and sponsorships—all of which our team spends a lot of time on. We are excited to create a new way to contribute to open source, by this donation."

According to Ozgun Erdogan, one of Citus Data's founders, "This 1% stock donation is a way for us to give back and to share a piece of our future success. And we believe the donation will make a real difference to future projects in the Postgres community."

RedMonk analyst and co-founder James Governor said, "Citus Data is both making an innovative bet, and paying it forward, by applying the 1% Pledge model to underpin the renaissance of the Postgres community."

Magnus Hagander, open source advocate, PostgreSQL core team member, and president of PostgreSQL Europe, says, "What do I think about this donation of 1 percent equity from the team at Citus Data? I think it's a generous way to support the PostgreSQL community, and shines a light on the importance of supporting open source projects that underpin so many products and companies today."

Read more about the news on the Citus Data blog.

PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation
How to perform full-text search (FTS) in PostgreSQL
Azure Database services are now generally available for MySQL and PostgreSQL


Amazon tried to sell its facial recognition technology to ICE in June, emails reveal

Richard Gall
24 Oct 2018
3 min read
It has emerged that Amazon representatives met with Immigration and Customs Enforcement (ICE) this summer in a bid to sell its facial recognition tool, Rekognition. Emails obtained by The Daily Beast show that officials from Amazon met with ICE on June 12 in Redwood City. In that meeting, Amazon outlined some of AWS' capabilities, stating that "we are ready and willing to help support the vital HSI [Homeland Security Investigations] mission." The emails (which you can see for yourself here) also show that Amazon was keen to set up a "workshop" with U.S. Homeland Security, and "a meeting to review the process in more depth and help assess your target list of 'Challenges [capitalization intended]'." What these 'Challenges' refer to exactly is unclear.

The controversy around Amazon's Rekognition tool

These emails will only add to the controversy around Rekognition and Amazon's broader involvement with security services. Earlier this year the ACLU (American Civil Liberties Union) revealed that a small number of law enforcement agencies were using Rekognition for various purposes. Later, in July, the ACLU published the results of its own experiment with Rekognition, in which it incorrectly matched mugshots with 28 members of Congress. Amazon responded to this research with a rebuttal on the AWS blog, in which Dr. Matt Wood stated that "machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it's applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza." This post was referenced in the email correspondence between Amazon and ICE; clearly, accuracy was an issue in the company's discussion with security officials.

The controversy continued this month after an employee published an anonymous letter on Medium urging the company not to sell Rekognition to police, writing: "When a company puts new technologies into the world, it has a responsibility to think about the consequences."

Amazon claims Rekognition isn't a surveillance service

We covered this story on the Packt Hub last week. Following publication, an Amazon PR representative contacted us, stating that "Amazon Rekognition is NOT a surveillance service" [emphasis the writer's, not mine]. The representative also cited the post by Dr. Matt Wood mentioned above, keen to tackle some of the challenges presented by the ACLU research. Although Amazon's position is clear, it will be difficult for the organization to maintain that line given these emails. Separating the technology from its deployment is all well and good until it's clear that you're courting the kind of deployment for which you are being criticised.

Note 10.30.2018 - An Amazon spokesperson responded with a comment, wishing to clarify the events described from its perspective: "We participated with a number of other technology companies in technology 'boot camps' sponsored by McKinsey Company, where a number of technologies were discussed, including Rekognition. As we usually do, we followed up with customers who were interested in learning more about how to use our services (Immigration and Customs Enforcement was one of those organizations where there was follow-up discussion)."

Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more

Melisha Dsouza
24 Oct 2018
3 min read
Earlier this month, the team at Google Cloud Storage announced new capabilities for improving the reliability and performance of users' data. They have now rolled out storage security updates that cater to data privacy and compliance with financial services regulations. With these new security upgrades, including the general availability of Cloud Storage Bucket Lock, UI changes for privacy management, Cloud KMS integration with Cloud Storage, and more, users will be able to build reliable applications as well as ensure the safety of their data.

Storage security features on Google Cloud Storage:

#1 General availability of Cloud Storage Bucket Lock

Cloud Storage Bucket Lock is now generally available. This feature is especially useful for users who need Write Once Read Many (WORM) storage, as it prevents deletion or modification of content for a specified period of time. To help organizations meet compliance, legal, and regulatory requirements for retaining data for specific lengths of time, Bucket Lock provides retention lock capabilities as well as event-based holds for content. Bucket Lock works with all tiers of Cloud Storage, so both primary and archive data can use the same storage setup, and users can automatically move locked data into colder storage tiers and delete data once the retention period expires. Bucket Lock is used in a diverse range of applications, from financial records compliance and healthcare records retention to media content archives and much more. You can head over to the Bucket Lock documentation to learn more about this feature.

#2 New UI features for secure sharing of data

The new UI features in the Cloud Storage console enable users to securely share their data and gain insight into which data, buckets, and objects are publicly visible across their Cloud Storage environment. The public sharing option in the UI has been replaced with an Identity and Access Management (IAM) panel. This prevents users from accidentally making objects public with a stray click, lets administrators clearly understand which content is publicly available, and shows users how their data is being shared publicly.

#3 Use Cloud KMS keys with Cloud Storage data

Cloud Key Management Service (KMS) provides users with sophisticated encryption key management capabilities. Users can manage and control encryption keys for their Cloud Storage datasets through the Cloud Storage–KMS integration, which helps them manage active keys, authorize users or applications to use certain keys, monitor key use, and more. Cloud Storage users can also perform key rotation, revocation, and deletion. Head over to the Google Cloud Storage blog to learn more about the Cloud KMS integration.

#4 Access Transparency for Cloud Storage and Persistent Disk

This new transparency mechanism shows users who, when, where, and why Google support or engineering teams have accessed their Cloud Storage and Persistent Disk environment. Users can use Stackdriver APIs to monitor logs related to Cloud Storage actions programmatically and archive their logs if required for future auditing. This gives complete visibility into administrative actions for monitoring and compliance purposes. You can learn more about Access Transparency (AXT) in Google's blog post.

Head over to the Google Cloud Storage blog to understand how these new upgrades add to the security and control of cloud resources.
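As an illustration (not taken from Google's announcement), a retention policy and a default Cloud KMS key can be configured on a bucket with the google-cloud-storage Python client roughly as follows. The project, bucket, and key names are placeholders, and locking a retention policy is irreversible, so treat this as a sketch rather than something to run against a production bucket.

```python
# Hedged sketch using the google-cloud-storage Python client.
# Bucket, project, and KMS key names below are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-compliance-bucket")

# Bucket Lock: retain objects for 30 days (value is in seconds).
bucket.retention_period = 30 * 24 * 60 * 60
bucket.patch()

# Optionally lock the policy so the retention period can no longer be
# reduced or removed -- this step is permanent.
# bucket.lock_retention_policy()

# Cloud KMS integration: encrypt new objects with a customer-managed key.
bucket.default_kms_key_name = (
    "projects/example-project/locations/us/keyRings/example-ring/"
    "cryptoKeys/example-key"
)
bucket.patch()
```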
What's new in Google Cloud Functions serverless platform
Google Cloud announces new Go 1.11 runtime for App Engine
Cloud Filestore: A new high performance storage option by Google Cloud Platform


NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases ‘focus on diversity and inclusivity initiatives’

Melisha Dsouza
24 Oct 2018
5 min read
The 'Neural Information Processing Systems' conference, also known as 'NIPS', is well known as one of the most influential AI conferences of the past 32 years, held all around the globe. The conference is organized by the NIPS Foundation and brings together researchers from biological, psychological, technological, mathematical, and theoretical areas of science and engineering, including big names of the tech industry like Google, Nvidia, Facebook, and Microsoft.

The acronym of the conference has been receiving a lot of attention from members worldwide over the past few years. Some members of the community have pointed out that the current acronym 'NIPS' has unintended connotations which make the name sound "sexist". On the other hand, the prospect of a name change only added further confusion and frustration. In August 2018, taking a cue from several well-publicized incidents of insensitivity at past conferences, the organizers conducted a poll on the NIPS website asking people whether they agree or disagree with a potential name change. The poll requested alternative names for the conference, ratings of the existing and alternative names, and additional comments from members.

"Arguments in favor of keeping the existing name include a desire to respect the intellectual tradition that brought the meeting to where it is today and the strong brand that comes with this tradition. Arguments in favor of changing the name include a desire to better reflect the modern scope of the conference and to avoid distasteful connotations of the name." - Organizers of NIPS

Of the 2270 participants who took the survey, over 86% were male, around 13% were female, and 0.01% reported another gender or did not respond. A key question in the poll was: "Do you think we should change the name of the NIPS conference?" Around 30% of the respondents said they support the name change (28% of males and about 44% of females), while 31% strongly disagreed with the name change proposal (31% of males and 25% of females).

[Summary of the response distribution - source: nips.cc]

Some respondents also questioned whether the name was deliberately selected for a double entendre, but the foundation denies this claim: the name was selected in 1987, and sources such as the Oxford English Dictionary show that the slang reference to a body part did not come into usage until years later.

To the foundation, the results of the poll did not provide a clear answer. The first poll resulted in a long list of alternative names, most of them unsuitable for reasons such as an existing brand, being too close to the names of other conferences, or offensive connotations in some languages. After shortlisting six names, a second poll was conducted, and none of these names were strongly preferred by the community. Since the polls did not return a consensus result, the foundation has decided not to change the name of the conference, at least for now.

Here are some of the comments posted on the NIPS website (with permission):

"Thanks for considering the name change. I am not personally bothered by the current name, which is semi-accurate and has no ill intent -- but I think the gesture of making a name change will send a much-needed inclusive vibe in the right direction"

"If it were up to me, I'd call off this nice but symbolic gesture and use whatever time, money, and energy it requires to make actual changes that boost inclusivity, like providing subsidized child care so that parents can attend, or offering more travel awards to scholars from lesser-developed countries"

"Please, please please change the name. It is sexist and a racist slur!!! I'm embarrassed every time I have to say the name of the conference"

"As a woman, I find it offensive that the board is seriously considering changing the name of the meeting because of an adolescent reference to a woman's body. From my point of view, it shows that the board does not see me as an equal member of the community, but as a woman first and a scientist second"

"I am a woman, I have experienced being harassed by male academics, and I would like this problem to be discussed and addressed. But not in this frankly almost offensive way"

Much of the feedback received from members pointed towards taking a more substantive approach to diversity and inclusivity. Taking this into account, the NIPS code of conduct was implemented, two Inclusion and Diversity chairs were appointed to the organizing committee, and childcare support for the NIPS 2018 Conference in Montreal has been introduced. In addition, NIPS has welcomed the formation of several co-located workshops focused on diversity in the field and is extending support to additional groups, including Black in AI (BAI), Queer in AI@NIPS, Latinx in AI (LXAI), and Jews in ML (JIML).

Twitter saw some pretty strong opinions on this decision:
https://twitter.com/StephenLJames/status/1054996053177589760

The foundation hopes that the community's support will help improve the inclusiveness of the conference for its diverse set of members. Head over to the Neural Information Processing Systems blog post for more insights on this news.

NIPS 2017 Special: 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel
NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey


The second instance of Windows zero-day vulnerability disclosed in less than two months

Savia Lobo
24 Oct 2018
3 min read
Two months ago, a security researcher going by the name SandboxEscaper disclosed a local privilege escalation exploit in Windows. The researcher is back with another Windows zero-day vulnerability, disclosed on Twitter yesterday, and a Proof-of-Concept (PoC) for the vulnerability was also published on GitHub.

https://twitter.com/SandboxEscaper/status/1054744201244692485

Many security experts have analyzed the PoC and state that this zero-day vulnerability only affects recent versions of the Windows OS, such as Windows 10 (all versions, including the latest October 2018 Update), Server 2016, and even the new Server 2019. An attacker can use it to elevate their privileges on systems they already have access to. Will Dormann, software vulnerability analyst at CERT/CC, says this is because the Data Sharing Service (dssvc.dll) "does not seem to be present on Windows 8.1 and earlier systems." According to ZDNet, experts who analyzed the PoC say it was coded to delete files for which a user would normally need admin privileges; with the appropriate modifications, other actions could be taken.

The second zero-day Windows exploit

This zero-day exploit is quite similar to the previous exploit released by SandboxEscaper in August. "It allows non-admins to delete any file by abusing a new Windows service not checking permissions again," said Kevin Beaumont, an infosec geek at Vault-Tec. Microsoft released a security patch for the previous vulnerability in the September 2018 Patch Tuesday updates. Whereas SandboxEscaper's PoC for the previous exploit wrote garbage data to a Windows PC, the PoC for the second zero-day "will delete crucial Windows files, crashing the operating system, and forcing users through a system restore process". Hence, Mitja Kolsek, CEO of ACROS Security, advised users to avoid running this recent PoC. Kolsek's company released an update for its product (called 0Patch) that blocks any exploitation attempts until Microsoft releases an official fix, and Kolsek and his team are currently working on porting their 'micro-patch' to all affected Windows versions.

As per ZDNet, malware authors integrated SandboxEscaper's first zero-day into different malware distribution campaigns. Experts believe that malware authors can use this zero-day to delete OS files or DLLs and replace them with malicious versions, and SandboxEscaper argues that the second zero-day can be just as useful to attackers as the first.

To know more about this news in detail, head over to ZDNet's website.

'Peekaboo' Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research
Implementing Identity Security in Microsoft Azure [Tutorial]
Upgrade to Git 2.19.1 to avoid a Git submodule vulnerability that causes arbitrary code execution

React 16.6.0 releases with a new way of code splitting, and more!

Bhagyashree R
24 Oct 2018
4 min read
After the release of React 16.5.0 last month, the React team announced the release of React 16.6.0 yesterday. This release comes with a few convenient new features, including support for code splitting, an easier way to consume Context from class components, and more. Let's see what changes have been made in this release.

React.memo() as an alternative to PureComponent for functions

React.memo() is similar to React.PureComponent, but for function components instead of classes. It can be used when a function component renders the same result given the same props. You can wrap the function component in a call to React.memo() for a performance boost in some cases by memoizing the result: React will skip rendering the component and reuse the last rendered result.

React.lazy() for code splitting components

You can now render a dynamic import as a regular component with the help of the React.lazy() function. To do code splitting, you wrap a dynamic import in a call to React.lazy() and render the result inside a Suspense component. Library authors can leverage the Suspense component to start building data fetching with Suspense support in the future. Note that the feature is not yet available for server-side rendering.

Added Class.contextType

React 16.3 introduced a new Context API as a replacement for the previous Legacy Context API, which will be removed in a future major version. React 16.6 adds contextType, which makes it convenient to consume a context value from within a class component. A Context object created by React.createContext() can be assigned to the contextType property, which lets you consume the nearest current value of that Context type using this.context. This was added because developers were having difficulty adopting the new render prop API in class components.

getDerivedStateFromError() to render the fallback UI

React 16 introduced error boundaries to solve the problem of a JavaScript error in one part of the UI breaking the whole app. Error boundaries:

Catch JavaScript errors anywhere in their child component tree
Log those errors
Display a fallback UI instead of the component tree that crashed

The existing method componentDidCatch() fires after an error has occurred, but before it fires, null is rendered in place of the tree that caused the error. This sometimes breaks parent components that don't expect their refs to be empty. This version introduces another lifecycle method, getDerivedStateFromError(), that lets you render the fallback UI before the render completes.

Deprecations in StrictMode

The StrictMode component was released in React 16.3 and lets you opt in to early warnings for any deprecations. Two more APIs have now been added to the deprecated list: ReactDOM.findDOMNode() and Legacy Context.

ReactDOM.findDOMNode(): This API allowed searching the tree for a DOM node given a class instance. Typically you don't need it, because you can attach a ref directly to a DOM node. It is often misunderstood, and most uses of it are unnecessary.
Legacy Context: It makes React slightly slower and bigger than it needs to be. Instead of the Legacy Context, you can use the new Context API introduced in React 16.3.

Miscellaneous changes

unstable_AsyncMode renamed to unstable_ConcurrentMode
unstable_Placeholder renamed to Suspense, and delayMs to maxDuration

To read the entire changelog of React 16.6, check out the official announcement and React's GitHub repository.

React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out


Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation

Sugandha Lahoti
24 Oct 2018
3 min read
Baidu has released a new AI-powered tool called STACL that performs simultaneous interpretation. A simultaneous interpreter translates concurrently with the speaker's speech, with a delay of only a few seconds. Baidu goes a step further by anticipating the words a speaker is about to say a few seconds in the future.

Current simultaneous translation systems are generally prone to latency, such as a "3-word delay", and tend to be overcomplicated and slow to train. Baidu's STACL overcomes these limitations by predicting the verb to come, based on all the sentences it has seen in the past. The system uses a simple "wait-k" model trained to generate the target sentence concurrently with the source sentence, but always k words behind, for any given k. STACL directly predicts target words and seamlessly integrates anticipation and translation in a single model. It is also flexible in terms of the latency-quality trade-off: the user can specify an arbitrary latency requirement (e.g., a one-word delay or a five-word delay). Presently, STACL works on text-to-text translation and speech-to-text translation. The model is trained on newswire articles, where the same story appears in multiple languages; in the paper, the researchers demonstrate its capabilities translating from Chinese to English.

[Example translation - source: Baidu]

The researchers have also come up with a new latency metric called "Averaged Lagging", which addresses deficiencies in previous metrics.

The system is, of course, far from perfect. For instance, at present, it can't correct its mistakes or apologize for them. However, it is adjustable, in the sense that users can trade off speed against accuracy, and it can be made more accurate by training it on a particular subject so that it understands the sentences likely to appear in presentations on that subject. The researchers are also planning to add speech-to-speech translation capabilities to STACL. To do this, they will need to integrate speech synthesis into the system while trying to make it sound natural.

According to Liang Huang, principal scientist at Baidu's Silicon Valley AI Lab, STACL will be demoed at the Baidu World conference on November 1st, where it will provide live simultaneous translation of the speeches. Baidu has previously shown off a prototype consumer device that does sentence-by-sentence translation, and Huang says his team plans to integrate STACL into that gadget.

Go through the research paper and video demos for extensive coverage.

Baidu announces ClariNet, a neural network for text-to-speech synthesis
Baidu Security Lab's MesaLink, a cryptographic memory safe library alternative to OpenSSL
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
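To illustrate the wait-k idea described above, here is a conceptual sketch only, not Baidu's STACL implementation: the decoder first reads k source words and then alternates between emitting one target word and reading one more source word, so the translation always lags the source by roughly k words. The translate_next function is a hypothetical stand-in for the real prediction model, and the sketch assumes target length equals source length.

```python
# Conceptual sketch of the wait-k scheduling policy, not Baidu's STACL model.
# `translate_next` is a hypothetical stand-in for the real prediction model.
from typing import Iterable, List

def translate_next(source_prefix: List[str], target_prefix: List[str]) -> str:
    """Hypothetical model call: predict the next target word from the source
    words read so far and the target words emitted so far."""
    return f"t{len(target_prefix) + 1}"   # dummy output for illustration

def wait_k_decode(source_stream: Iterable[str], k: int) -> List[str]:
    source_prefix: List[str] = []
    target: List[str] = []
    for word in source_stream:
        source_prefix.append(word)
        # Once k source words have been read, emit one target word per
        # additional source word -- the decoder stays k words behind.
        if len(source_prefix) >= k:
            target.append(translate_next(source_prefix, target))
    # The source has ended; finish the remaining target words.
    while len(target) < len(source_prefix):
        target.append(translate_next(source_prefix, target))
    return target

print(wait_k_decode(["s1", "s2", "s3", "s4", "s5"], k=3))
# ['t1', 't2', 't3', 't4', 't5'] -- emission starts after 3 source words.
```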