Tech News

Packt has put together a new cybersecurity bundle for Humble Bundle

Richard Gall
29 Nov 2018
2 min read
It might not even be December yet, but if you're interested in cybersecurity, Christmas has come early. Packt has once again teamed up with Humble Bundle to bring readers a diverse set of titles covering some of the most important and cutting-edge trends in contemporary security. While the offer runs, you can get your hands on $1,533 worth of eBooks and videos for just $15. That's one steal that Packt wholeheartedly approves. Go to Humble Bundle now.

As always, you'll also be able to support charity when you buy from Humble Bundle. You can choose who to donate to, but this month the featured charity is the Innocent Lives Foundation.

What you get in Packt's cybersecurity Humble Bundle

For as little as $1 you can get your hands on:

Nmap: Network Exploration and Security Auditing Cookbook - Second Edition
Network Analysis Using Wireshark 2 Cookbook - Second Edition
Practical Cyber Intelligence
Cybersecurity Attacks (Red Team Activity) [Video]
Python For Offensive PenTest: A Complete Practical Course

Or you can pay as little as $8 to get all of the above as well as:

Cryptography with Python [Video]
Digital Forensics and Incident Response
Hands-On Penetration Testing on Windows
Industrial Cybersecurity
Metasploit Penetration Testing Cookbook - Third Edition
Web Penetration Testing with Kali Linux - Third Edition
Hands-On Cybersecurity for Architects
Mastering pfSense - Second Edition
Mastering Kali Linux [Video]

Alternatively, for as little as $15, you'll get all of the products above, but also:

Mastering Kali Linux for Advanced Penetration Testing - Second Edition
Kali Linux - An Ethical Hacker's Cookbook
Learning Malware Analysis
Cybersecurity - Attack and Defense Strategies
Practical Mobile Forensics - Third Edition
Hands-On Cybersecurity with Blockchain
Metasploit for Beginners
CompTIA Security+ Certification Guide
Ethical Hacking for Beginners [Video]
Mastering Linux Security and Hardening [Video]
Learn Website Hacking / Penetration Testing From Scratch [Video]

Red Hat acquires Israeli multi-cloud storage software company, NooBaa

Savia Lobo
29 Nov 2018
3 min read
On Tuesday, Red Hat announced that it has acquired NooBaa, an Israel-based multi-cloud storage software company. This is Red Hat's first acquisition since it was itself acquired by IBM in October. The acquisition is not subject to IBM's approval, however, as IBM's own acquisition of Red Hat has not yet closed. Early this month, Red Hat CEO Jim Whitehurst said, "Until the transaction closes, it is business as usual. For example, equity practices will continue until the close of the transaction, Red Hat M&A will continue as normal, and our product roadmap remains the same."

NooBaa, founded in 2013, addresses the need for greater visibility and control over unstructured data spread across distributed environments. The company developed a data platform designed to serve as an abstraction layer over existing storage infrastructure. This abstraction not only enables data portability from one cloud to another, but also allows users to manage data stored in multiple locations as a single, coherent data set that an application can interact with.

NooBaa's technologies complement and enhance Red Hat's portfolio of hybrid cloud technologies, including Red Hat OpenShift Container Platform, Red Hat OpenShift Container Storage and Red Hat Ceph Storage. Together, these technologies are designed to provide users with a set of powerful, consistent and cohesive capabilities for managing application, compute, storage and data resources across public and private infrastructures.

Ranga Rangachari, VP and GM of Red Hat's storage and hyper-converged infrastructure, said, "Data portability is a key imperative for organizations building and deploying cloud-native applications across private and multiple clouds. NooBaa's technologies will augment our portfolio and strengthen our ability to meet the needs of developers in today's hybrid and multi-cloud world. We are thrilled to welcome a technical team of nine to the Red Hat family as we work together to further solidify Red Hat as a leading provider of open hybrid cloud technologies."

He further added, "By abstracting the underlying cloud storage infrastructure for developers, NooBaa provides a common set of interfaces and advanced data services for cloud-native applications. Developers can also read and write to a single consistent endpoint without worrying about the underlying storage infrastructure."

To know more about this news in detail, head over to Red Hat's official announcement.

Red Hat announces full support for Clang/LLVM, Go, and Rust
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
4 reasons IBM bought Red Hat for $34 billion
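NooBaa's endpoint is S3-compatible, which is what makes the "single consistent endpoint" idea concrete. A minimal sketch of the pattern follows; the endpoint URL, bucket name, and credential variables are hypothetical, and this is not code from Red Hat or NooBaa:

```typescript
import S3 from "aws-sdk/clients/s3";

// Point a standard S3 client at a hypothetical NooBaa endpoint. The bucket
// behind it may be backed by AWS, Azure, or on-prem storage, but the
// application code stays the same.
const s3 = new S3({
  endpoint: "https://noobaa.example.com", // hypothetical endpoint
  s3ForcePathStyle: true,                 // path-style addressing for non-AWS endpoints
  accessKeyId: process.env.NOOBAA_ACCESS_KEY,
  secretAccessKey: process.env.NOOBAA_SECRET_KEY,
});

async function roundTrip(): Promise<void> {
  await s3.putObject({ Bucket: "demo", Key: "hello.txt", Body: "hi" }).promise();
  const obj = await s3.getObject({ Bucket: "demo", Key: "hello.txt" }).promise();
  console.log(obj.Body?.toString()); // "hi"
}

roundTrip().catch(console.error);
```

The design point is that data portability lives behind the endpoint: moving the underlying data from one cloud to another changes nothing in the client code.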

How 3 glitches in Azure Active Directory MFA caused a 14-hour long multi-factor authentication outage in Office 365, Azure and Dynamics services

Savia Lobo
29 Nov 2018
3 min read
Early this week, Microsoft posted a report on what caused the multi-factor authentication outage in Office 365 and Azure last week, which prevented users from signing into their cloud services for 14 hours. Microsoft reported that three issues combined to cause the log-in glitch. Interestingly, all three glitches occurred within a single system: Azure Active Directory Multi-Factor Authentication, the service Microsoft uses to monitor and manage multi-factor login for the Azure, Office 365, and Dynamics services.

According to Microsoft, "There were three independent root causes discovered. In addition, gaps in telemetry and monitoring for the MFA services delayed the identification and understanding of these root causes which caused an extended mitigation time."

The three root causes for the multi-factor authentication outage

1. The first root cause manifested as a latency issue in the MFA frontend's communication to its cache services. The issue began under high load, once a certain traffic threshold was reached. Once the MFA services experienced this first issue, they became more likely to trigger the second root cause.

2. The second root cause was a race condition in processing responses from the MFA backend server that led to recycles of the MFA frontend server processes, which could trigger additional latency and the third root cause (below) on the MFA backend.

3. The third root cause was a previously undetected issue in the MFA backend server, triggered by the second root cause. It caused an accumulation of processes on the MFA backend, leading to resource exhaustion, at which point the backend was unable to process any further requests from the MFA frontend while otherwise appearing healthy in Microsoft's monitoring.

On the day of the outage, these glitches first hit EMEA and APAC customers, and then US subscribers. According to The Register, "Microsoft would eventually solve the problem by turning the servers off and on again after applying mitigations. Because the services had presented themselves as healthy, actually identifying and mitigating the trio of bugs took some time."

Microsoft said, "The initial diagnosis of these issues was difficult because the various events impacting the service were overlapping and did not manifest as separate issues." The company is looking into ways to prevent such an outage from recurring by reviewing how it handles updates and testing. It also plans to review its internal monitoring services and how it contains failures once they begin.

To know more about this in detail, head over to Microsoft Azure's official page.

A Microsoft Windows bug deactivates Windows 10 Pro licenses and downgrades to Windows 10 Home, users report
Microsoft fixing and testing the Windows 10 October update after file deletion bug
Microsoft announces official support for Windows 10 to build 64-bit ARM apps

Slash Data launches developer survey in a bid to take a global snapshot of the software world

Richard Gall
29 Nov 2018
3 min read
Developer research organization Slash Data has launched the 16th edition of its developer survey. Covering just about every aspect of the developer world, from data science to game development, it's an attempt to better understand what working in software development in 2018 actually looks like, and to find out what really matters to engineers today. Take the survey now.

In a year when technology has been a big part of the headlines, it has felt like the actual work involved in building technology solutions has been forgotten. Slash Data's research is a small but important step in throwing the spotlight on the people who really matter in tech. With previous editions featuring insights from more than 40,000 developers across 167 countries, it promises to offer a truly global perspective on the developer landscape. The survey will be open until January 20, 2019, with Slash Data publishing the results in March.

Who is Slash Data?

Slash Data's mission is to help the world understand developers. As a research organization, it aims to stand out from established names like Gartner by looking closely at what's happening 'on the ground' rather than in the minds of business leaders. With the survey backed by some of the biggest organizations in the tech industry, including Microsoft, MongoDB, Pivotal, Women Who Code and Qualcomm, Slash Data is delivering work that could make an impact on everyone.

What questions does the survey ask?

The survey covers programming languages and tools, developer skills, and the resources and spaces developers use to learn. It also focuses on a number of key areas in technology, such as mobile, cloud, and machine learning. With such a broad set of questions, it's essential that Slash Data hears from developers of all stripes, with a diverse range of experiences.

Read next: 4 key findings from The State of JavaScript 2018 developer survey

Why you should take the Slash Data developer survey

Clearly, you should take the survey to help Slash Data produce some really detailed insights. But there are other reasons too:

A chance to win prizes

When you sign up to take the survey, you'll be able to enter a competition with a chance of winning some incredible prizes. These include:

Samsung S9 Plus
Oculus Rift & Touch Virtual Reality System
Filco Ninja Majestouch-2 Tenkeyless NKR Tactile Action Keyboard
Axure RP 8 Pro one-year license
$200 towards the software subscription of your choice

Help Slash Data support the Raspberry Pi Foundation

Slash Data has confirmed that it will make a small donation to the Raspberry Pi Foundation for every completed survey submission. This will help the Raspberry Pi Foundation continue its work to support computer science education in schools.

It's accessible to non-English speakers

Because Slash Data wants to reach developers in every part of the world, the company has made a real effort to make the survey accessible to those who don't speak English. From December 10, 2018, you'll be able to take the survey in 7 additional languages:

Chinese Traditional
Chinese Simplified
Vietnamese
Korean
Russian
Japanese
Portuguese

Help Slash Data uncover new insights about the tech industry

With the technology industry changing rapidly, it can be hard to understand what's actually happening. What do businesses expect from software? And how are engineers trying to build it more effectively, and more efficiently? That's why the Slash Data survey is so useful. It will give us all an insight into what really matters in tech.

Introducing WaveMaker 10: An aPaaS software to rapidly build applications with Angular 7 and Kubernetes support

Bhagyashree R
29 Nov 2018
2 min read
Last week, the WaveMaker team released its enhanced platform, WaveMaker 10. This version comes with an advanced technology stack leveraging Angular 7, an integrated artifact repository, IDE synchronization features, and more.

WaveMaker is an application platform-as-a-service (aPaaS) software that allows developers to rapidly build and run custom apps. It enables developers to build extensible and customizable apps with standard enterprise-grade technologies. The platform also comes with built-in templates, layouts, themes, and widgets to help you build responsive apps without having to write any code.

Key enhancements in WaveMaker 10

Improved application stack with Angular 7 and Kubernetes support

Developers can now leverage Angular 7 to build responsive web and mobile apps. Angular 7 support provides greater performance and efficiency, type safety, and a modern user experience. Scaling applications with Kubernetes is supported via a 1-click deployment feature: you can now natively package your apps as containers and deploy them to a running Kubernetes cluster.

Enhanced developer productivity and collaboration

To give developers more control over their code and help them build apps faster, WaveMaker 10 comes with enhanced IDE support. With the newly introduced workspace sync plugin, developers can pull code changes seamlessly between WaveMaker and any IDE without having to manually export and import them. To allow developers to share reusable application elements like service prefabs, templates, themes, and data models, an integrated artifact repository has been introduced. The platform can now also be localized in a regional language, enabling better collaboration between global development teams.

Increased enterprise security and accessibility

WaveMaker 10 introduces support for configuring and implementing role-based access at both the platform and project levels. You can now create multiple developer personas with unique permission sets. OpenID authentication for Single Sign-On (SSO) is supported by both the platform and applications built using it. Additionally, all WaveMaker 10 applications are protected from the OWASP Top 10 vulnerabilities to ensure greater security against threats and malicious injections. Applications built with WaveMaker 10 also support the Web Content Accessibility Guidelines (WCAG) 2.1, making them more accessible to users with disabilities.

Head over to WaveMaker's official website to know more.

Angular 7 is now stable
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

Amazon announces the public preview of AWS App Mesh, a service mesh for microservices on AWS

Amrata Joshi
29 Nov 2018
3 min read
Yesterday, at AWS re:Invent, Amazon introduced AWS App Mesh, a service mesh for easily controlling and monitoring communication across microservices on AWS. App Mesh standardizes how microservices communicate and gives users end-to-end visibility. It can be used with Amazon ECS and Amazon EKS to run containerized microservices.

Previously, it was difficult to pinpoint the exact location of errors as the number of microservices within an application grew. To solve this problem, one had to build monitoring and control logic directly into the code and redeploy the microservices. AWS App Mesh addresses this by providing visibility and network traffic controls for every microservice in an application, removing the need to update application code. With App Mesh, the logic for monitoring and controlling communication between microservices is implemented as a proxy that runs alongside each microservice, instead of being built into the microservice code. App Mesh automatically sends configuration information to each microservice proxy. The major advantage of placing a proxy in front of every microservice is that the metrics, logs, and traces between the services are captured automatically.

Key features of AWS App Mesh

Identifies issues with microservices

App Mesh captures metrics, logs, and traces from every microservice and exports this data to multiple AWS and third-party tools, including AWS X-Ray and Amazon CloudWatch, for monitoring and controlling. This helps in identifying and isolating issues with any microservice in order to optimize the application.

Configures the traffic flow

With App Mesh, one can easily implement custom traffic routing rules to ensure that every microservice is highly available during deployments and after failures. AWS App Mesh deploys and configures a proxy that manages all communication traffic to and from the containers, removing the need to configure each microservice's communication protocols, write custom code, or implement libraries to operate applications.

Works with existing microservices

App Mesh can be used with existing or new microservices running on Amazon ECS, AWS Fargate, Amazon EKS, and self-managed Kubernetes on AWS. App Mesh monitors and controls communication for microservices running across orchestration systems and clusters.

Uses Envoy proxy for monitoring

App Mesh uses the open source Envoy proxy, which works with a wide range of AWS partner and open source tools for monitoring microservices. Envoy is a self-contained process designed to run alongside every application server.

To know more about this news, check out Amazon's official blog post.

Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
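For a sense of the API surface, here is a minimal sketch using the AWS SDK for JavaScript's App Mesh client. It only creates a mesh, the logical boundary within which App Mesh configures its proxies; the mesh name and region are placeholders, and the virtual nodes and routes a real setup needs are omitted:

```typescript
import AppMesh from "aws-sdk/clients/appmesh";

const appmesh = new AppMesh({ region: "us-west-2" }); // placeholder region

async function main(): Promise<void> {
  // Create a mesh: the boundary inside which App Mesh manages service traffic.
  await appmesh.createMesh({ meshName: "demo-mesh" }).promise(); // placeholder name

  // Confirm it exists; a full setup would go on to define virtual nodes,
  // virtual services, and routes for each microservice in the mesh.
  const { meshes } = await appmesh.listMeshes({}).promise();
  console.log(meshes.map((m) => m.meshName));
}

main().catch(console.error);
```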

Introducing AWS DeepRacer, a self-driving race car, and Amazon’s autonomous racing league to help developers learn reinforcement learning in a fun way

Amrata Joshi
29 Nov 2018
4 min read
Yesterday, at the AWS re:Invent conference, Andy Jassy, CEO of Amazon Web Services, introduced AWS DeepRacer and announced a global autonomous AWS DeepRacer racing league.

AWS DeepRacer

AWS DeepRacer is a 1/18th-scale, radio-controlled, self-driving four-wheel race car designed to help developers learn about reinforcement learning. The car features a 4-megapixel camera with 1080p resolution, an Intel Atom processor, multiple USB ports, and a 2-hour battery. It comes with 4GB of RAM, 32GB of expandable storage, a 13600mAh USB-C PD compute battery, and an embedded accelerometer and gyroscope.

The console, simulator, and car are a great combination for experimenting with RL algorithms and generalization methods. DeepRacer includes a fully configured cloud environment that users can use to train reinforcement learning models. The car uses a camera to view the track and a reinforcement model to control throttle and steering.

AWS DeepRacer is integrated with Amazon SageMaker to take advantage of its new reinforcement learning support, and with AWS RoboMaker to provide a 3D simulation environment. It is also integrated with Amazon Kinesis Video Streams for streaming virtual simulation footage and with Amazon S3 for model storage, and it supports Amazon CloudWatch for log capture.

AWS DeepRacer League

The AWS DeepRacer League gives users an opportunity to compete in a global racing championship to advance to the AWS DeepRacer Championship Cup at re:Invent 2019 and potentially win the AWS DeepRacer Cup. The league has two categories: live events and virtual events.

Live events

Developers can compete by submitting already built or new reinforcement learning models to the virtual leaderboard for a Summit. The top ten champions will compete in live races on the track using AWS DeepRacer. The summit winners and top performers across the races will qualify for the AWS DeepRacer Championship Cup. The AWS DeepRacer League will launch in AWS Summit locations around the world, including Tokyo, London, Sydney, Singapore, and New York, in early 2019.

Virtual events

Developers can build RL models and compete online using the AWS DeepRacer console. The virtual races will take place on challenging tracks in the 3D racing simulator.

What is in store for developers?

Learn reinforcement learning in a new way

AWS DeepRacer helps developers get started with reinforcement learning by providing hands-on tutorials for training RL models and testing them in a fun way, through the car racing experience.

It is easy to get started quickly, anywhere

One can start training a model on the virtual track in minutes with the AWS DeepRacer console and 3D racing simulator, irrespective of place or time.

Idea sharing

The DeepRacer League gives developers a platform to meet fellow machine learning enthusiasts, online and in person, to share ideas and insights, along with an opportunity to compete and win prizes. Developers will also get a chance to learn about reinforcement learning via workshops.

No need to manually set up a software environment

The 3D racing simulator and car provide an ideal environment for developers to test the latest reinforcement learning algorithms. With DeepRacer, developers don't have to manually set up a software environment or simulator, or configure a training environment.

Public reaction to AWS DeepRacer is mostly positive; however, a few have their doubts.
Concerns range from CPU time and SageMaker requirements to shipping-related queries.

https://twitter.com/emurmur77/status/1067955546089607168
https://twitter.com/heri/status/1067927044418203648
https://twitter.com/mnbernstein/status/1067846826571706368

To know more about this news, check out Amazon's official blog.

Amazon launches AWS Ground Station, a fully managed service for controlling satellite communications, downlink and processing satellite data
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
Learning To Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning
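For a flavor of what "training a model" means here: the developer's main lever in DeepRacer is a reward function that scores each step the car takes. The console expects these in Python with its own documented parameter set; the sketch below is illustrative only, written in TypeScript with hypothetical state fields, showing the classic follow-the-center-line reward shaping:

```typescript
// Illustrative reward shaping for a DeepRacer-style agent. Field names are
// hypothetical; the actual console defines its own parameters (in Python).
interface TrackState {
  trackWidth: number;          // width of the track (meters)
  distanceFromCenter: number;  // car's distance from the center line (meters)
  allWheelsOnTrack: boolean;   // whether the car is still on the track
}

function reward(state: TrackState): number {
  if (!state.allWheelsOnTrack) return 1e-3; // near-zero reward for going off track

  // Reward bands: high near the center line, tapering toward the edges.
  if (state.distanceFromCenter <= 0.1 * state.trackWidth) return 1.0;
  if (state.distanceFromCenter <= 0.25 * state.trackWidth) return 0.5;
  if (state.distanceFromCenter <= 0.5 * state.trackWidth) return 0.1;
  return 1e-3;
}

// Example: a car 5 cm off-center on a 60 cm-wide track earns the full reward.
console.log(reward({ trackWidth: 0.6, distanceFromCenter: 0.05, allWheelsOnTrack: true })); // 1
```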

Project Fi is now Google Fi, will support multiple Android-based phones, offer beta service for iPhone

Melisha Dsouza
29 Nov 2018
3 min read
Google has officially announced that Project Fi is being rebranded as 'Google Fi'. It has also expanded Fi's support to more phones, including models from Samsung, Moto, LG and OnePlus, as well as the iPhone. The service for the iPhone will be in beta for the time being. Even though Google admits that setup on the iPhone will require "a few extra steps", there will be a new Google Fi iOS app to help customers get comfortable with the process.

What is Google Fi?

Google Fi is a "mobile virtual network operator" recognized for its unique approach compared to most other network carriers. It does not operate its own network, but piggybacks on those of T-Mobile, Sprint, and US Cellular, handing a customer's phone to whichever offers the strongest connection at any given time. Fi also offers simplified data plans, easy international use, and a slew of other perks. There are no long-term contracts: customers pay month to month, and in most countries data costs the same internationally as it does at home. There's just a single payment "plan," which starts at $20 for access to a line, plus an additional $10 for every gigabyte consumed. If a user has only one line and uses more than 6GB, they pay a maximum of $80 for that month.

The catch with Fi for iPhones

Fi operates as a virtual network operator, and only a few phones, including Google Pixels and those explicitly "designed for Fi," can dynamically switch between those carriers' networks. Android phones and iPhones that aren't built specifically for Google Fi will miss out on this functionality. In addition, since iPhone support is in beta, customers who choose to use Fi on their iPhones may have a less-than-smooth experience. Important secondary features like visual voicemail, calls and texts over Wi-Fi, automated spam detection, and international tethering are left out of the beta. The Fi website cautions that iPhone users will have to do a bit of tweaking to get texting to work properly: while iMessage will function "out of the box," APN settings will need to be modified to enable MMS.

That said, the real catch with Google Fi has always been its simplicity and affordability, both of which will remain irrespective of the device a customer chooses to use. Google Fi still has some catching up to do with other carriers when it comes to features like support for the RCS Universal Profile for texting and number sharing for things like LTE smartwatches. Still, this announcement of extended device support signals Google's effort to broaden its user base and boost device support.

Head over to Google's official blog for more information on this announcement.

A year later, Google Project Zero still finds Safari vulnerable to DOM fuzzing using publicly available tools to write exploits
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
#GoogleWalkout demanded a 'truly equitable culture for everyone'; Pichai shares a "comprehensive" plan for employees to safely report sexual harassment
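To make the pricing described above concrete, here is the single-line bill as a small function. This is a sketch of the quoted numbers only, not an official Google Fi calculator; it assumes partial gigabytes are prorated:

```typescript
// Sketch of the single-line plan quoted above: $20 for the line, $10 per GB,
// with data charges capped once usage passes 6 GB, so the bill tops out at $80.
function monthlyBillUsd(gigabytesUsed: number): number {
  const baseUsd = 20;
  const perGbUsd = 10;
  const billableGb = Math.min(gigabytesUsed, 6); // data beyond 6 GB is not charged
  return baseUsd + billableGb * perGbUsd;
}

console.log(monthlyBillUsd(2.5)); // 45
console.log(monthlyBillUsd(10));  // 80 — the quoted monthly maximum
```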

Uber fined by British ICO and Dutch DPA for nearly $1.2m over a data breach from 2016

Prasad Ramesh
29 Nov 2018
3 min read
British and Dutch authorities fined Uber a total of nearly $1.2m on Tuesday over a data breach that occurred in 2016. The UK's Information Commissioner's Office (ICO) imposed a £385,000 fine (close to $500,000) on Uber for "failing to protect customers' personal information during a cyber attack". The attack happened in November 2016. Additionally, the Dutch Data Protection Authority imposed its own €600,000 (close to $680,000) fine over the same incident, for not reporting the data breach to the Dutch DPA within 72 hours of its discovery. For the same data breach, the US government has fined Uber $148m.

Attackers obtained login credentials to access Uber's servers and downloaded files in November 2016. These files contained records of users worldwide, including passengers' full names, phone numbers, and email addresses. Personal details of around 2.7 million UK customers and 174,000 Dutch citizens were downloaded from Uber's cloud servers by the hackers in this breach.

Steve Eckersley, the Director of Investigations at the ICO, said: "This was not only a serious failure of data security on Uber's part, but a complete disregard for the customers and drivers whose personal information was stolen. At the time, no steps were taken to inform anyone affected by the breach, or to offer help and support. That left them vulnerable."

As the attack occurred in 2016, it was not subject to the EU's GDPR, which came into effect in May 2018; the GDPR rules could have increased the fines for Uber. The affected customers and drivers were not told about the incident, and Uber only started monitoring the accounts for fraud a year later. The attackers demanded $100,000 to destroy the data they took, which Uber paid as a "bug bounty". This is unlike a legitimate bug bounty program, a common practice in the tech industry: the attackers had malicious intent, downloading the data rather than just pointing out the vulnerability.

Eckersley further added: "Paying the attackers and then keeping quiet about it afterwards was not, in our view, an appropriate response to the cyber attack."

In a statement, Uber representatives said: "We're pleased to close this chapter on the data incident from 2016. We've also made significant changes in leadership to ensure proper transparency with regulators and customers moving forward. We learn from our mistakes and continue our commitment to earn the trust of our users every day." Uber posted a billion-dollar loss this quarter.

Can Uber Eats revitalize the Uber growth story?
EU slaps Google with $5 billion fine for the Android antitrust case
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber

The State of Mozilla 2017 report focuses on internet health and user privacy

Prasad Ramesh
29 Nov 2018
4 min read
The State of Mozilla 2017 report is out, covering the areas where Mozilla has made an impact and its activities in 2017-18. We look at some of the important details from the report.

Towards building a healthier internet

In the last two years, there have been scandals and news stories around big tech companies relating to data misuse, privacy violations and more, including the Cambridge Analytica scandal, Google's location tracking, and many others. Public and political trust in large tech companies has eroded following revelations of how some of these companies operate and treat user data. The Mozilla report says the focus is now on how to limit these tech platforms and encourage them to adopt data regulation protocols. Mozilla seeks to fill the void where there is a lack of people who can steer decisions towards building a better internet.

The State of Mozilla 2017 report reads: "When the United States Federal Communications Commission attacks net neutrality or the Indian government undermines privacy with Aadhaar, we see people around the world—including hundreds of thousands of members of the Mozilla community—stand up and say, Things should not work this way."

Read also: Is Mozilla the most progressive tech organization on the planet right now?

The Mozilla Foundation and the Mozilla Corporation

Mozilla was founded in 1998 as an open source project, back when open source was truly open source, free of things like the Commons Clause. Mozilla consists of two organizations: the Mozilla Foundation, which supports emerging leaders and mobilizes citizens towards better health of the internet, and the Mozilla Corporation, a wholly owned subsidiary of the Foundation, which creates Mozilla products and advances public policy.

The Mozilla Foundation

Beyond building products, Mozilla invests in people and organizations with a common vision. Another part of the State of Mozilla 2017 report reads: "Our core program areas work together to bring the most effective ideas forward, quickly and where they have the most impact. As a result of our work, internet users see a change in the products they use and the policies that govern them."

Every year, the Mozilla Foundation creates the open source Internet Health Report to shed light on what's been happening on the internet, specifically on its wellbeing. The research draws on data from multiple sources in areas like privacy and security, open innovation, decentralization, web literacy, and digital inclusion. Per the report, Mozilla spent close to a million dollars in 2017 on its agenda-setting work. Mozilla has also mobilized conscious internet users with campaigns around net neutrality in the US, India's Aadhaar biometric system, copyright reform in the EU, and more. It has additionally invested in connecting internet health leaders and worked on data and privacy issues across the globe, putting about $24M into this work in 2017.

The Mozilla Corporation

Mozilla says that to take charge of changing internet culture, it needs to do more than build products. Following Firefox Quantum's success, the focus is on better enabling people to take control of their online lives. Another part of the State of Mozilla 2017 report highlights this vision: "Over the coming years, we will become the leading provider of user agency and online privacy by developing long-term trusted relationships with "conscious choosers" with a focus on helping people navigate their connected lives."

Mozilla pulled its ads from Facebook after the Cambridge Analytica scandal

After learning about the Cambridge Analytica incident, and guided by the Mozilla Manifesto, Mozilla decided to pull its ads from Facebook. The Manifesto says: "Individuals' security and privacy on the Internet are fundamental and must not be treated as optional." After sending a message with this action, Mozilla also launched Facebook Container, a version of Multi-Account Containers that prevents Facebook from tracking its users when they are not on the platform. Mozilla says that everyone has a right to keep their private information private and control their own web experiences.

You can view the full State of Mozilla 2017 report at the Mozilla website.

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Is Mozilla the most progressive tech organization on the planet right now?

Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its autoplay policy earlier this week. The policy had been delayed after it first shipped with the Chrome 66 stable release back in May this year. The policy change is now scheduled to arrive with Chrome 71 in the upcoming month.

The autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to autoplay their content, the new policy will prevent playback by default. For most sites, playback will resume, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users' past behavior with sites that have autoplay enabled. If a user regularly lets audio play for more than 7 seconds on a website, autoplay is enabled for that website. This is done with the help of a "Media Engagement Index" (MEI), an index stored locally per Chrome profile on a device. The MEI tracks the number of visits to a site that include audio playback of more than 7 seconds. Each website gets a score between zero and one in the MEI, where a higher score indicates that the user doesn't mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to the list.

"We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default," mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. Per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the audio policy on websites with audio enabled. Following this, Google made an adjustment to its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for developers

As per Google's new adjustments to the autoplay policy, audio will resume automatically once the user has interacted with the page and the start() method of a source node is called. Source nodes represent the individual audio snippets that most games play, for example the sound played when a player collects a coin, or the background music that plays in a particular stage within a game. Game developers call the start() function on source nodes whenever any of these sounds are needed for the game. These changes will enable autoplay in most web games when the user starts playing the game.
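As a minimal sketch of what that adjustment means for a game's audio code (assuming an element with id "start-game" and a previously decoded AudioBuffer; this is a typical Web Audio usage pattern, not Chrome's internal logic):

```typescript
// A game can create its AudioContext up front; under the policy it may start
// in the "suspended" state until the user interacts with the page.
const ctx = new AudioContext();

function playSound(buffer: AudioBuffer): void {
  const source = ctx.createBufferSource(); // a source node, as described above
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(); // the start() call the policy adjustment keys off
}

// After a user gesture such as "start game", resuming the context (and any
// subsequent start() calls) is permitted.
document.querySelector("#start-game")?.addEventListener("click", async () => {
  if (ctx.state === "suspended") {
    await ctx.resume();
  }
  // playSound(coinBuffer); // coinBuffer: an AudioBuffer decoded earlier
});
```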
The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn't work as expected. Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos to viewers starting January 15, 2019, when all existing annotations will be removed.

"We always put our users first but we also don't want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71," says the Google team.

For more information, check out Google's official blog post.

"ChromeOS is ready for web development" – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google's initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

Bhagyashree R
29 Nov 2018
3 min read
Yesterday, the Linux Foundation announced that it is joining hands with the RISC-V Foundation to drive open source development and adoption of the RISC-V instruction set architecture (ISA).

https://twitter.com/risc_v/status/1067553703685750785

The RISC-V Foundation is a non-profit corporation responsible for directing the future development of the RISC-V ISA. Since its formation, the RISC-V Foundation has grown quickly and now includes more than 100 member organizations. With this collaboration, the foundations aim to further grow the RISC-V ecosystem and provide improved support for the development of new applications and architectures across all computing platforms.

Rick O'Connor, the executive director of the RISC-V Foundation, said, "With the rapid international adoption of the RISC-V ISA, we need increased scale and resources to support the explosive growth of the RISC-V ecosystem. The Linux Foundation is an ideal partner given the open source nature of both organizations. This joint collaboration with the Linux Foundation will enable the RISC-V Foundation to offer more robust support and educational tools for the active RISC-V community, and enable operating systems, hardware implementations and development tools to scale faster."

The Linux Foundation will provide governance, best practices for open source development, and resources such as training programs and infrastructure tools. It will also help RISC-V with community outreach, marketing, and legal expertise.

Jim Zemlin, the executive director of the Linux Foundation, believes that RISC-V has great potential, given its popularity in areas like AI, machine learning, and IoT. He said, "RISC-V has great traction in a number of markets with applications for AI, machine learning, IoT, augmented reality, cloud, data centers, semiconductors, networking and more. RISC-V is a technology that has the potential to greatly advance open hardware architecture. We look forward to collaborating with the RISC-V Foundation to advance RISC-V ISA adoption and build a strong ecosystem globally."

The two foundations have already started working on a pair of getting-started guides for running Zephyr, a small, scalable open source real-time operating system (RTOS) optimized for resource-constrained devices. They are also holding the RISC-V Summit, a four-day event running December 3-6 in Santa Clara. The summit will include sessions on the RISC-V ISA architecture, commercial and open source implementations, software and silicon, vectors and security, applications and accelerators, and much more.

Read the complete announcement on the Linux Foundation's official website.

Uber becomes a Gold member of the Linux Foundation
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
Google becomes new platinum member of the Linux foundation

IPv6 support to be automatically rolled out for most Netlify Application Delivery Network users

Melisha Dsouza
29 Nov 2018
3 min read
Earlier this week, Netlify announced in a blog post that the company has begun the rollout of IPv6 support on the Netlify Application Delivery Network. Netlify has adopted IPv6 support as a solution to the IPv4 address capacity problem. This news comes right after the announcement that Netlify raised $30 million for a new 'Application Delivery Network', aiming to replace servers and infrastructure management.

Netlify provides developers with an all-in-one workflow to build, deploy, and manage modern web projects. Its 'Application Delivery Network' is a new platform for the web that will assist web developers in building newer web-based applications. There is no need for developers to set up or manage servers, as all content and applications are created directly on a global network. It removes the dependency on origin infrastructure, allowing companies to host entire applications globally using APIs and microservices.

An IP address is assigned to every server connected to the internet. Netlify explains how the traditionally used IPv4 address pool is getting smaller with the continuous expansion of the internet. This is where IPv6 steps in. IPv6 defines an IP address as a 128-bit entity instead of the 32-bit integer-based IPv4 addresses. For example, IPv4 defines an address like 167.99.129.42, while an IPv6 address looks like 2001:0db8:85a3:0000:0000:8a2e:0370:7334. Even though the IPv6 format is harder to remember, it creates vastly more possible addresses to help support the rapid growth of the internet. In addition to more efficient routing and packet processing, IPv6 also offers better security than IPv4, because IPsec, which provides confidentiality, authentication and data integrity, is baked into IPv6.

According to the blog post, users serving their sites on a subdomain of netlify.com, or using custom domains registered with an external domain registrar, will automatically begin using IPv6 on their ADN. Customers using Netlify for DNS management can go to the Domains section of the dashboard and enable IPv6 for each of their domains. Customers with a complex or bespoke DNS configuration, or enterprise customers using Netlify's Enterprise ADN infrastructure, are advised to contact Netlify's support team or their account manager to ensure that their specific configuration is migrated to IPv6 appropriately.

Netlify's users have received this news well:

https://twitter.com/sethvargo/status/1067152518638116864

Hacker News is also flooded with positive comments for Netlify. The company has started off on the right foot; it will be interesting to see what customers think after enabling IPv6 for their Netlify ADN.

Head over to Netlify's blog for more insights on this news.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
libp2p: the modular P2P network stack by IPFS for better decentralized computing
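One quick way to check whether a site is reachable over IPv6 is to compare its A (IPv4) and AAAA (IPv6) DNS records. A small Node.js sketch follows; the domain is a placeholder:

```typescript
import { promises as dns } from "dns";

// Look up both IPv4 (A) and IPv6 (AAAA) records for a host. A site served
// over an IPv6-enabled ADN should return addresses for both record types.
async function checkDualStack(host: string): Promise<void> {
  const [v4, v6] = await Promise.all([
    dns.resolve4(host).catch(() => [] as string[]),
    dns.resolve6(host).catch(() => [] as string[]),
  ]);
  console.log(`${host} A:    ${v4.join(", ") || "none"}`);
  console.log(`${host} AAAA: ${v6.join(", ") || "none"}`);
}

checkDualStack("example.netlify.com"); // placeholder domain
```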

European consumer groups accuse Google of tracking its users’ location, call it a breach of GDPR

Sugandha Lahoti
29 Nov 2018
4 min read
Just as Google is facing large walkouts and protests against its policies, another consumer group has lodged a complaint against Google's user tracking. According to a report published by the European Consumer Organisation (BEUC), Google uses various methods to encourage users to enable the 'Location History' and 'Web and App Activity' settings, which are integrated into all Google user accounts, and the group alleges that Google uses these features to facilitate targeted advertising. BEUC and its members, including those from the Czech Republic, Greece, Norway, Slovenia, and Sweden, argue that what Google is doing is in breach of the GDPR.

Per the report, BEUC says: "We argue that consumers are deceived into being tracked when they use Google services. This happens through a variety of techniques, including withholding or hiding information, deceptive design practices, and bundling of services. We argue that these practices are unethical, and that they in our opinion are in breach of European data protection legislation because they fail to fulfill the conditions for lawful data processing."

Android users are generally unaware that their Location History or Web & App Activity is enabled. Google uses a variety of dark patterns to collect the exact location of the user, including the latitude (e.g. floor of the building) and mode of transportation, both outside and inside, to serve targeted advertising. Moreover, there is no real option to turn off Location History, only to pause it. Even if the user has kept Location History disabled, their location will still be shared with Google through Web & App Activity. "If you pause Location history, we make clear that — depending on your individual phone and app settings — we might still collect and use location data to improve your Google experience," said a Google spokesman to Reuters.

"These practices are not compliant with the General Data Protection Regulation (GDPR), as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," BEUC, speaking on behalf of the countries' consumer groups, said.

Google claims to have a legitimate interest in serving ads based on personal data, but the fact that location data is collected, and how it is used, is not clearly expressed to the user. BEUC calls Google out, arguing that the company's claimed legitimate interest in serving advertising as part of its business model cannot override the data subject's fundamental right to privacy: in light of how Web & App Activity is presented to users, the interests of the data subject should take precedence.

Reuters asked a Google spokesman for comment on the consumer groups' complaints. According to him: "Location History is turned off by default, and you can edit, delete, or pause it at any time. If it's on, it helps to improve services like predicted traffic on your commute. We're constantly working to improve our controls, and we'll be reading this report closely to see if there are things we can take on board."

People are largely supportive of BEUC's allegations against Google.

https://www.youtube.com/watch?v=qIq17DeAc1M

However, some people feel it is just another attack on Google: if people voluntarily, and mostly knowingly, use these services and consent to giving personal information, it should not be a concern for any third party.

"I can't help but think that there's some competitors' money behind these attacks on Google. They provide location services which you can turn off or delete yourself, which is anonymous to anyone else, and there's no evidence they sell your data (they just anonymously connect you to businesses you search for). Versus carriers which track you without an option to opt-in or out and actually do sell your data to 3rd parties."

"If the vast majority of customers don't know arithmetic, then yes, that's exactly what happened. Laws are a UX problem, not a theory problem. If most of your users end up getting deceived, you can't say "BUT IT WAS ALL RIGHT THERE IN THE SMALL PRINT, IT'S NOT MY FAULT THEY DIDN'T READ IT!". Like, this is literally how everything else works."

Read the full conversation on Hacker News. You may also go through the full "Every step you take" report published by BEUC for more information.

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly
Is Anti-trust regulation coming to Facebook following fake news inquiry made by a global panel in the House of Commons, UK?
Google hints shutting down Google News over EU's implementation of Article 11 or the "link tax"

How the biggest ad fraud rented datacenter servers and used botnet malware to infect 1.7m systems

Bhagyashree R
28 Nov 2018
4 min read
Yesterday, the Department of Justice charged eight men for their alleged involvement in a massive ad fraud that caused losses of tens of millions of dollars. A 13-count indictment was unsealed in federal court in Brooklyn against the men, with charges including wire fraud, computer intrusion, aggravated identity theft, and money laundering, among others. The perpetrators used two mechanisms for conducting the fraud: a datacenter-based scheme (Methbot) and a botnet-based scheme (3ve).

The eight accused men are Aleksandr Zhukov, Boris Timokhin, Mikhail Andreev, Denis Avdeev, Dmitry Novikov, Sergey Ovsyannikov, Aleksandr Isaev, and Yevgeniy Timchenko. According to the DOJ announcement, three of the men have been arrested and are awaiting extradition to the United States.

How was the ad fraud conducted?

Revenue generated by digital advertising depends on how many users click or view the ads on websites. The perpetrators faked both the users and the webpages: with the help of an automated program, they loaded advertisements on fake web pages in order to generate advertising revenue. The Department of Justice described two schemes through which the accused carried out the fraud:

Datacenter-based scheme

According to the indictment, from September 2014 to December 2016 the fraudsters operated an advertising network called Ad Network #1. This network had business arrangements with other advertising networks through which it received payments in return for placing advertising placeholders, or ad tags, on websites. Instead of placing these ad tags on legitimate publishers' websites, Ad Network #1 rented more than 1,900 computer servers housed in commercial datacenters. With these datacenter servers, they loaded ads on fabricated websites, spoofing more than 5,000 domains. To make it look like a real user had viewed or clicked on an advertisement, they simulated the normal activities of a real internet user. In addition, they leased more than 650,000 IP addresses and assigned multiple IP addresses to each datacenter server. These IP addresses were then fraudulently registered to make it appear that the datacenter servers were residential computers belonging to individual human internet users. Through this scheme, Ad Network #1 generated billions of ad views and caused businesses to pay more than $7 million for ads that were never actually viewed by real human internet users.

Botnet-based scheme

The indictment further reveals that between December 2015 and October 2018, Ovsyannikov, Timchenko, and Isaev operated another advertising network called Ad Network #2. In this scheme, they used a global botnet of malware-infected computers. The three fraudsters developed an intricate infrastructure of command-and-control servers to direct and monitor the infected computers. This infrastructure gave them access to more than 1.7 million infected computers belonging to ordinary individuals and businesses in the United States and elsewhere. They used hidden browsers on those infected computers to download fabricated webpages and load ads onto them. Through this scheme, Ad Network #2 caused businesses to pay more than $29 million for ads. This is one of the most complex and sophisticated ad frauds known, popularly named 3ve (pronounced "Eve").

U.S. law enforcement authorities, together with various private sector companies including White Ops and Google, began the process of dismantling the criminal cyber infrastructure used in the botnet-based scheme. 3ve infected computers with malicious software known as Kovter. As part of the investigation, the FBI also discovered an additional cybercrime infrastructure committing digital advertising fraud, called Boaxxe, which used datacenter servers located in Germany and a botnet of infected computers in the United States.

Google and White Ops investigators also realized that this was not a simple botnet, given its efforts to evade filtering and containment of its traffic. Scott Spencer, a Google product manager, told Buzzfeed: "The thing that was really different here was the number of techniques that they used, their ability to quickly respond when they thought they were being detected, and to evolve the mechanisms they were using in real time. We would start to filter traffic and we'd see them change things, and then we'd filter a different way and then they'd change things."

The United States Computer Emergency Readiness Team (US-CERT) has published an alert which highlights 3ve's botnet behavior and how it interacts with the Boaxxe and Kovter botnets. It also lists some measures to avoid being affected by these malware strains.

To know more details about this case, check out the announcement by the Department of Justice.

A multimillion-dollar ad fraud scheme that secretly tracked users affected millions of Android phones. This is how Google is tackling it.
Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on elections.
DARPA on the hunt to catch deepfakes with its AI forensic tools underway