Tech News

Introducing CT-Wasm, a type-driven extension to WebAssembly for secure, in-browser cryptography

Bhagyashree R
23 Jan 2019
3 min read
Researchers from the University of California and the University of Cambridge have come up with Constant-Time WebAssembly (CT-Wasm), the details of which are shared in their December paper, CT-Wasm: Type-Driven Secure Cryptography for the Web Ecosystem. It is a type-driven, strict extension to WebAssembly that aims to address the state of cryptography in the web ecosystem. CT-Wasm gives developers a principled way to improve the quality and auditability of web platform cryptography libraries while maintaining the convenience that has made JavaScript successful.

Why was CT-Wasm introduced?

A lot of work has gone into implementing client- and server-side cryptography in JavaScript. But there are still some widespread security concerns in JavaScript, which CT-Wasm tries to solve:

Side channels: When implementing a cryptographic algorithm, functional correctness is not the only concern. It is also important to enforce information-flow properties that take the existence of side channels into account. For instance, an attacker can use the duration of a computation as a side channel: by comparing different executions, they can find out which program paths were taken and work backward to determine information about secret keys and messages. Additionally, modern JavaScript runtimes are extremely complex software systems that include just-in-time (JIT) compilation and garbage collection (GC) techniques, which can inherently expose timing side channels.

In-browser cryptography: Another concern is in-browser cryptography, the implementation of cryptographic algorithms in JavaScript running in a user's browser.

Unskilled cryptographers: Much JavaScript cryptography is written by unskilled cryptographers who do not generally guard against even the most basic timing side channels.

How does CT-Wasm address these concerns?
Recently, all major browsers have added support for WebAssembly (Wasm), a bytecode language. As a low-level bytecode language, Wasm already provides a firmer foundation for cryptography than JavaScript: its "close-to-the-metal" instructions give more confidence in timing characteristics than JavaScript's unpredictable optimizations. It also has a strong, static type system and a principled design: it uses a formal small-step semantics, and a well-typed Wasm program enjoys standard progress and preservation properties.

CT-Wasm extends Wasm into a verifiably secure cryptographic language by augmenting its type system and semantics with cryptographically meaningful types. It combines the convenience of in-browser JavaScript crypto with the security of a low-level, formally specified language. Using CT-Wasm, developers can distinguish secret data, such as keys and messages, from public data. Having made that distinction, they can impose secure information flow and constant-time programming disciplines on code that handles secret data, ensuring that well-typed CT-Wasm code cannot leak such data. CT-Wasm also lets developers incorporate third-party cryptographic libraries, as they do with JavaScript, while ensuring by construction that these libraries do not leak any secret information.

For more details, read the paper: CT-Wasm: Type-Driven Secure Cryptography for the Web Ecosystem.

The elements of WebAssembly – Wat and Wasm, explained [Tutorial]
Now you can run nginx on Wasmjit on all POSIX systems
Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux
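The timing side channel described above is easy to demonstrate outside of Wasm. As a rough illustration in Python (not CT-Wasm itself), a naive byte-by-byte comparison returns as soon as it finds a mismatch, so its running time leaks how much of a secret an attacker has guessed, while the standard library's hmac.compare_digest runs in time independent of where the mismatch occurs:

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time
    # depends on how long a matching prefix the attacker supplied.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where
    # the first difference occurs, closing the timing side channel.
    return hmac.compare_digest(a, b)

secret = b"correct horse battery staple"
print(naive_compare(secret, b"correct horse battery stapl3"))  # False
print(constant_time_compare(secret, secret))                   # True
```

This is the discipline CT-Wasm enforces at the type level: code that handles secret-typed data is restricted to constant-time operations by construction, rather than by programmer vigilance.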

Linus Torvalds is sorry for his ‘hurtful behavior’, is taking ‘a break (from the Linux community) to get help’

Natasha Mathur
17 Sep 2018
4 min read
Linux is one of the most popular operating systems, built around the Linux kernel created by Linus Torvalds. Because it is free and open source, it quickly gained a huge audience among developers. Torvalds welcomed other developers' contributions to the kernel, provided that they kept their contributions free. Thanks to this, thousands of developers have been working to improve Linux over the years, leading to its huge popularity today.

Yesterday, Linus, who has been working on the kernel for almost three decades, caught the Linux community by surprise as he apologized and opened up about taking a break over his 'hurtful' behavior that 'contributed to an unprofessional environment'. In a long email to the Linux kernel mailing list, Torvalds announced the Linux 4.19 release candidate and then talked about his 'look yourself in the mirror' moment.

"This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry," admitted Torvalds.

The confession came about after Torvalds admitted to messing up the schedule of the Maintainers' Summit, a meeting of Linux's top 40 or so developers, by planning a family vacation. "Yes, I was somewhat embarrassed about having screwed up my calendar, but honestly, I was mostly hopeful that I wouldn't have to go to the kernel summit that I have gone to every year for just about the last two decades. That whole situation then started a whole different kind of discussion -- I realized that I had completely mis-read some of the people involved," confessed Torvalds.

Torvalds has been notorious for his outspoken nature and outbursts towards others, especially developers in the Linux community.
Sarah Sharp, a Linux maintainer, quit the Linux community in 2015 over Torvalds' offensive behavior and called it 'toxic'. Torvalds exploded at Intel earlier this year for spinning the Spectre fix as a security feature. Last year, Torvalds also responded with profanity about different approaches to security during a discussion about whitelisting proposed features for Linux version 4.15.

"Maybe I can get an email filter in place so that when I send email with curse-words, they just won't go out. I really had been ignoring some fairly deep-seated feelings in the Community...I am not an emotionally empathetic kind of person...I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely," writes Torvalds.

Torvalds then went on to talk about taking a break from the Linux community. "This is not some kind of "I'm burnt out, I need to just go away" break. I'm not feeling like I don't want to continue maintaining Linux. I very much want to continue to do this project that I've been working on for almost three decades. I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow."

A discussion with over 500 comments has already started on Reddit regarding Torvalds' decision. While some people are supporting Torvalds by accepting his apology, others feel that the apology was long overdue and will believe him only after he puts his words into action.

https://twitter.com/TejasKumar_/status/1041527028271312897
https://twitter.com/coreytabaka/status/1041468174397399041

Python founder resigns – Guido van Rossum goes 'on a permanent vacation from being BDFL'
Facebook and Arm join Yocto Project as platinum members for embedded Linux development
NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 201

Firefox Preview 3.0 released with Enhanced Tracking Protection, Open links in Private tab by default and more

Fatema Patrawala
28 Nov 2019
3 min read
Earlier this month, the Firefox team released Firefox Preview 3.0 with various features to make browsing and bookmarking safer and easier. This release turns on Enhanced Tracking Protection by default for all users and adds notification support for long-running downloads.

Key features in Firefox Preview 3.0

Enhanced Tracking Protection: Enhanced Tracking Protection, now on by default for all users, protects you from ads, analytics, cryptomining, and fingerprinting trackers.

Open links in private tabs by default: Firefox Preview 3.0 lets you open pages directly in private browsing, so you can search and browse without saving any history in the browser.

Option to clear browsing information on exit: The Quit option in the menu automatically deletes your browsing history every time you exit Firefox through that option.

Option to choose what information is synced across devices: In this release you can choose which types of browsing information should be synced across your devices.

Set autoplay and background behavior: The latest Firefox Preview gives you many options for playing video and audio on phones, including background playback and autoplay settings.

See and manage downloads: You can easily download files from various sites within Firefox Preview. A progress bar displays in the Notifications panel when the download begins, letting you pause, resume, or cancel the download. If the download fails, tap Try Again to restart it. If it succeeds, a confirmation pop-up displays where you can tap Open to open the file.

Updated browser menu: An updated browser menu replaces the Quick Action bar present in older versions of Firefox.

Manually add search engines: Firefox Preview lets you set a default search engine. There is a variety of search engines to choose from, such as Google and Bing, and you can also manually add other search engines and set one as your default.
Move the navigation bar to the top or bottom: By default, the Firefox Preview navigation bar displays at the bottom of the app, but you can move it to the top if desired.

Force enable zoom: With this, you'll always be able to zoom in on websites. You can use the + sign that displays at the bottom of every website within Firefox Preview to zoom in if necessary.

To know more about this release in detail, check out the official Firefox blog page.

Firefox 70 released with better security, CSS, and JavaScript improvements
The new WebSocket Inspector will be released in Firefox 71
Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Scroll Snapping and other cool CSS features come to Firefox 68

OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners

Prasad Ramesh
09 Nov 2018
3 min read
OpenAI released Spinning Up yesterday, an educational resource for anyone who wants to become a skilled deep learning practitioner. Spinning Up includes many reinforcement learning examples, documentation, and tutorials.

The inspiration to build Spinning Up comes from OpenAI's Scholars and Fellows initiatives, where OpenAI observed that people with little to no machine learning experience can rapidly become practitioners given the right guidance and resources. Spinning Up in Deep RL is also integrated into the curriculum for OpenAI's 2019 cohorts of Scholars and Fellows.

A quick overview of Spinning Up course content

- A short introduction to reinforcement learning: what it is, the terminology used, the different types of algorithms, and the basic theory needed to develop an understanding.
- An essay that lays out the points and requirements for growing into a reinforcement learning research role, covering background, learning by practice, and developing a project.
- A list of important research papers, organized by topic, for further learning.
- A well-documented code repository of short, standalone implementations of various algorithms, including Vanilla Policy Gradient (VPG), Trust Region Policy Optimization (TRPO), Proximal Policy Optimization (PPO), Deep Deterministic Policy Gradient (DDPG), Twin Delayed DDPG (TD3), and Soft Actor-Critic (SAC).
- And finally, a few exercises to solve so you can start applying what you've learned.

Support plan for Spinning Up

Fast-paced support period: For the first three weeks after release, OpenAI will quickly work on bug fixes, installation issues, and resolving errors in the docs. They will work to streamline the user experience so that it is as easy as possible to self-study with Spinning Up.

A major review in April 2019: Around April next year, OpenAI will perform a serious review of the state of the package based on feedback received from the community. After that, any plans for future modification will be announced.
Public release of internal development: As changes are made to Spinning Up in Deep RL for OpenAI Scholars and Fellows, they will also be pushed to the public repository so that they are available to everyone immediately.

In Spinning Up, running a deep reinforcement learning algorithm is as easy as:

python -m spinup.run ppo --env CartPole-v1 --exp_name hello_world

For more details on Spinning Up, visit the OpenAI Blog.

This AI generated animation can dress like humans using deep reinforcement learning
Curious Minded Machine: Honda teams up with MIT and other universities to create an AI that wants to learn
MIT plans to invest $1 billion in a new College of Computing that will serve as an interdisciplinary hub for computer science, AI, and data science

Facebook released Hermes, an open source JavaScript engine to run React Native apps on Android

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Facebook released a new JavaScript engine called Hermes under an open source MIT license. According to Facebook, the new engine will speed up start times for Android apps built with the React Native framework.

https://twitter.com/reactnative/status/1149347916877901824

Facebook software engineer Marc Horowitz unveiled Hermes at the Chain React 2019 conference held yesterday in Portland, Oregon. Hermes is a new tool for developers, primarily intended to improve app startup performance in the same way Facebook does for its own apps, and to make apps more efficient on low-end smartphones. The oft-cited advantage of React Native is that developers can target multiple mobile platforms with a single code base, but as with any cross-platform framework, there are trade-offs in terms of performance, security, and flexibility. Hermes is available on GitHub for all developers to use, and it has also got its own Twitter account and home page.

In a demo, Horowitz showed that a React Native app with Hermes was fully loaded in half the time of the same app without Hermes, or about two seconds faster. Check out the video below.

Horowitz emphasized that Hermes cuts the APK size (the size of the app file) to half the 41MB of a stock React Native app, and removes a quarter of the app's memory usage. In other words, with Hermes developers can get users interacting with an app faster, with fewer obstacles like slow download times and the constraints caused by multiple apps sharing limited memory, especially on lower-end phones. And these are exactly the phones Facebook is aiming at with Hermes, as opposed to the high-end phones that well-paid developers typically carry themselves. "As developers we tend to carry the latest flagship devices. Most users around the world don't," he said. "Commonly used Android devices have less memory and less storage than the newest phones and much less than a desktop. This is especially true outside of the United States. 
Mobile flash is also relatively slow, leading to high I/O latency."

It's not every day a new JavaScript engine is born, and while there are plenty of engines available for browsers, such as Google's V8, Mozilla's SpiderMonkey, and Microsoft's Chakra, Horowitz notes that Hermes is not aimed at browsers or at server-side use in the way Node.js is. "We're not trying to compete in the browser space or the server space. Hermes could in theory be for those kinds of use cases, that's never been our goal." The Register reports that Facebook has no plan to push Hermes beyond React Native to Node.js or to turn it into the foundation of a Facebook-branded browser, because it is optimized for mobile apps and wouldn't offer advantages over other engines in other usage scenarios.

Hermes tries to be efficient through bytecode precompilation, rather than loading JavaScript and then parsing it: it employs ahead-of-time (AOT) compilation during the mobile app build process, which allows for more extensive bytecode optimization. Along similar lines, the Fuchsia Dart compiler for iOS is an AOT compiler. There are other ways to squeeze more performance out of JavaScript; the V8 engine, for example, offers a capability called custom snapshots, though this is a bit more technically demanding than using Hermes. Hermes also abandons the just-in-time (JIT) compiler used by other JavaScript engines to compile frequently interpreted code into machine code, because in the context of React Native the JIT doesn't do much to ease mobile app workloads.

The reason Hermes exists, as per Facebook, is to make React Native better. "Hermes allows for more optimization on mobile since developers control the build stack," said a Facebook spokesperson in an email to The Register. "For example, we implemented bytecode precompilation to improve performance and developed more efficient garbage collection to reduce memory usage."
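The bytecode precompilation idea can be sketched with a loose analogy in Python (CPython here, not Hermes): pay the parse-and-compile cost once, up front, and then execute the resulting bytecode repeatedly without touching the source text again. This is an illustration of the concept, not Hermes' actual pipeline:

```python
# "Build time": compile the source string to a bytecode object once,
# analogous to Hermes compiling JavaScript during the app build.
source = "result = sum(i * i for i in range(10))"
code_obj = compile(source, "<aot>", "exec")

# "Run time": execute the precompiled bytecode without re-parsing.
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 285
```

In Hermes' case the compiled bytecode ships inside the app package, so the phone never pays the parse cost at startup at all.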
In a discussion on Hacker News, Microsoft developer Andrew Coates claims that internal testing of Hermes and React Native in conjunction with Microsoft Office for Android shows a time-to-interactive (TTI) of 1.1s with Hermes, compared to 1.4s with V8, and a 21.5MB runtime memory impact, compared to 30MB with V8.

Hermes is mostly compatible with ES6 JavaScript. To keep the engine small, support for some language features is missing, such as the with statement and local-mode eval(). Facebook's spokesperson also told The Register that they plan to publish benchmark figures in the coming week to support the performance claims.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
OpenID Foundation questions Apple's Sign In feature, says it has security and privacy risks
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more

Developers can now incorporate Unity features into native iOS and Android apps

Sugandha Lahoti
18 Jun 2019
2 min read
Yesterday, Unity announced that from Unity 2019.3.a2 onwards, Android and iOS developers will be able to incorporate Unity features into their apps and games. Developers will be able to integrate the Unity runtime components and their content (augmented reality, 3D/2D real-time rendering, 2D mini-games, and more) into a native platform project, using Unity as a library. "We know there are times when developers using native platform technologies (like Android/Java and iOS/Objective C) want to include features powered by Unity in their apps and games," said J.C. Cimetiere, senior technical product manager for mobile platforms, in a blog post.

How it works

The overall mobile app build process is still the same: Unity creates the iOS Xcode and Android Gradle projects. However, to enable this feature, the Unity team has modified the structure of the generated iOS Xcode and Android Gradle projects as follows:

- A library part – an iOS framework and an Android Archive (AAR) file – that includes all source files and plugins
- A thin launcher part that includes app representation data and runs the library part

They have also released step-by-step instructions on how to integrate Unity as a library on iOS and Android, including basic sample projects. Currently, Unity as a Library supports full-screen rendering only; rendering on only part of the screen is not supported, and neither is loading more than one instance of the Unity runtime. Developers also need to adapt third-party plugins (native or managed) for them to work properly.

Unity hopes that this integration will boost AR marketing by helping brands and creative agencies easily insert AR directly into their native mobile apps.

Unity Editor will now officially support Linux
Unity has launched the 'Obstacle Tower Challenge' to test AI game players
Obstacle Tower Environment 2.0: Unity announces Round 2 of its 'Obstacle Tower Challenge' to test AI game players
Resecurity reports 'IRIDIUM' behind Citrix data breach; 200+ government agencies, oil and gas companies, and technology companies also targeted

Melisha Dsouza
11 Mar 2019
4 min read
Last week, Citrix, the American cloud computing company, disclosed that it suffered a data breach of its internal network. The company was informed of the attack by the FBI. In a statement posted on Citrix's official blog, the company's Chief Security Information Officer Stan Black said, "the FBI contacted Citrix to advise they had reason to believe that international cybercriminals gained access to the internal Citrix network. It appears that hackers may have accessed and downloaded business documents. The specific documents that may have been accessed, however, are currently unknown."

The FBI told Citrix that the hackers likely used a tactic known as password spraying to exploit weak passwords. The blog further states that "Once they gained a foothold with limited access, they worked to circumvent additional layers of security."

In the wake of these events, the security firm Resecurity reached out to NBC News, claiming it had reason to believe the attacks were carried out by an Iranian-linked group known as IRIDIUM. Resecurity says that IRIDIUM "has hit more than 200 government agencies, oil and gas companies, and technology companies including Citrix," and claims that IRIDIUM breached Citrix's network in December 2018. Charles Yoo, Resecurity's president, said that the hackers extracted at least six terabytes, and possibly up to ten terabytes, of sensitive data stored in the Citrix enterprise network, including e-mail correspondence, files in network shares, and other services used for project management and procurement. "It's a pretty deep intrusion, with multiple employee compromises and remote access to internal resources."
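Password spraying, the tactic mentioned above, tries a small set of common passwords across many accounts rather than many passwords against one account, which helps attackers stay under per-account lockout thresholds. A minimal detection sketch over a hypothetical failed-login log (the log format and threshold are illustrative, not Citrix's actual tooling):

```python
from collections import defaultdict

# Hypothetical failed-login events: (source_ip, username).
failed_logins = [
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("203.0.113.9", "dave"),
    ("198.51.100.7", "alice"), ("198.51.100.7", "alice"),
]

def spraying_suspects(events, threshold=3):
    # Spraying touches many *distinct* accounts from one source;
    # classic brute force hammers a single account instead.
    accounts_per_ip = defaultdict(set)
    for ip, user in events:
        accounts_per_ip[ip].add(user)
    return {ip for ip, users in accounts_per_ip.items()
            if len(users) >= threshold}

print(spraying_suspects(failed_logins))  # {'203.0.113.9'}
```

Note that the second IP, which fails repeatedly against one account, would trip a per-account lockout but not this distinct-accounts heuristic, which is exactly why spraying and brute force need different defenses.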
Yoo further added that his firm has been tracking the Iranian-linked group for years and has reason to believe that IRIDIUM broke its way into Citrix's network about 10 years ago and has been "lurking inside the company's system ever since." There is no evidence that the attacks directly penetrated U.S. government networks, but the breach carries a potential risk that the hackers could eventually make their way into sensitive government networks. According to Black, "At this time, there is no indication that the security of any Citrix product or service was compromised."

Resecurity said that it first reached out to Citrix on December 28, 2018, to share an early warning about "a targeted attack and data breach." According to Yoo, an analysis indicated that the hackers were focused in particular on FBI-related projects, NASA and aerospace contracts, and work with Saudi Aramco, Saudi Arabia's state oil company. "Based on the timing and further dynamics, the attack was planned and organized specifically during Christmas period," Resecurity says in a blog. A spokesperson for Citrix confirmed to The Register that "Stan's blog refers to the same incident" described by Resecurity.

Twitter was abuzz with users expressing their confusion over the timeline of events and wondering about the consequences if IRIDIUM had truly been lurking in Citrix's network for 10 years:
https://twitter.com/dcallahan2/status/1104301320255754241
https://twitter.com/MalwareYoda/status/1104170906740350977
https://twitter.com/Maliciouslink/status/1104375001715798016

The data breach is worrisome, considering that Citrix sells workplace software to government agencies and handles sensitive computer projects for the White House Communications Agency, the U.S. military, the FBI, and many American corporations.

U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches
Internal memo reveals NASA suffered a data breach compromising employees' social security numbers
Equifax data breach could have been "entirely preventable", says House Oversight and Government Reform Committee staff report

Do Google Ads secretly track Stack Overflow users?

Vincy Davis
27 Jun 2019
5 min read
Update: A day after a user found the bug on Stack Overflow, Nick Craver, the Architecture Lead for Stack Overflow, updated users on how the company is handling it. He says the fingerprinting issue stems from ads relayed through third-party providers. Stack Overflow has been reaching out to experts and the Google Chrome security team, and has filed a bug in the Chrome tracker. Stack Overflow has contacted Google, its ad server provider, for assistance, and is testing deployment of SafeFrame to all ads; the SafeFrame API configures whether all ads on the page are forced to render inside a SafeFrame container. Stack Overflow is also trying to deploy the Feature-Policy header to block access to most browser features from all components in the page. Craver also specified in the update that Stack Overflow has decided not to turn off these ad campaigns immediately, as they need a reproduction of the issue in order to fix it.

A user by the name greggman discovered the bug. While working in his browser's devtools, he noticed the following message:

Image source: Stack Overflow Meta website

greggman then raised the query "Why is Stack Overflow trying to start audio?" on the Stack Overflow Meta site, which is intended for bugs, features, and discussion of Stack Overflow by its users. He found out that the message appears whenever a particular ad, served by Microsoft via Google, is shown on the website.

Image source: Stack Overflow Meta website

Later, another user, TylerH, did some investigation and revealed intriguing information about the identified bug. He found that the Google ad was employing the audio API to collect information from users' browsers in an attempt to fingerprint them. He says, "This isn't general speculation, I've spent the last half hour going through the source code linked above, and it goes to considerable lengths to de-anonymize viewers. 
Your browser may be blocking this particular API, but it's not blocking most of the data."

TylerH claims that this fingerprint tracking is definitely not being done for legitimate feature detection. He adds that the technique is applied in aggregate to generate a user fingerprint, which is included along with the advertising ID when recording analytics for the publisher. It is used to detect the following:

- The user's system resolution and accessibility settings
- The audio API capabilities supported by the user's browser
- The mobile browser-specific APIs supported by the user's browser

TylerH states that the script can detect many other details about the user, without the user's consent, and hence warns all Stack Overflow users: "Use an Ad blocker!"

As both these findings gained momentum on the Stack Overflow Meta site, Nick Craver, the Architecture Lead for Stack Overflow, replied to greggman and TylerH: "Thanks for letting us know about this. We are aware of it. We are not okay with it." Craver also mentioned that Stack Overflow has reached out to Google for support, and notified users that "This is not related to ads being tested on the network and is a distinctly separate issue. Programmatic ads are not being tested on Stack Overflow at all."

Users are annoyed at this response. Many find it hard to believe that the Architecture Lead for Stack Overflow had no idea about this and is only now going to work on it. A user on Hacker News comments that this response from Craver "encapsulates the entire problem with the current state of digital advertising in 1 simple sentence." Some users feel this is not surprising at all, as many websites use ads as tracking mechanisms. One HN user says, "Audio feature detection isn't even a novel technique. 
I've seen trackers look at download stream patterns to detect whether or not BBR congestion control is used, I have seen mouse latency based on the difference between mouse ups and downs in double clicks and I have seen speed-of-interaction checks in mouse movements."

Another comment reads, "I think ad blocking is a misnomer. What people are trying to do when blocking ads is prevent marketing people from spying on them. And the performance and resource consumption that comes from that. Personal opinion: Laws are needed to make what advertisers are doing illegal. Advertisers are spying on people to the extent where if the government did it they'd need a warrant."

Meanwhile, another user thinks the situation is not that bad, with Stack Overflow at least taking responsibility for the bug. The user wrote on Hacker News, "Let's be adults here. This is SO, and I imagine you've used and enjoyed the use of their services just like the rest of us. Support them by letting passive ads sit on the edge of the page, and appreciate that they are actually trying to solve this issue."

Approx. 250 public network users affected during Stack Overflow's security attack
Stack Overflow confirms production systems hacked
Facebook again, caught tracking Stack Overflow user activity and data
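The aggregation TylerH describes boils down to hashing a bundle of probed browser attributes into a stable pseudo-identifier. A simplified sketch of the idea in Python (the attribute names are hypothetical and this is not the ad script's actual code):

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    # Serialize the collected attributes deterministically and hash
    # them; the digest acts as a stable pseudo-identifier even when
    # no cookie or advertising ID is available.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical attributes a tracker might probe.
browser_a = {
    "screen": "1920x1080",
    "audio_sample_rate": 44100,   # probed via the audio API
    "touch_support": False,
    "timezone_offset": -480,
}

print(fingerprint(browser_a))
```

Because the same attribute set always hashes to the same value, the identifier survives cookie clearing; that stability, rather than any single probed value, is what makes the technique useful for tracking.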

Sugandha Lahoti
20 Nov 2018
2 min read

Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles

Last week, Alphabet’s Waymo announced that they will launch the world’s first commercial self-driving cars next month. Just two days after that, Apex.AI announced their autonomous mobility systems. This announcement came soon after they closed a $15.5MM Series A funding round, led by Canaan with participation from Lightspeed.

Apex.AI has designed a modular software stack for building autonomous systems. It integrates easily into existing systems as well as 3rd party software. An interesting claim they make about their system: “The software is not designed for peak performance — it’s designed to never fail. We’ve built redundancies into the system design to ensure that single failures don’t lead to system-wide failures.” Their two products are Apex.OS and Apex.Autonomy.

Apex.OS

Apex.OS is a meta-operating system, an automotive version of ROS (Robot Operating System). It allows software developers to write safe and secure applications based on ROS 2 APIs. Apex.OS is built with safety in mind: it is being certified according to the automotive functional safety standard ISO 26262 as a Safety Element out of Context (SEooC) up to ASIL D. It ensures system security through HSM support, process-level security, encryption, and authentication. Apex.OS improves production code quality through the elimination of all unsafe code constructs. It ships with support for automotive hardware, i.e. ECUs and automotive sensors. Moreover, it comes with complete documentation including examples, tutorials, design articles, and 24/7 customer support.

Apex.Autonomy

Apex.Autonomy provides developers with building blocks for autonomy. It has well-defined interfaces for easy integration with any existing autonomy stack. It is written in C++, is easy to use, and can be run and tested on Linux, Linux RT, QNX, Windows, and OSX. It is designed with production and ISO 26262 certification in mind and is CPU-bound on x86_64 and amd64 architectures. A variety of LiDAR sensors are already integrated and tested.

Read more about the products on the Apex.AI website.

Alphabet’s Waymo to launch the world’s first commercial self driving cars next month
Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race
Indeed lists top 10 skills to land a lucrative job, building autonomous vehicles

Savia Lobo
06 Mar 2019
2 min read

NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference

The National Security Agency released the Ghidra toolkit today at the RSA security conference in San Francisco. Ghidra is a free software reverse engineering (SRE) framework developed by NSA's Research Directorate for NSA's cybersecurity mission. Ghidra helps in analyzing malicious code and malware like viruses, and can also provide cybersecurity professionals with a better understanding of potential vulnerabilities in their networks and systems.

“The NSA's general plan was to release Ghidra so security researchers can get used to working with it before applying for positions at the NSA or other government intelligence agencies with which the NSA has previously shared Ghidra in private”, ZDNet reports.

News of Ghidra’s anticipated release broke at the start of 2019, and users have been looking forward to it since. This is because Ghidra is a free alternative to IDA Pro, a similar reverse engineering tool which is only available under an expensive commercial license, priced in the range of thousands of US dollars per year.

NSA cybersecurity advisor Rob Joyce said that Ghidra is capable of analyzing binaries written for a wide variety of architectures, and can be easily extended with more if ever needed.

https://twitter.com/RGB_Lights/status/1103019876203978752

Key features of Ghidra

- Ghidra includes a suite of software analysis tools for analyzing compiled code on a variety of platforms including Windows, Mac OS, and Linux
- It includes capabilities such as disassembly, assembly, decompilation, graphing and scripting, and hundreds of other features
- Ghidra supports a wide variety of processor instruction sets and executable formats and can be run in both user-interactive and automated modes
- With Ghidra, users may develop their own plug-in components and/or scripts using the exposed API

To know more about the Ghidra cybersecurity tool, visit its documentation on the GitHub repo or its official website.

Security experts, Wolf Halton and Bo Weaver, discuss pentesting and cybersecurity [Interview]
Hackers are our society’s immune system – Keren Elazari on the future of Cybersecurity
5 lessons public wi-fi can teach us about cybersecurity
Vincy Davis
21 May 2019
4 min read

GDPR complaint in EU claims billions of personal data records leaked via online advertising bids

Last year, a GDPR complaint was filed against Google and other ad auction companies regarding a data breach. The complaint alleged that tech companies broadcasted people’s personal data to dozens of companies, without proper security, through a mechanism of “behavioural ads”. The complaint was filed by a host of privacy activists and the pro-privacy browser firm Brave.

This year in January, new evidence emerged indicating the broadcasted data includes information about people’s ethnicity, disabilities, sexual orientation, and more. This sensitive information allows advertisers to specifically target incest or abuse victims, or those with eating disorders. This complaint was filed by an anti-surveillance NGO, the Panoptykon Foundation. The initial complaints were filed in Ireland, the UK, and Poland.

Yesterday, a new GDPR complaint about Real-Time Bidding (RTB) in the online advertising industry was filed with Data Protection Authorities in Spain, the Netherlands, Belgium, and Luxembourg. In total, seven EU countries have raised the GDPR issue, in the same week that marked one year since Europe’s General Data Protection Regulation (GDPR) came into force. The complaints were lodged by Gemma Galdon Clavell, Diego Fanjul, David Korteweg, Jef Ausloos, Pierre Dewitte, and Jose Belo. The complaints suggest Google and other major companies have leaked personal data to the “Ad Tech” industry on a vast scale.

https://twitter.com/mikarv/status/1130374705440018433

How the RTB system is used for the data breach

According to the complaint, Google’s DoubleClick, recently renamed “Authorized Buyers”, is active on 8.4 million websites and broadcasts personal data about their visitors to over 2,000 companies. Google uses its Real-Time Bidding (RTB) system for this: every time a person visits one of these web pages, intimate personal data about the user and what they are viewing is broadcasted in a “bid request”.
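The mechanics of such a bid request can be sketched in a few lines. The dictionary below is an illustrative, hand-written example loosely following the OpenRTB convention used by ad exchanges; every value in it (the URL, category, location, and user code) is invented for illustration and is not taken from the complaint.

```python
import json

# Hand-written illustration of a "bid request" of the kind described
# above. Field names loosely follow the OpenRTB convention; all values
# are invented.
bid_request = {
    "id": "example-auction-1",
    "site": {
        "page": "https://example.com/some-article",
        "cat": ["IAB7"],  # inferred content category of the page being viewed
    },
    "device": {
        "ua": "Mozilla/5.0 (...)",            # browser user agent
        "geo": {"lat": 52.52, "lon": 13.40},  # the visitor's location
    },
    "user": {
        "id": "pseudonymous-user-code-123",   # a unique per-user code
    },
}

# Serialized and broadcast to every company invited to bid; the
# complaint's point is that nothing controls what the recipients
# do with this data afterwards.
payload = json.dumps(bid_request)
```

The privacy concern is visible in the structure itself: the page being read, the visitor's location, and a stable per-user identifier all travel together in a single message sent to thousands of bidders.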
These requests are then sent to hundreds of other companies to solicit bids from potential advertisers for the opportunity to show an ad to that specific visitor. The data includes people’s exact locations and inferred religious, sexual, and political characteristics. It also includes what users are reading, watching, and listening to online, and a unique code, which links to an 'Expression of Interest' section on a website. The next biggest ad exchange is AppNexus, owned by AT&T, which conducts 131 billion personal data broadcasts every day.

Once the data is broadcasted, there is no control over what happens to it thereafter. Google has a self-regulatory guideline for companies that rely on its broadcasts, according to which companies should inform Google if they are breaking any rules. Google has assured that over 2,000 companies are “certified” in this way. However, Google DoubleClick/Authorized Buyers sends intimate personal information about virtually every single online person to these companies, billions of times a day. This is one of the most massive leakages of personal data recorded so far, as it occurs hundreds of billions of times every day.

In a statement to Fix AdTech, the CEO of Eticas, Gemma Galdon Clavell, said, “We hope that this complaint sends a strong message to Google and those using Ad Tech solutions in their websites and products. Data protection is a legal requirement that must be translated into practices and technical specifications.”

Google will be fined heavily for not complying with GDPR

Under the GDPR, a company is not permitted to use personal data unless it tightly controls what happens to that data. Article 5(1)(f) requires that personal data be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorized or unlawful processing and against accidental loss.”

The largest GDPR fine ever issued, amounting to 50M euros, went to Google. In January, the French data protection watchdog CNIL alleged that the search engine giant was breaking GDPR rules around transparency. It also reported that Google did not have a valid legal basis when processing people's data for advertising purposes. Meanwhile, Google is still appealing the fine.

Many users on Hacker News have varied opinions regarding the need for regulation and about the credibility of the GDPR. A user states, “To be clear, I think some privacy regulation is necessary, but there seems to be some kind of dissonance. People want a service, but are unwilling to pay for it nor give their data. Then they complain to the government that they should be able to get the service without payment anyway.”

Another user added, “From a user perspective, GDPR has no impact so far. I am still being tracked to death wherever I go. Neither do companies offer me a way to get the data they have about me.”

GAO recommends for a US version of the GDPR privacy laws
ProtonMail shares guidelines to help organizations achieve EU GDPR compliance
As US-China tech cold war escalates, Google revokes Huawei’s Android support, allows only those covered under open source licensing

Amarabha Banerjee
05 Jun 2018
4 min read

The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab

Microsoft has acquired GitHub in a major deal worth $7.5 billion. Not only has this put the open source community in a frenzy, but it has also opened up different options for developers and programmers who don’t want to share their project and code details with Microsoft.

There is a history to this particular behavior of the open source community towards Microsoft. Firstly, let’s reframe the question - what is the fear that’s causing the migration? Microsoft has a well-known habit of acquiring promising open source projects and then slowly letting them die. They even had a name for the strategy: ‘Embrace, Extend, Extinguish’. That’s a key reason open source developers dread Microsoft. The other factor is Microsoft’s history of using their patents to sue open source projects. For these reasons, open source developers have traditionally avoided Microsoft and their products.

The other side of the argument is that Microsoft is not the same company it used to be in its approach to open source, mainly due to the change in their leadership team. Their present focus has also shifted from operating systems to the cloud, building Azure solutions, and promoting Office 365. They have recently open sourced their scripting language PowerShell in an attempt to lure open source developers under the organizational umbrella. In short, Microsoft is attempting an image makeover, and the GitHub deal might be yet another attempt to give open source developers a bigger umbrella and more resources to develop production-ready applications.

Whatever the actual reason, it’s pretty clear what’s on open source developers’ minds. As per the latest tweet from GitLab, the rate of new repositories being added to GitLab has increased significantly since Monday, the 4th of June. The snapshot below shows the spike in new repositories posted on GitLab.

Search trends for both GitHub and GitLab have also spiked since the acquisition news broke, which clearly shows a huge spike in chatter on the topic. GitLab itself started pushing a trend called #movingtogitlab, and because the incoming traffic reached exceptionally high volumes, their servers crashed for a brief period of time. GitLab posted a video tutorial called “Migrating from GitHub to GitLab” on the 3rd of June, which has already reached 22.5k views - suggesting that at least 20k people have looked into exporting their GitHub projects to GitLab. That said, let’s look at the number of active users for both platforms: while GitHub has around 24 million active users, GitLab is at a modest 100k. So the exodus of a few thousand might not make a significant dent in GitHub’s user base.

On one hand, the markets have rejoiced over the news of the Microsoft acquisition of GitHub, boosting Microsoft’s stock price well above 101 USD. On the other hand, the overall feeling towards this acquisition has been quite pessimistic among the developer community, to say the least. The deal still has to go through regular auditing to check whether the norms for a standard acquisition were followed, among other details. The completion of this deal will happen only around December 2018, and the questions remain whether Microsoft will be getting the same GitHub that they bought, and what this deal will mean for GitLab.

The question on everyone’s mind right now is: will Microsoft act as GitHub’s owner or steward? Will GitHub become the de facto leader for code sharing and a pioneer in open source development? Or will other tools like GitLab, SourceForge, and Bitbucket take advantage of the situation and come to the forefront? The most interesting and positive outcome of this scenario would be Microsoft itself emerging as a leader in open source projects, which would mean more funds and resources for useful and viable tech research and development, and probably a brighter future for the tech world.

Microsoft is going to acquire GitHub
10 years of GitHub
Is Comet the new Github for Artificial Intelligence?

Natasha Mathur
30 Jul 2018
2 min read

AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

AWS announced support for two new actions, redirect and fixed-response, for Elastic Load Balancing in the Application Load Balancer last week.

Elastic Load Balancing offers automatic distribution of incoming application traffic. The traffic is distributed across targets such as Amazon EC2 instances, IP addresses, and containers. One of the types of load balancers that Elastic Load Balancing offers is the Application Load Balancer. The Application Load Balancer simplifies and improves the security of your application as it uses only the latest SSL/TLS ciphers and protocols. It is best suited for load balancing of HTTP and HTTPS traffic and operates at the request level (layer 7). Redirect and fixed-response support simplifies the deployment process while leveraging the scale, availability, and reliability of Elastic Load Balancing.

Let’s discuss how these latest features work.

The new redirect action enables the load balancer to redirect incoming requests from one URL to another. This includes redirecting HTTP requests to HTTPS requests, allowing more secure browsing, better search ranking, and a higher SSL/TLS score for your site. Redirects also help move users from an old version of an application to a new version.

The fixed-response actions help control which client requests are served by your applications. They let you respond to incoming requests with HTTP error response codes as well as custom error messages from the load balancer, with no need to forward the request to the application.

If you use both redirect and fixed-response actions in your Application Load Balancer, the customer experience and the security of your user requests are improved considerably. Redirect and fixed-response actions are now available for your Application Load Balancer in all AWS regions. For more details, check out the Elastic Load Balancing documentation page.
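The two actions can be sketched as listener rule actions. The dictionaries below mirror the shape of the Elastic Load Balancing API's rule actions (for example, the Actions parameter of the `elbv2` `create_rule` call); the specific status codes and message body are illustrative choices, and the surrounding API call, listener ARN, and match conditions are omitted.

```python
# Redirect action: send HTTP requests to the HTTPS equivalent with a
# permanent (301) redirect.
redirect_action = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "StatusCode": "HTTP_301",
    },
}

# Fixed-response action: answer matching requests directly from the
# load balancer with a custom message, without forwarding the request
# to the application.
fixed_response_action = {
    "Type": "fixed-response",
    "FixedResponseConfig": {
        "StatusCode": "503",
        "ContentType": "text/plain",
        "MessageBody": "Service temporarily unavailable.",
    },
}
```

In a real rule, these action dictionaries would be attached to a listener together with conditions that decide which requests they apply to.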
Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
Build an IoT application with AWS IoT [Tutorial]
Vincy Davis
07 May 2019
3 min read

OpenAI: Two new versions and the output dataset of GPT-2 out!

Today, OpenAI released new versions of GPT-2, their AI model capable of generating coherent paragraphs of text without needing any task-specific training. The release includes a medium 345M-parameter version and the small 117M version of GPT-2. They have also shared the 762M and 1.5B versions with partners in the AI and security communities who are working to improve societal preparedness for large language models.

The first version, GPT, was released in 2018. In February 2019, OpenAI announced GPT-2 with many samples and policy implications.

Read More: OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words

The team at OpenAI has decided on a staged release of GPT-2: the gradual release of the family of models over time. The reason behind the staged release is to give people time to assess the properties of these models, discuss their societal implications, and evaluate the impacts of release after each stage.

The 345M-parameter version of GPT-2 has improved performance relative to the 117M version, though it still falls short of the larger versions in the ease of generating coherent text, which also makes the 345M version more difficult to misuse. Many factors were considered when releasing this staged 345M version: the ease of generating coherent text, the role of humans in the text generation process, the likelihood and timing of future replication and publication by others, evidence of use in the wild and expert-informed inferences about unobservable uses, and more. The team is hopeful that ongoing research on bias, detection, and misuse will give them the confidence to publish larger models, and in six months they will share a fuller analysis of language models’ societal implications and their heuristics for release decisions.

The team at OpenAI is looking for partnerships with academic institutions, non-profits, and industry labs that will focus on increasing societal preparedness for large language models. They are also open to collaborating with researchers working on language model output detection, bias, and publication norms, and with organizations potentially affected by large language models.

The output dataset contains GPT-2 outputs from all 4 model sizes, with and without top-k truncation, as well as a subset of the WebText corpus used to train GPT-2. The dataset features approximately 250,000 samples per model/hyperparameter pair, which should be sufficient to help a wider range of researchers perform quantitative and qualitative analysis.

To know more about the release, head over to the official release announcement.

OpenAI introduces MuseNet: A deep neural network for generating musical compositions
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes
OpenAI Five bots destroyed human Dota 2 players this weekend
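The top-k truncation mentioned above keeps only the k most probable tokens at each generation step before sampling, which makes the output less random. A toy sketch of the idea (this is not OpenAI's implementation, and the logits below are made up):

```python
import math
import random

def top_k_sample(logits, k, rng=random.Random(0)):
    """Sample one token index from `logits`, truncated to the k
    highest-scoring candidates (toy sketch of top-k truncation)."""
    # Indices of the k largest logits; every other token is discarded.
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    # Unnormalized softmax weights over the surviving candidates
    # (shifted by the max for numerical stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    # Draw one surviving index in proportion to its weight.
    return rng.choices(top, weights=weights, k=1)[0]

# Made-up logits over a 5-token vocabulary.
logits = [2.0, 0.5, -1.0, 3.0, 0.0]
choice = top_k_sample(logits, k=2)  # only indices 3 and 0 can be drawn
```

With k equal to the vocabulary size this reduces to ordinary sampling; small k trades diversity for coherence, which is why the dataset includes samples generated both with and without it.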

Savia Lobo
19 Aug 2019
7 min read

Security flaws in Boeing 787 CIS/MS code can be misused by hackers, security researcher says at Black Hat 2019

At the Black Hat 2019 security conference in Las Vegas, Ruben Santamarta, a Principal Security Consultant at IOActive, said in his presentation that there are vulnerabilities in the Boeing 787 Dreamliner’s components which could be misused by hackers. The security flaws are in the code for a component known as the Crew Information Service/Maintenance System. “The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots,” according to the blog of Bruce Schneier, a public-interest technologist.

Boeing, however, strongly disagreed with Santamarta’s findings, saying that such an attack is not possible, and rejected Santamarta’s “claim of having discovered a potential path to pull it off.” Santamarta says, “An attacker could potentially pivot from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors.” According to Wired, “Santamarta himself admits that he doesn't have a full enough picture of the aircraft—or access to a $250 million jet—to confirm his claims.”

In a whitepaper Santamarta released earlier this month, he points out that in September 2018 a publicly accessible Boeing server was identified using a simple Google search, exposing multiple files. On further analysis, the exposed files were found to contain parts of the firmware running on the Crew Information System/Maintenance System (CIS/MS) and Onboard Networking System (ONS) for the Boeing 787 and 737 models respectively. These included documents, binaries, and configuration files. A Linux-based virtual machine used to allow engineers to access part of Boeing’s network was also available.
“The research presented in this paper is based on the analysis of information from public sources, collected documents, and the reverse engineering work performed on the 787’s CIS/MS firmware, which has been developed by Honeywell, based on a regular (nonavionics, non-certified, and non-ARINC-653-compliant) VxWorks 6.2 RTOS (x86) running on a Commercial Off The Shelf (COTS) CPU board (Pentium M),” the whitepaper states.

Santamarta identified three networks in the 787: the Open Data Network (ODN), the Isolated Data Network (IDN), and the Common Data Network (CDN). The ODN talks to the outside world, handling communication with potentially dangerous devices. The IDN handles secure devices, but not necessarily ones that are connected to aircraft safety systems; a flight data recorder is an example. Santamarta described the CDN as the "backbone communication of the entire network," connecting to electronics that could impact the safety of the aircraft.

According to PCMag, “Santamarta was clear that there are serious limitations to his research, since he did not have access to a 787 aircraft. Still, IOActive is confident in its findings. "We have been doing this for many years, we know how to do this kind of research."” Santamarta said, "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing, on the other hand, denies the claims put forward by Santamarta and says they do not represent any real threat of a cyberattack. In a statement to Wired, Boeing writes, "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system." The statement further reads, "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments.
IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

"Although we do not provide details about our cybersecurity measures and protections for security reasons, Boeing is confident that its airplanes are safe from cyberattack," the company's statement concludes.

In a follow-up call with Wired, a Boeing spokesperson said that “in investigating IOActive's claims, Boeing had gone so far as to put an actual Boeing 787 in "flight mode" for testing, and then had its security engineers attempt to exploit the vulnerabilities that Santamarta had exposed. They found that they couldn't carry out a successful attack.”

Further, according to Wired, Boeing also consulted with the Federal Aviation Administration and the Department of Homeland Security about Santamarta's attack hypothesis. The DHS didn't respond to a request for comment, but an FAA spokesperson wrote in a statement to Wired that it's "satisfied with the manufacturer’s assessment of the issue."

The Boeing fleet has been in the news for quite some time, ever since Boeing's grounded 737 MAX 8 aircraft killed a total of 346 people in two fatal air crashes, in October last year and in March this year.

Stefan Savage, a computer science professor at the University of California at San Diego, said, "The claim that one shouldn't worry about a vulnerability because other protections prevent it from being exploited has a very bad history in computer security." Savage is currently working with other academic researchers on an avionics cybersecurity testing platform. "Typically, where there's smoke there's fire," he adds.
Per Wired, “The Aviation Industry Sharing and Analysis Center shot back in a press release that his findings were based on "technical errors." Santamarta countered that the A-ISAC was "killing the messenger," attempting to discredit him rather than address his research.” PCMag writes, “Santamarta is skeptical. He conceded that it's possible Boeing added mitigations later on, but says there was no evidence of such protections in the code he analyzed.”

A reader on Schneier’s blog post suggests that Boeing should let Santamarta’s team conduct a test, for the betterment of passengers: “I really wish Boeing would just let them test against an actual 787 instead of immediately dismissing it. In the long run, it would work out way better for them, and even the short term PR would probably be a better look.”

Another reader commented on Schneier’s blog post about lax FAA standards: “Reading between the lines, this would infer that FAA/EASA certification requires no penetration testing of an aircrafts systems before approving a new type. That sounds like “straight to the scene of the accident” to me…”

A user who is responsible for maintaining 787s wrote on Hacker News, “Unlike the security researcher, I do have access to multiple 787s as I am one of many people responsible for maintaining them. I'm obviously not going to attempt to exploit the firmware on an aircraft for obvious reasons, but the security researcher's notion that you can "pivot" from the in flight entertainment to anything to do with aircraft operation is pure fantasy.” He further added, “These systems are entirely separate, including the electricity that controls the systems. This guy is preying on individuals' lack of knowledge about aircraft mechanics in order to promote himself.”

Another user on Hacker News shared, “I was flying about a year ago and was messing with the in flight entertainment in a 787. It was pretty easy to figure out how to get to a boot menu in the in flight entertainment.
I was thinking "huh, this seems like maybe a way in". Seeing how the in-flight entertainment displays navigational data, it must be on the same network as the flight systems. I'm sure there is some kind of segregation but it’s probably not ultimately secure.”

Savage tells Wired, "This is a reminder that planes, like cars, depend on increasingly complex networked computer systems. They don't get to escape the vulnerabilities that come with this."

To know more about this news, read the whitepaper by the IOActive team. You can also head over to Wired’s detailed analysis.

“Deep learning is not an optimum solution for every problem faced”: An interview with Valentino Zocca
4 common challenges in Web Scraping and how to handle them
Microsoft workers protest the lethal use of Hololens2 in the $480m deal with US military