
Tech News

Mozilla funds winners of the 2018 Creative Media Awards for highlighting unintended consequences of AI in society

Natasha Mathur
26 Oct 2018
5 min read
Mozilla announced funding for the seven winning projects of its 2018 Creative Media Awards earlier this week. These projects use art and advocacy to highlight the unintended and indirect consequences of artificial intelligence in our everyday lives. The Creative Media Awards are an initiative Mozilla runs to support and promote a healthy internet ecosystem. Mozilla announced in June this year that it would award $225,000 to the winning technologists and media makers. "We're seeking projects that explore artificial intelligence and machine learning. In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it's more important than ever to support artwork and advocacy work that educates and engages internet users", reads the Mozilla awards page.

The Creative Media Awards are a part of the NetGain Partnership, a collaboration between Mozilla, the Ford Foundation, the Knight Foundation, the MacArthur Foundation, and the Open Society Foundation. The winners of the seven projects come from five different countries: the U.S., the U.K., the Netherlands, India, and the United Arab Emirates (UAE). The winners used science fiction, short documentaries, games, and other media to make the "impact of Artificial Intelligence on society understandable". All seven projects will launch by June 2019. Let's have a look at them.

Stealing Ur Feelings

Stealing Ur Feelings will be an interactive documentary by Noah Levenson in the U.S. Levenson has been awarded $50,000 as a prize. The documentary will explore how an emotion recognition AI tracks whether you're happy or sad, and will reveal how companies use that data to influence your behavior.

Do Not Draw a Penis

Do Not Draw a Penis by Moniker in the Netherlands addresses automated censorship and algorithmic content moderation. Moniker has also been awarded $50,000 as a prize. In Do Not Draw a Penis, users visit a web page and are met with a blank canvas. On that canvas, they can draw whatever they want, and an AI voice will comment on their drawings (such as "nice landscape!"). However, if the drawing resembles a penis or other explicit content, the AI will scold the user and destroy the image.

A Week With Wanda

A Week With Wanda by Joe Hall from the UK will be a web-based simulation of the risks and rewards attached to artificial intelligence. Hall has been awarded $25,000 as a prize. Wanda is an AI assistant that interacts with users over the course of one week to "improve" their lives. Wanda might send "uncouth" messages to Facebook friends, order you anti-depressants, or even freeze your bank account; however, Wanda's actions are simulated, not real.

Survival of the Best Fit

Survival of the Best Fit by Alia ElKattan in the United Arab Emirates is a web-based simulation of how blind use of AI during the hiring process reinforces workplace inequality. ElKattan has been awarded $25,000 as a prize. Survival of the Best Fit presents users with an algorithm to experience how white-sounding names are prioritized, among other related biases.

The Training Commission

The Training Commission is a web-based fiction by Ingrid Burrington and Brendan Byrne in the U.S. The team was awarded $25,000 as a prize. The Training Commission tells stories of AI's unintended consequences and harms to public life.

What Do You See?

What Do You See? by Suchana Seth from India highlights how differently humans and algorithms "see" the same image, and how easily bias can kick in. Seth has been awarded $25,000 as a prize. In What Do You See?, people visit a website and describe an image in their own words, without the help of prompts, and then see how an image-captioning algorithm describes the same image.

Mate Me or Eat Me

Mate Me or Eat Me is a dating simulator by Benjamin Berman in the U.S. Berman has also been awarded $25,000 as a prize. Mate Me or Eat Me examines how exclusionary real dating apps can be. Users create a monster and mingle with others, swiping right and left to either mate with or eat them. Users are also given insight into how their choices impact who they see next, as well as who has been excluded from their pool of potential paramours.

These seven awardees were selected based on quantitative scoring of their applications by a review committee comprising Mozilla staff, current and alumni Mozilla Fellows, and outside experts. Diversity in applicant background, past work, and medium was also considered during the selection process.

For more information, read the official Mozilla Blog.

Mozilla announces $3.5 million award for 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates
Is Mozilla the most progressive tech organization on the planet right now?
To bring focus on the impact of tech on society, an education in humanities is just as important as STEM for budding engineers, says Mozilla co-founder

Chrome 71 Beta is here with a relative time format and more

Prasad Ramesh
26 Oct 2018
3 min read
Chrome 71 Beta was released yesterday with international relative time formats and other features.

A relative time format in Chrome 71 Beta

Phrases like "yesterday" or "in three months" are common in writing and speech, but they are not part of the built-in date and time APIs, so producing them directly in code has not been possible. To fill this need, libraries provide localized versions of such phrases, but using them requires downloading the phrases for each supported language, increasing bundle size and download time. Chrome 71 Beta has a new Intl.RelativeTimeFormat() API which shifts this work to the JavaScript engine. Examples of how this works:

    const rtf = new Intl.RelativeTimeFormat('en');
    rtf.format(3.14, 'second'); // → 'in 3.14 seconds'
    rtf.format(-15, 'minute');  // → '15 minutes ago'

This API can even do things like retrieving information for multiple languages and dealing with parts of a date or time individually. For more details, visit the Chrome post.

Some other features in Chrome 71 Beta

FullscreenOptions

The Element.requestFullscreen() method can now be customized on Android with an optional options parameter. Its navigationUI parameter allows choosing between keeping the navigation bar visible and a completely immersive mode where no controls are shown until a gesture is performed to bring out the navigation. A sketch of this parameter appears after this article's links.

Support for new font types

Chrome 71 Beta brings support for COLR/CPAL fonts, a type of OpenType color font composed of layers. The layers are made of vector outline glyphs and color palette information. With this addition, Chrome now supports three cross-platform color font formats; the other two are CBDT/CBLC and SBIX.

MediaElement and MediaStream nodes defined only for AudioContext

Chrome 71 Beta allows creation of MediaElementAudioSourceNode, MediaStreamAudioSourceNode, and MediaStreamAudioDestinationNode elements exclusively by using an AudioContext. In older versions these could be created using an OfflineAudioContext, but that is no longer supported: for spec compliance, the real-time nature of these nodes cannot be met by an OfflineAudioContext.

Some interoperability improvements

For interoperability with other browsers, Chrome now calls capture event listeners in the capturing phase at shadow hosts, whereas previously this was done in the bubbling phase. Chrome now also calculates specificity for the :host() and :host-context() pseudo-classes, as well as for the arguments of ::slotted(), making it compliant with the Shadow DOM v1 spec.

Items removed in Chrome 71 Beta

The items removed in this version are:

- SpeechSynthesis.speak without user activation
- importScripts() of new scripts after service worker installation
- prefixed versions of several APIs
- URL.createObjectURL for MediaStream
- document.origin

For a complete list of features and details, visit the Chromium Blog.

Chrome 70 releases with support for Desktop Progressive Web Apps on Windows and Linux
Chrome V8 7.0 is in beta, to release with Chrome 70
Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these
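Here is the FullscreenOptions sketch referenced above. It follows the Fullscreen API spec's navigationUI member ("hide" requests the immersive mode, "show" keeps the navigation visible); treat it as an illustration rather than code from the Chrome post.

    // Ask for fullscreen on a video element, hiding Android's navigation UI.
    // requestFullscreen() returns a promise that rejects if the request fails.
    const video = document.querySelector('video');
    video.requestFullscreen({ navigationUI: 'hide' })
      .then(() => console.log('Entered immersive fullscreen'))
      .catch((err) => console.error('Fullscreen request failed:', err));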

React introduces Hooks, a JavaScript function to allow using React without classes

Bhagyashree R
26 Oct 2018
3 min read
Yesterday, the React community introduced Hooks, a new feature proposal which has landed in React 16.7.0-alpha. With the help of Hooks, you will be able to "hook into" React state and other React features from function components. Notably, Hooks don't work inside classes; instead, they let you use React without classes.

Why are Hooks being introduced?

Easily reuse stateful logic across components

Currently, there is no way to attach reusable behavior to a component. Patterns like render props and higher-order components try to solve this problem to some extent, but you need to restructure your components to use them. Hooks make it easier to extract stateful logic from a component so it can be tested independently and reused, all without having to change your component hierarchy. You can also easily share Hooks among many components or with the community.

Splitting components based on related logic

In React, each lifecycle method often contains a mix of unrelated logic: mutually related code that changes together gets split apart, while completely unrelated code ends up combined in a single method. This can introduce bugs and inconsistencies. Rather than forcing a component split based on lifecycle methods, Hooks allow you to split a component into smaller functions based on which pieces are related.

Use React without classes

One of the hurdles people face while learning React is classes. You need to understand how "this" works in JavaScript, which is very different from how it works in most languages, and you have to remember to bind event handlers. Hooks solve these problems by letting you use more of React's features without classes. React components have always been closer to functions, and Hooks embrace functions without sacrificing the practical spirit of React. With Hooks, you get access to imperative escape hatches that don't require you to learn complex functional or reactive programming techniques.

Hooks are completely opt-in and 100% backwards compatible. Going by the ongoing discussion on the RFC, community feedback so far seems positive, and this feature, currently in an alpha release, is planned for React 16.7.

To know more about React Hooks in detail, check out the official announcement.

React 16.6.0 releases with a new way of code splitting, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
RxDB 8.0.0, a reactive, offline-first, multiplatform database for JavaScript released!
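To make the proposal above concrete, here is a minimal sketch of a function component using the useState Hook from the 16.7.0-alpha API; the component and variable names are illustrative.

    import React, { useState } from 'react';

    // A stateful component written without a class.
    function Counter() {
      // useState returns the current state and a setter; 0 is the initial value.
      const [count, setCount] = useState(0);
      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }

    export default Counter;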

Qt Design Studio 1.0 released with Qt Photoshop Bridge, timeline-based animations and Qt Live Preview

Natasha Mathur
26 Oct 2018
2 min read
The Qt team released Qt Design Studio 1.0 yesterday. Qt Design Studio 1.0 introduces features such as the Qt Photoshop Bridge, timeline-based animations, and Qt Live Preview, among others. Qt Design Studio is a UI design and development environment which allows designers and developers around the world to rapidly prototype and develop complex and scalable UIs. Let's discuss the features of Qt Design Studio 1.0 in detail.

Qt Photoshop Bridge

Qt Design Studio 1.0 comes with the Qt Photoshop Bridge, which allows users to import their graphics designs from Photoshop. Users can also create reusable components directly in Photoshop, and exporting directly to specific QML types is supported as well. In addition, the Qt Photoshop Bridge comes with an enhanced import dialog and basic merging capabilities.

Timeline-based animations

Timeline-based animations in Qt Design Studio 1.0 come with a timeline-/keyframe-based editor that allows designers to create pixel-perfect animations without writing a single line of code. You can map and organize the relationship between timelines and states to create smooth transitions from state to state, and selecting multiple keyframes is also supported.

Qt Live Preview

Qt Live Preview lets you run and preview your application or UI directly on the desktop, Android devices, and Boot2Qt devices, so you can see how your changes affect the UI live on the target device. It also includes zoom in and out functionality.

Other features

You can insert a Qt 3D Studio element and preview it on the end target device with Qt Live Preview. There is a Qt Safe Renderer integration that lets you use Safe Renderer items and map them in your UI. You can also use states and timelines to create screen flows and transitions.

Qt Design Studio is free; however, you will need a commercial Qt developer license to distribute the UIs created with it. For more information, check out the official Qt Design Studio blog.

Qt 3D Studio 2.1 released with new sub-presentations, scene preview, and runtime improvements
Qt Creator 4.8 beta released, adds language server protocol
Qt Creator 4.7.0 releases!

Rust 1.30 releases with procedural macros and improvements to the module system

Sugandha Lahoti
26 Oct 2018
3 min read
Yesterday, the Rust team released a new version of the Rust systems programming language, known for its safety, speed, and concurrency. Rust 1.30 comes with procedural macros, module system improvements, and more.

It has been an incredibly successful year for the Rust programming language in terms of popularity. It jumped from being the 46th most popular language on GitHub last year to the 18th position this year. The 2018 RedMonk Programming Language Rankings marked Rust's entry into their Top 25 list, and it topped the list of most loved programming languages in the 2018 Stack Overflow developer survey for the third year in a row. Still not satisfied? Here are 9 reasons why Rust programmers love Rust.

Key improvements in Rust 1.30

Procedural macros are now available

Procedural macros allow for more powerful code generation. Rust 1.30 introduces two kinds of advanced macros, "attribute-like procedural macros" and "function-like procedural macros."

Attribute-like macros are similar to custom derive macros, but instead of generating code only for the #[derive] attribute, they allow you to create new, custom attributes of your own. They're also more flexible: derive only works for structs and enums, but attributes can go in other places, like functions. Function-like macros define macros that look like function calls. Developers can now also bring macros into scope with the use keyword.

Updates to the module system

The module system has received significant improvements to make it more straightforward and easy to use. In addition to bringing macros into scope, the use keyword has two other changes.

First, external crates are now in the prelude. Previously, moving a function to a submodule could break some of its code. Now, when resolving a path, the compiler checks whether the first part of the path names an extern crate, and if it does, uses it regardless of where the code sits in the module hierarchy.

Second, use supports bringing items into scope with paths starting with crate. Previously, paths specified after use would always start at the crate root, while paths referring to items directly would start at the local path, meaning the behavior of paths was inconsistent. Now, the crate keyword at the start of a path indicates that the path should start at the crate root. Together, these changes make it more straightforward to understand how paths resolve.

Other changes

- Developers can now use keywords as identifiers using the raw identifiers syntax (r#), e.g. let r#for = true;
- Using anonymous parameters in traits is now deprecated with a warning and will be a hard error in the 2018 edition.
- Developers can now match visibility keywords (e.g. pub, pub(crate)) in macros using the vis specifier.
- Non-macro attributes now allow all forms of literals, not just strings. Previously, you would write #[attr("true")]; now you can write #[attr(true)].
- Developers can now specify a function to handle a panic in the Rust runtime with the #[panic_handler] attribute.

These are just a select few updates. For more information and code examples, go through the Rust Blog.

3 ways to break your Rust code into modules
Rust as a Game Programming Language: Is it any good?
Rust 2018 RC1 now released with Raw identifiers, better path clarity, and other changes

Filestack Workflows comes with machine learning capabilities to help businesses manage their digital images

Sugandha Lahoti
25 Oct 2018
3 min read
Filestack has come up with Filestack Workflows, a machine-learning-powered solution to help businesses detect, analyze, moderate, and curate content in scalable and automated ways.

Filestack has traditionally provided tools for companies to handle content as it is uploaded: checking for NSFW content, cropping photos, performing copyright detection on Word docs, and so on. However, handling content at scale using tools built in-house was proving difficult, as businesses relied heavily on developers to implement the code or set up a chain of events. This led Filestack to develop a new interface that allows businesses to upload, moderate, transform, and understand content at scale, freeing them to innovate more and manage less.

The Filestack Workflows platform is built on logic-driven intelligence which uses machine learning to provide quick analysis of images and return actionable insights. This includes object recognition and detection, explicit content detection, optical character recognition, and copyright detection. Filestack Workflows can be integrated either through Filestack's own API or through a simple user interface.

Workflows also has several new features that extend far beyond simple image transformation:

- Optical Character Recognition (OCR) allows users to extract text from any given image. Images of everything from tax documents to street signs can be uploaded through the system, which returns the raw text of all characters in the image.
- Not Safe for Work (NSFW) Detection filters out content that is not appropriate for the workplace. The image tagging feature can automate content moderation by assigning "safe for work" and "not safe for work" scores.
- Copyright Detection determines if a file is an original work. A single API call will display the copyright status of one or multiple images.

Filestack has also released a quick demo to highlight the features of Filestack Workflows. The demo creates a Workflow that takes uploaded content (images or documents), determines the filetype, and then curates "safe for work" images, using the following logic (sketched in code after this article's links):

- If it is an 'Image':
  - Determine if the image is 'Safe for Work'.
  - If it is 'Safe', store it to a specific storage source.
  - If it is 'Not Safe', pixelate the image, then store it to a specific storage source for modified images.
- If it is a 'Document', store it to a specific storage source for documents.

Read more about the news on Filestack's blog.

Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN
How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows
Datasets and deep learning methodologies to extend image-based applications to videos
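The demo's routing logic described above can be sketched in plain JavaScript. This is purely illustrative: isSafeForWork, pixelate, and store are hypothetical stand-ins for the platform's tasks, not actual Filestack Workflows API calls.

    // Hypothetical sketch of the demo's branching logic; the helpers are
    // stand-ins, not real Filestack Workflows calls.
    async function routeUpload(file) {
      if (file.mimetype.startsWith('image/')) {
        if (await isSafeForWork(file)) {
          await store(file, 'images');              // safe images
        } else {
          const pixelated = await pixelate(file);   // obscure explicit content
          await store(pixelated, 'modified-images');
        }
      } else {
        await store(file, 'documents');             // documents get their own store
      }
    }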
GitHub updates developers and policymakers on EU copyright Directive at Brussels

Savia Lobo
25 Oct 2018
2 min read
On Tuesday, the 16th of October, GitHub hosted Open Source and Copyright: from Industry 4.0 to SMEs in Brussels. Partnering with OpenForum Europe and Red Hat, the event was designed to raise awareness of the EU Copyright Directive among developers and policymakers. GitHub has made its position on the controversial legislation clear, saying that while "current copyright laws are outdated in many respects and need modernization, we are concerned that some aspects of the EU's proposed copyright reform package would inadvertently affect software."

The event included further discussion on topics such as:

- Policy: For GitHub, Abby Vollmer shared how developers have been especially effective in getting policymakers to respond to problems with the copyright proposal, and asked them to continue reaching out to policymakers about a technical fix to protect open source.
- Developers: Evis Barbullushi from Red Hat explained why open source is so fundamental to software and critical to the EU, using examples of what open source powers every day. He also highlighted the world-class and commercially mainstream nature of open source.
- SMEs: Sebastiano Toffaletti (from the European Digital SME Alliance) described concerns about the copyright proposal from the perspective of SMEs, including how efforts to regulate large platforms can end up harming SMEs even if they're not the target.
- Research and academia: Roberto Di Cosmo (Software Heritage) wrapped up the talks by noting that he "should not be here, because, in a world in which software was better understood and valued, policymakers would never introduce a proposal that inadvertently puts software at great risk", and by motivating developers to fix this underlying problem.

In its previous EU copyright proposal update, GitHub explained that the EU Council, Parliament, and Commission were ready to begin final-stage negotiations of the copyright proposal. These three institutions are now working on the exceptions to copyright for text and data mining (Article 3), among other technical elements of the proposal. Article 13 would likely drive many platforms to use upload filters on user-generated content; since Article 2 defines which services are in the scope of Article 13, Articles 2 and 13 will be discussed together. This means developers can still contact policymakers with thoughts on what outcomes are best for software development.

The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.
GitHub Business Cloud is now FedRAMP authorized
What we learnt from the GitHub Octoverse 2018 Report

Bitbucket goes down for over an hour

Natasha Mathur
25 Oct 2018
2 min read
Bitbucket, a web-based version control repository hosting service that allows users to manage and share their Git repositories as a team, suffered an outage today. As per Bitbucket's incident page, the outage started at 8 AM UTC today and lasted for over an hour, until 9:02 AM UTC, before the service finally returned to its normal state.

The Bitbucket team tweeted regarding the outage:
https://twitter.com/BitbucketStatus/status/1055372361036312576

It was only earlier this week that GitHub went down for a complete day due to a failure in its data storage system. In GitHub's case, there was no obvious way to tell the site was down, as the website's backend git services were working; however, users were not able to log in, outdated files were being served, branches went missing, and users were unable to submit Gists, bug reports, and posts, among other issues. Bitbucket, by contrast, was completely broken during the entirety of the outage, as all the services from Pipelines to actually getting at the code were down, and the site plainly showed an "Internal Server" error.

Bitbucket hasn't spoken out regarding the real cause of the outage. However, as per the Bitbucket status page, the site had been experiencing elevated error rates and degraded functionality for the past two days, which could be a possible reason.

After the outage was over, Bitbucket tweeted about the recovery:
https://twitter.com/BitbucketStatus/status/1055384158392922112

As the services were down, developers and coders around the world took to Twitter to vent their frustration:
https://twitter.com/HeinrichCoetzee/status/1055370890127519744
https://twitter.com/montakurt/status/1055372412651495424
https://twitter.com/CapAmericanec/status/1055370560606294016

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence

The LLVM project is ditching SVN for GitHub. The migration to GitHub has begun.

Prasad Ramesh
25 Oct 2018
2 min read
The official LLVM monorepo was published on GitHub on Tuesday. Now is a good time to modify your workflows to use the monorepo as soon as possible: any current SVN-based workflows will be supported for at most one more year.

The move from SVN to GitHub for LLVM had long been under consideration. After positive responses in the mailing list threads favoring a move to GitHub, the LLVM community has finally set the migration plan in motion. Two round-table meetings were held this week with the developers to discuss the SVN-to-GitHub migration. Below are some highlights of these meetings.

The most important outcome of the meetings is an agreed-upon timeline for completing the transition:

- The latest monorepo prototype has been moved over to the LLVM organization's GitHub project and has now begun mirroring the current SVN repository. Commits will still be made to the SVN repository just as they are currently done.
- All community members are advised to begin migrating workflows that rely on SVN or the current git mirrors to use the new monorepo. CI jobs or internal mirrors that pull from SVN or http://llvm.org/git/*.git should be modified to pull from the new monorepo instead, with changes made so they also work with the new repository layout.
- Developers are advised to begin using the new monorepo for development. The provided scripts should help with committing code; they let you commit to SVN from the monorepo without having to use git-svn.
- In a year, commit access to the SVN server will be turned off and commit access to the monorepo will be enabled. At that point, the monorepo will be the only source for the project.

Keep an eye on the LLVM monorepo GitHub repository. There is a getting started guide for working with the GitHub monorepo, and for more details you can take a look at the mailing list.

LLVM will be relicensing under Apache 2.0 start of next year
A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring

Data Theorem launches two automated API security analysis solutions - API Discover and API Inspect

Sugandha Lahoti
25 Oct 2018
2 min read
Data Theorem, a company that delivers mobile app security to developers, has launched an automated API discovery and security analysis solution to address API security threats prevalent in enterprise serverless and microservices applications. The solution allows developers to integrate API discovery and security assessment into their DevOps practices and CI/CD processes to protect any modern application.

Data Theorem has come up with two new products: API Discover and API Inspect. These tools address security concerns such as shadow APIs, serverless applications, and API gateway cross-check validation by conducting continuous security assessments on API authentication, encryption, source code, and logging.

The new API security solutions support Amazon's Lambda and API Gateway tools to discover modern APIs and to compute their specification using standards such as Swagger and OpenAPI 3.0. They alert users to important and critical vulnerabilities caused by insufficient security protection, as well as to newly created APIs built on serverless frameworks, and deliver continuous, automated security analysis of those newly created APIs.

"Data Theorem uniquely addresses threat models related to modern apps, helping us identify issues related to privacy and application-layer attacks and the potential loss of sensitive data," said Rich Tener, Director of Security for Evernote, a note-taking app. He further adds, "With Data Theorem, we have continuous security testing in place for all of our apps in the app stores. Traditional API security checks are not enough in our environment. The new API discovery and analysis products Data Theorem has delivered are truly differentiated – I haven't seen anyone else in the industry building automated API security services like this."

Data Theorem's new API Discover and API Inspect security tools are available from the Data Theorem website. Annual pricing starts at $300 per API operation.

How the Titan M chip will improve Android security
How to stop hackers from messing with your home network (IoT)
IBM launches Industry's first 'Cybersecurity Operations Center on Wheels' for on-demand cybersecurity support
Tim Cook talks about privacy, supports GDPR for USA at ICDPPC, ex-FB security chief calls him out

Prasad Ramesh
25 Oct 2018
4 min read
Apple CEO Tim Cook advocated for data privacy, calling it a fundamental human right and an idea the company stands behind. Soon after, Facebook's ex-security chief called him out on the speech in a series of tweets.

Cook on privacy

Cook delivered a keynote speech at the ongoing International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels, Belgium. He laid out his ideas on data privacy and praised the EU's successful implementation of GDPR. The Apple CEO is in full support of a GDPR-like policy coming to the US: "We at Apple are in full support of a comprehensive federal privacy law in the United States." There are four essentials to such a law, he said:

- The right to have personal data minimized
- The right to knowledge
- The right to access
- The right to security

He talked about how data collection has become a trade of sorts: "Today that trade has exploded into a data industrial complex. Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency."

Cook did not explicitly mention any companies in his speech, but he was likely referring to the Facebook Cambridge Analytica scandal and to Google being fined over privacy in the EU. There was also a recent Senate hearing on consumer data privacy. Cook added, "In the news almost every day, we bear witness to the harmful, even deadly, effects of these narrowed worldviews. We shouldn't sugarcoat the consequences. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them."

Cook on artificial intelligence

Cook believes that for artificial intelligence to be truly smart, it should respect human values, including privacy. He went on to say that achieving great artificial intelligence systems with great privacy standards is not just a possibility but a responsibility, and that we should not lose "humanity" in the pursuit of artificial intelligence: "For artificial intelligence to be truly smart, it must respect human values, including privacy." How a system that makes decisions heavily based on data can avoid using people's data, or at least obscure it, is something to think about.

Ex-Facebook security chief on Cook's speech

Alex Stamos, ex-Facebook security chief and currently an adjunct professor, said that he agrees with almost everything Cook had to say. On Twitter, however, Stamos pointed to Apple blocking the download of VPNs and of encrypted messaging apps in China, which could have given Chinese citizens a way to connect to the open internet and send private messages. Also, data on iCloud is supposed to be end-to-end encrypted, but Apple's Chinese partner Guizhou-Cloud Big Data stores iCloud data on Chinese government-run servers, which creates the possibility of access to user data. He tweeted: "We don't want the media to create an incentive structure that ignores treating Chinese citizens as less-deserving of privacy protections because a CEO is willing to bad-mouth the business model of their primary competitor, who uses advertising to subsidize cheaper devices."

https://twitter.com/alexstamos/status/1055192743033458688
https://twitter.com/alexstamos/status/1055192747970191360

Whether data can really be weaponized against us comes down to who has control over it. Viewed objectively, yes, it can be. But it is the responsibility of the tech giants collecting, using, and controlling that data to use it responsibly and keep it safe. It is understandable that a free-to-use model runs on user data, but companies should respect that data and the people from whom they collect it. There are also some efforts towards mobile OSes that promote privacy. In a world where everything is online, where we share personal details in our profiles and use free services in exchange for our own data, complete privacy seems like a luxury.

Apple now allows U.S. users to download their personal data via its online privacy data portal
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"
Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these

Cathay Pacific, a major Hong Kong based airline, suffers data breach affecting 9.4 million passengers

Natasha Mathur
25 Oct 2018
2 min read
A major Hong Kong based international airline, Cathay Pacific Airways Limited, revealed yesterday that it has discovered unauthorized access to data belonging to as many as 9.4 million Cathay passengers. This data includes passenger names, nationalities, dates of birth, phone numbers, email addresses, passport numbers, identity card numbers, customer service remarks, and historical travel information. Moreover, 403 expired credit card numbers and 27 credit card numbers with no CVV were also accessed.

Cathay Pacific has its head office and main hub at Hong Kong International Airport and serves flights across North America, Europe, China, Taiwan, Japan, Southeast Asia, and the Middle East.

The company has taken immediate measures to investigate the data breach further. So far, Cathay hasn't found any evidence of misuse of personal information. The airline also mentioned that while part of its IT security processes were affected by the breach, the flight operations systems, which are insulated from the IT security systems, remain uncompromised.

Cathay Pacific posted about the data breach on Twitter:
https://twitter.com/cathaypacific/status/1055117720444854273

"We are very sorry for any concern this data security event may cause our passengers. We acted immediately to contain the event, commence a thorough investigation with the assistance of a leading cybersecurity firm, and to further strengthen our IT security measures", said Rupert Hogg, CEO, Cathay Pacific.

Cathay is currently contacting the affected passengers through multiple communications channels and is providing them with information on steps they can take to protect themselves. "We have no evidence that any personal data has been misused. No-one's travel or loyalty profile was accessed in full, and no passwords were compromised. Cathay Pacific has notified the Hong Kong Police and is notifying the relevant authorities. We want to reassure our passengers that we took and continue to take measures to enhance our IT security. The safety and security of our passengers remain our top priority", said Hogg.

Timehop suffers data breach; 21 million users' data compromised
Facebook's largest security breach in its history leaves 50M user accounts compromised
Facebook says only 29 million and not 50 million users were affected by last month's security breach

Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud

Melisha Dsouza
25 Oct 2018
3 min read
Earlier this week, Google announced plans to launch a Cloud Robotics platform for developers in 2019. Since the early onset of cloud robotics in 2010, Google has explored various aspects of the field. Now, with the launch of the Cloud Robotics platform, Google will combine the power of AI, robotics, and the cloud to deploy cloud-connected collaborative robots. The platform will encourage efficient robotic automation in highly dynamic environments. The core infrastructure of the platform will be open source, and users will pay only for the services they use.

Features of the Cloud Robotics platform:

#1 Critical infrastructure

The platform will introduce secure and robust connectivity between robots and the cloud. Kubernetes will be used for the management and distribution of digital assets, and Stackdriver will assist with logging, monitoring, alerting, and dashboarding. Developers will gain access to Google's data management and AI capabilities, ranging from Cloud Bigtable to Cloud AutoML. Standardized data types and open APIs will help developers build reusable automation components; open APIs also support interoperability, which means integrators can compose end-to-end solutions with collaborative robots from different vendors.

#2 Specialized tools

The tools provided with the platform will help developers build, test, and deploy software for robots with ease. Automation solutions can be composed and deployed in customers' environments through system integrators, and operators can monitor robot fleets and ongoing missions. Users pay only for the services they use, and if a user decides to move to another cloud provider, they can take their data with them.

#3 Fostering powerful first-party services and third-party innovation

Google's initial Cloud Robotics services can be applied to use cases like robot localization and object tracking. The services will process sensor data from multiple sources and use machine learning to obtain information and insights about the state of the physical world. This will encourage an ecosystem of hardware and applications that can be used and re-used for collaborative automation.

#4 Industrial automation made easy

Industrial automation requires extensive custom integration. Collaborative robots can help improve the flexibility of the overall process, save costs, and avoid vendor lock-in. That said, it is difficult to program robots to understand and react to the unpredictable changes of the physical human world. The Cloud Robotics platform will address these issues by providing flexible automation services such as the Cartographer, Spatial Intelligence, and Object Intelligence services.

Watch this video to know more about these services:
https://www.youtube.com/watch?v=eo8MzGIYGzs&feature=youtu.be

Alternatively, head over to Google's blog to know more about this announcement.

What's new in Google Cloud Functions serverless platform
Cloud Filestore: A new high performance storage option by Google Cloud Platform
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
gRPC, a CNCF backed JavaScript library and an alternative to the REST paradigm, is now generally available

Bhagyashree R
25 Oct 2018
3 min read
Yesterday, the Cloud Native Computing Foundation (CNCF) announced the general availability of gRPC-Web, which means it is now stable enough for production use. gRPC-Web is a JavaScript client library that allows web apps to communicate directly with backend gRPC services, without the need for an intermediate HTTP server. It serves as an alternative to the REST paradigm of web development.

What is gRPC?

(Image source: gRPC)

Initially developed at Google, gRPC is an open source remote procedure call (RPC) framework that can run in any environment. gRPC allows a client application to directly call methods on a server application on a different machine as if it were a local object. gRPC is based on the idea of defining a service by specifying the methods that can be called remotely, along with their parameter and return types. To handle client calls, the server implements this interface and runs a gRPC server. On the client side, the client has a stub that provides the same methods as the server.

One of the advantages of gRPC is that clients and servers can be written in any of the languages gRPC supports. So, for instance, you can easily create a gRPC server in Java with clients in Go, Python, or Ruby.

How does gRPC-Web work?

With gRPC-Web, you can define a service "contract" between client web applications and backend gRPC servers using .proto definitions and auto-generate client JavaScript. Here is how it works:

1. Define the gRPC service: Similar to other gRPC services, gRPC-Web uses protocol buffers to define its RPC methods and their request and response message types.
2. Run the server and proxy: You need a gRPC server that implements the service interface and a gateway proxy that allows the client to connect to the server.
3. Write the JavaScript client: Once the server and gateway are up and running, you can start making gRPC calls from the browser (a minimal client sketch follows this article's links).

What are the advantages of using gRPC-Web?

Using gRPC-Web eliminates some tasks from the development process:

- Creating custom JSON serialization and deserialization logic
- Wrangling HTTP status codes
- Content type negotiation

Its advantages include the following:

End-to-end gRPC: gRPC-Web allows you to officially remove the REST component from your stack and replace it with pure gRPC. Replacing REST with gRPC helps in scenarios where a client request goes to an HTTP server which then interacts with five backend gRPC services.

Tighter coordination between frontend and backend teams: As the entire RPC pipeline is defined using protocol buffers, you no longer need your "microservices teams" working alongside a separate "client team." The interaction between the client and the backend is just one more gRPC layer among others.

Generate client libraries easily: With gRPC-Web, the server that interacts with the "outside" world is a gRPC server instead of an HTTP server, which means all of your service's client libraries can be gRPC libraries. If you need client libraries for Ruby, Python, Java, and four other languages, you no longer have to write HTTP clients for all of them.

You can read CNCF's official announcement on its website.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
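Here is the minimal client sketch referenced above. It assumes a hypothetical greeter.proto service whose stubs (greeter_pb.js and greeter_grpc_web_pb.js, following the generator's naming convention) have already been produced with protoc and the gRPC-Web plugin, and a gateway proxy listening on localhost:8080; treat all the names as illustrative.

    // Stubs generated from a hypothetical greeter.proto; names are illustrative.
    const { GreeterClient } = require('./greeter_grpc_web_pb.js');
    const { HelloRequest } = require('./greeter_pb.js');

    // The client talks to the gateway proxy, which forwards to the gRPC server.
    const client = new GreeterClient('http://localhost:8080');

    const request = new HelloRequest();
    request.setName('World');

    // Unary call: request, call metadata, then a Node-style callback.
    client.sayHello(request, {}, (err, response) => {
      if (err) {
        console.error('RPC failed:', err.message);
      } else {
        console.log('Greeting:', response.getMessage());
      }
    });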

3D Secure v2: a new authentication protocol supported by Stripe for frictionless authentication and better user experience

Natasha Mathur
25 Oct 2018
3 min read
Earlier this week, Stripe, an online software platform that builds and runs flexible tools for internet commerce around the world, wrote about EMV 3-D Secure (3D Secure v2), a new version of the 3D Secure protocol released by EMVCo. 3D Secure v2 overcomes the shortcomings of 3D Secure v1, introducing frictionless authentication and a better user experience. Let's have a look at these features.

Frictionless authentication

With 3D Secure v2, businesses and their payment providers can securely send over 100 data elements on every transaction to the cardholder's bank. This includes payment data, such as the shipping address, as well as contextual data, such as the customer's device ID or previous transaction history. The cardholder's bank uses this information to assess the risk level of the transaction and selects an appropriate response. If the bank trusts the data provided during the payment, the transaction follows the "frictionless" flow and the cardholder sees no sign of 3D Secure being applied. If the bank decides it needs further proof, the transaction follows the "challenge" flow, where the customer is asked to provide additional input to verify that the payment is authentic.

Better user experience

3D Secure v2 comes with new mobile SDKs that let businesses implement native flows within their apps, without requiring customers to switch to a browser-based flow to complete the transaction. The new SDKs also make it easy for customers to authenticate a payment using their mobile banking apps: the SDK detects whether the bank's app is installed on the customer's device and automatically opens it during the 3D Secure flow without requiring any customer interaction. The customer can then authenticate the payment with a password, fingerprint, or facial recognition.

EMVCo expects the first banks to support 3D Secure v2 for their cardholders in early 2019. Wider implementation among banks will be incremental and will take several months. "We're keen to support 3D Secure v2 for businesses using Stripe as soon as possible, so we're preparing the Stripe APIs to take full advantage of the 3D Secure v2 improvements", reads the announcement page.

For more information, check out the official announcement.

Amazon Cognito for secure mobile and web user authentication [Tutorial]
Google Titan Security key with secure FIDO two factor authentication is now available for purchase
Multi-Factor Authentication System – Is it a Good Idea for an App?