
Tech News

Amazon ML Solutions Lab to help customers “work backwards” and leverage machine learning

Abhishek Jha
23 Nov 2017
3 min read
For years, Amazon has been using machine learning and deep learning to make product recommendations, sharpen internal algorithms, and boost supply chain, forecasting, and capacity planning. Now the e-commerce giant is giving customers access to its rich pool of machine learning experts. It has announced a new collaboration and education program, the Amazon ML Solutions Lab, to connect machine learning experts from across Amazon with AWS customers. The idea is to accelerate the application of machine learning within those organizations, and the program could help AWS partners develop new machine learning-enabled features, products, and processes.

The Amazon ML Solutions Lab combines hands-on educational workshops with brainstorming sessions to help customers "work backwards" from business challenges, and then go step by step through the process of developing machine learning-based products. Amazon machine learning experts will help customers prepare data, build and train models, and put models into production. At the end of the program, customers will be able to take what they learned through the process and use it elsewhere in their organization.

"By combining the expertise of the best machine learning scientists and practitioners at Amazon with the deep business knowledge of our customers, the Amazon ML Solutions Lab will help customers get up to speed on machine learning quickly, and start putting machine learning to work inside their organizations," says Swami Sivasubramanian, vice president of Amazon AI.

Taking customers through the full process of implementing machine learning, Amazon ML Solutions Lab programs will combine educational workshops and boot camps, advisory professional services, and hands-on help building custom models ready for deployment using customers' own data. Engagements can range from weeks to months depending on the nature of the solution. The program's format is flexible: customers can participate at a dedicated facility at AWS headquarters in Seattle, or Amazon can send machine learning model developers to a customer's site.

For organizations that already have data prepared for machine learning, AWS offers the ML Solutions Lab Express. This four-week intensive program starts with a boot camp hosted at Amazon, followed by three weeks of intensive problem-solving and machine learning model building with Amazon machine learning experts.

Meanwhile, the Washington Post (owned by Amazon CEO Jeff Bezos) is using the program to build models in areas such as comment moderation, keyword tagging, and headline generation. Johnson & Johnson and the World Bank Group are the other two customers joining in. "We recently reached out to the Amazon ML Solutions Lab to collaborate with our data scientists on a deep learning initiative," said Jesse Heap, Senior IT Manager at Janssen Inc. (the pharmaceutical companies of Johnson & Johnson), adding that Amazon's machine learning experts have been training Janssen's data scientists on applying deep learning to pharma-related use cases. The World Bank Group, meanwhile, said it is using the program "to leverage machine learning in our mission to end extreme poverty and promote shared prosperity."

As the big cloud providers compete to provide AI expertise to companies that cannot afford to duplicate advanced machine learning research, the Amazon ML Solutions Lab is a rather smart move by AWS. The educational initiative could well be a long-term business strategy.

A WordPress plugin vulnerability is leaking Twitter account information of users, making them vulnerable to compromise

Sugandha Lahoti
21 Jan 2019
3 min read
Baptiste Robert, a French security researcher who goes by the online handle Elliot Alderson, has found a vulnerability in a WordPress plugin called Social Network Tabs. The plugin, developed by Design Chemical to let websites help users share content on social media sites, leaks users' Twitter account information, exposing them to compromise. MITRE has assigned the vulnerability CVE-2018-20555.

Elliot described the details of the bug in a Twitter thread on Thursday. Per Elliot, the plugin leaks its users' Twitter access_token, access_token_secret, consumer_key, and consumer_secret, which can lead to a takeover of their Twitter accounts. The leak was caused by a few lines of code within the page where the Twitter widget is displayed: anyone who viewed the page source could see the linked Twitter handle and the access tokens, a check that is easy to script (see the sketch at the end of this article). If the access token had read/write rights, an attacker could also take over the account, and there were 127 such accounts.

Elliot tested the bug by searching PublicWWW, a website source code search engine, and found 539 websites using the vulnerable code. He then used a script to retrieve the access tokens, including the Twitter access_token, access_token_secret, consumer_key, and consumer_secret, from those 539 vulnerable websites. According to Elliot, the leak compromised over 446 Twitter accounts, including 2 verified accounts and multiple accounts with more than 10K followers. He has also made the full list of accounts public.

Elliot talked to TechCrunch about the vulnerability, saying that he had told "Twitter on December 1 about the vulnerability in the third-party plugin, prompting the social media giant to revoke the keys, rendering the accounts safe again. Twitter also emailed the affected users of the security lapse of the WordPress plugin but did not comment on the record when reached." However, this does not appear to have fully resolved the issue. On January 17, he mentioned in a tweet: "With a simple Google search query, "inurl:/inc/dcwp_twitter.php?1=", you can find that a lot of websites and so Twitter accounts are still vulnerable to this issue. This query returns 3550 results." He has also written a scraper to automatically extract the keys from the results of this Google search query.

SEC's EDGAR system hacked; allowing hackers to allegedly make a profit of $4.1 million via insider trading
Hyatt Hotels launches public bug bounty program with HackerOne
Black Hat hackers used IPMI cards to launch JungleSec Ransomware, affects most of the Linux servers
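The detection described above boils down to checking whether a page's HTML embeds Twitter credential keys. Below is a minimal Python sketch of that idea for auditing a site you own; the key names come from the article, but the matching pattern and script structure are illustrative assumptions, not Elliot's actual scraper.

```python
import re
import sys

import requests

# Credential keys the vulnerable plugin reportedly embedded in page source.
TOKEN_KEYS = ("access_token", "access_token_secret", "consumer_key", "consumer_secret")

def page_leaks_tokens(url: str) -> bool:
    """Fetch a page and report whether Twitter credential keys appear in its HTML."""
    html = requests.get(url, timeout=10).text
    return any(re.search(rf"{key}\s*[:=]", html) for key in TOKEN_KEYS)

if __name__ == "__main__":
    url = sys.argv[1]  # a page on your own site that renders the Twitter widget
    print(f"{url}: {'possible token leak' if page_leaks_tokens(url) else 'no credential keys found'}")
```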


ClojureCUDA 0.6.0 now supports CUDA 10

Prasad Ramesh
22 Nov 2018
2 min read
ClojureCUDA is a Clojure library that supports parallel computations on the GPU with CUDA. With this library, you can access high-performance computing and GPGPU from Clojure.

Installation
ClojureCUDA 0.6.0 now has support for the new CUDA 10. To start using it:
- Install the CUDA 10 Toolkit
- Update your drivers
- Update the ClojureCUDA version in project.clj
All the existing code should work without requiring any changes.

CUDA and libraries
CUDA is the most used environment for high-performance computing on NVIDIA GPUs. You can now use CUDA directly from the interactive Clojure REPL without having to wrangle with the C++ toolchain. High-performance libraries like Neanderthal take advantage of ClojureCUDA to deliver speed dynamically to Clojure programs. With these higher-level libraries, you can perform fast calculations with just a few lines of Clojure; you don't even have to write the GPU code yourself (a rough Python analogue is sketched at the end of this article). But writing the lower-level GPU code is also not so difficult in an interactive Clojure environment.

ClojureCUDA features
The ClojureCUDA library has the following features:

High-performance computing: CUDA enables various hardware optimizations on NVIDIA GPUs. Users can access the leading CUDA libraries for numerical computing like cuBLAS, cuFFT, and cuDNN.

Optimized for Clojure: ClojureCUDA is built with a focus on Clojure. The interface and functions fit into a functional style and are aligned to number crunching with CUDA.

Reusable: The library closely follows the CUDA driver API, so users can easily translate examples from the best CUDA books.

Free and open source: It is licensed under the Eclipse Public License (EPL), the same license used for Clojure. ClojureCUDA and the other libraries by uncomplicate are open source. You can choose to contribute on GitHub or donate on Patreon.

For more details and code examples, visit Dragan's blog.

Clojure 1.10.0-beta1 is out!
Stable release of CUDA 10.0 out, with Turing support, tools and library changes
NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux
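The "fast calculations in a few lines, without writing GPU code yourself" pitch is easiest to see with a runnable analogue. ClojureCUDA itself is driven from Clojure, so as an illustration here is a rough Python equivalent using CuPy rather than ClojureCUDA's own API; it assumes a CUDA-capable GPU and the cupy package installed.

```python
import cupy as cp  # NumPy-like API whose operations run as CUDA kernels on the GPU

# Allocate two large vectors directly in GPU memory.
x = cp.random.rand(10_000_000)
y = cp.random.rand(10_000_000)

# A few lines of number crunching; each expression dispatches a GPU kernel.
z = 3.0 * x + y
print(float(z.sum()))  # reduce on the GPU, copy a single scalar back to the host
```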

Facebook confessed another data breach; says it “unintentionally uploaded” 1.5 million email contacts without consent

Amrata Joshi
18 Apr 2019
3 min read
Facebook has been on the radar for quite some time now, with each month bringing some major blunder by the company with respect to privacy. Last month Facebook opened up about exposing millions of user passwords in plain text. Recently, one of Facebook's shareholders backed a proposal to remove Mark Zuckerberg from his position as the board chairperson. And last evening, Facebook broke the news that it may have "unintentionally uploaded" the email contacts of 1.5 million new users on its site since May 2016, without their consent.

What exactly happened at Facebook
This news comes after a security researcher highlighted that Facebook was asking some users to enter their email passwords when they signed up for new accounts to verify their identities, a move widely condemned by security experts. And it seems that the list of affected users is not limited to the United States.
https://twitter.com/robaeprice/status/1118668162378035200

In a statement to CNBC, a Facebook spokesperson said, "We've fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage contacts they share with Facebook in their settings."

According to a report by Business Insider, when a user entered their email password, a message popped up saying that Facebook was "importing" their contacts, without asking for permission first.

The official statement from Facebook reads, "Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time. When we looked into the steps people were going through to verify their accounts we found that in some cases people's email contacts were also unintentionally uploaded to Facebook when they created their account. We estimate that up to 1.5 million people's email contacts may have been uploaded. These contacts were not shared with anyone and we're deleting them. We've fixed the underlying issue and are notifying people whose contacts were imported. People can also review and manage the contacts they share with Facebook in their settings."

Facebook's justification
According to Facebook, the platform used to have a step in the account verification process where some users had the option to confirm their email address and then voluntarily import their email contacts onto Facebook. The idea behind the feature was to help users find their friends easily and also improve ads. When this process was redesigned in May 2016, the text that explained the step was removed but the feature remained intact. So the email contacts were still being uploaded to the site without users being aware of the fact.

With the company confessing to such data lapses repeatedly and stricter legislation coming into place, Facebook might face serious consequences in the near future.

Facebook shareholders back a proposal to oust Mark Zuckerberg as the board's chairperson
Facebook AI introduces Aroma, a new code recommendation tool for developers
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Oculus Connect 5 2018: Day 1 highlights include Oculus Quest, Vader Immortal and more!

Natasha Mathur
27 Sep 2018
5 min read
Facebook Oculus' annual virtual reality developer conference, Oculus Connect 5 2018, started yesterday, the 26th of September, in San Jose, California. It is a two-day event that ends today. The conference brings together VR developers from all around the world with expertise in a variety of platforms and mediums, to push collaboration and share ideas that can move the VR industry forward. Day 1 of Oculus Connect 5 was full of exciting news and announcements, including the launch of the Oculus Quest VR headset, Oculus platform updates, YouTube VR coming to Oculus Go, a three-part Vader Immortal Star Wars VR experience, and much more! Let's have a look at some of these exciting announcements.

Oculus Quest VR headset revealed
Oculus Quest is the first all-in-one VR gaming system by Oculus. It will be launched in Spring 2019. It comes with six degrees of freedom (6DOF) and Touch controllers. The best part is that it doesn't come with the additional baggage of a PC, wires, and external sensors. The Oculus team also presented 'Oculus Insight', which uses four ultra wide-angle sensors along with computer vision algorithms to track your exact position in real time without requiring any external sensors. Oculus Quest includes similar optics to Oculus Go, along with a display resolution of 1600x1440 per eye. It also incorporates a lens spacing adjustment for enhanced visual comfort, and the built-in audio system has been improved with boosted bass for higher-quality sound effects. At its launch in Spring 2019, Oculus Quest will have a library of 50 VR apps, including popular games like Robo Recall, The Climb, and Moss.

Oculus Go getting YouTube VR
Another exciting announcement made at the Oculus Connect 5 event was Oculus bringing YouTube VR to its Oculus Go VR headset. The Oculus Go headset will now have access to more than 800,000 360-degree videos as a result of a partnership with Google's YouTube service. The YouTube VR service will launch on the headset soon, as announced by Oculus Go product manager Sean Liu at the conference.

Oculus Venues adding NBA pro basketball games
Oculus Venues, a concert and sports events app by Oculus, has added one of the world's most popular sports to the platform: NBA pro basketball games. As per the announcement, NBA games will be available in Oculus Venues later this year. Viewers attending an NBA game in VR will get a custom virtual jersey of their favorite team, which their avatar will be able to wear for the rest of the game season. Apart from that, Oculus Venues is planning on adding a lot more live music, movie marathons, and stand-up comedy.

Oculus Go cast support
Another announcement made at the conference was cast support being added to Oculus Go. This will let Go users stream their gameplay experience to other screens. As of now, the main focus is on enabling users to cast the Go screen to mobile devices; however, casting to TV screens will be available too in the near future, as mentioned by Sean Liu.

A three-part Star Wars series: Vader Immortal
Another super exciting announcement made at Oculus Connect 5 was the arrival of the first episode of a three-part VR series called Vader Immortal. The series is based on Darth Vader and will premiere with the upcoming Oculus Quest headsets. Vader Immortal is being created with David S. Goyer, the screenwriter behind films like The Dark Knight trilogy. It explores the events that take place in Secrets of the Empire and between Revenge of the Sith and A New Hope, and will also explore Darth Vader's castle as seen in Rogue One: A Star Wars Story.

Gaming announcements
Apart from the 50 titles for the Oculus Quest at launch, a bunch of other game announcements were made at Oculus Connect 5 2018:
- Insomniac Games' Stormland, an action-adventure game that lets you play as a robot, will be launched in 2019.
- A first look at Ready At Dawn's Lone Echo II, the sequel to the critically acclaimed Lone Echo.
- Vox Machinae is out: the mech simulator game had a surprise launch yesterday for Oculus Rift. Fans of giant robots and elaborate cockpits will love this game!

Other Oculus platform announcements
- Work is ongoing on Avatars. Expressive Avatars with better eye and mouth movements are expected later this year.
- The Oculus mobile app has added support for Rift, giving you access to events, friends, and the Oculus store directly from your phone.

For more information on Oculus Quest, visit the official Oculus Quest website.

What's new in VR Haptics?
Game developers say Virtual Reality is here to stay
Why mobile VR sucks

GitLab 11.2 releases with preview changes in Web IDE, Android project import and more

Fatema Patrawala
24 Aug 2018
2 min read
GitLab released version 11.2 with new features to help developers get started and iterate faster. Major improvements in this version are enhancements to the Web IDE, support for manifest files to import Android projects, and custom project templates. Let us look at each in detail:

Preview changes in Web IDE
Contributing changes to your projects with an advanced code editor and commit staging right within your browser becomes faster and easier with the new Web IDE version. With GitLab 11.2 you can easily see the effect of your code change and debug even before you commit: you can now preview your JavaScript web app in the Web IDE, viewing your changes in real time, right next to the code, for client-side evaluation. In addition, with 11.2, you can delete and rename files and switch branches without ever leaving the Web IDE.

Android Project Import
Importing complex project structures with multiple sub-structures was a tedious, time-consuming task until now. With the new support for XML manifest files, you can now import larger project structures with multiple repositories all at once, including Android OS code from the Android Open Source Project (AOSP). A sketch of such a manifest appears at the end of this article.

Simplified Cloud Native and more features
To help you quickly install GitLab on Kubernetes, the Cloud Native Helm Chart is now generally available. A GitLab Runner is deployed, making it easy to get started with GitLab CI/CD. With 11.2, GitLab administrators can offer instance-wide custom project templates, allowing users to start new projects quickly by automating repetitive setup tasks. Features such as issue board milestone lists, summed weights for issue board lists, group milestones on the milestone dashboard page, and todos for epics enable better work management. Many of the changes and improvements were contributed by the GitLab community itself.

Check out the GitLab page for more details.

GitLab is moving from Azure to Google Cloud in July
GitLab open sources its Web IDE in GitLab 10.7
GitLab's new DevOps solution
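GitLab's Android import consumes the same XML manifest format used by Google's repo tool, referenced in the Android Project Import section above. A minimal sketch of such a manifest follows; the remote, paths, and project names are illustrative assumptions, not taken from GitLab's documentation.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- Where the repositories live and the default revision to track -->
  <remote name="aosp" fetch="https://android.googlesource.com/" />
  <default remote="aosp" revision="master" />

  <!-- One <project> element per repository to import -->
  <project path="build/make" name="platform/build" />
  <project path="frameworks/base" name="platform/frameworks/base" />
</manifest>
```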

RStudio 1.2 releases with improved testing and support for Python chunks, R scripts, and much more!

Amrata Joshi
06 May 2019
3 min read
Last week, the team behind RStudio released RStudio 1.2, which includes dozens of new productivity enhancements and capabilities. RStudio 1.2 is compatible with projects in SQL, Stan, Python, and D3. With this release, testing R code integrations for shinytest and testthat is easier. Users can create, test, and publish APIs in R with Plumber, and run R scripts as background jobs.

What's new in RStudio 1.2?

Python sessions
This release uses a shared Python session for executing Python chunks. It comes with simple bindings to access R objects from Python chunks and vice versa (sketched at the end of this article).

Keyring
In RStudio 1.2, passwords and secrets are stored securely with keyring by calling rstudioapi::askForSecret(). Users can install keyring directly from a dialog prompt.

Run R scripts
Users can now run any R script as a background job in a clean R session and watch the script's output in real time.

Testing with RStudio 1.2
Users can use the Run Tests command in testthat R scripts to run their tests directly. The testthat output in the Build pane now comes with a navigable issue list.

PowerPoint
Users can now create PowerPoint presentations with R Markdown.

Package management
With RStudio 1.2, users can specify a primary CRAN URL and secondary CRAN repos from the package preferences pane, and can link to a package's primary CRAN page from the Packages pane. The CRAN repos can be configured with a repos.conf configuration file and the r-cran-repos-file option.

Plumber
Users can now easily create Plumber APIs in RStudio 1.2 and execute them within RStudio to view Swagger documentation and make test calls to the APIs.

Bug fixes in RStudio 1.2
In this release, the issue regarding "invalid byte sequence" has been fixed, incorrect Git status displays have been rectified, and issues with low/no-contrast colors in HTML widgets have been fixed.

It seems most users are excited about this release; they think it will make Python more accessible to R users. A user commented on Hacker News, "I'm personally an Emacs Speaks Statistics fan myself, but RStudio has been huge boon to the R community. I expect that this will go a long ways towards making Python more accessible to R users."

Some are less happy with the release, finding it short on options for graphics. Another comment reads, "I wish rstudio would render markdown in-line. It also tends to forget graphics in output after many open and closes of rmd. I'm intrigued by .org mode but as far as I can tell, there are not options for graphical output while editing."

To know more about this news, check out the post by RStudio.

How to create your own R package with RStudio [Tutorial]
The new RStudio Package Manager is now generally available
Getting Started with RStudio
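The shared Python session described under "Python sessions" is powered by the reticulate package: a Python chunk in an R Markdown document sees R objects through the r object, and R code sees Python objects through py$. Below is a minimal sketch of the body of such a Python chunk; the data frame name df and the variable names are illustrative assumptions.

```python
# Body of an R Markdown {python} chunk: reticulate exposes R objects as r.<name>.
rows = len(r.df)          # read the R data frame `df` (converted to a pandas DataFrame)
result = {"rows": rows}   # an R chunk can later read this back as py$result
print(result)
```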

Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more

Vincy Davis
03 Jun 2019
4 min read
Last week, Apache Storm PMC announced the release of Storm 2.0.0. The major highlight of this release is that Storm has been re-architected in pure Java; previously a large part of Storm's core functionality was implemented in Clojure. This release also includes significant improvements in terms of performance, a new Streams API, windowing enhancements, and Kafka integration changes.

New architecture implemented in Java
With this release, Storm has been re-architected, with its core functionality implemented in pure Java. The new implementation has improved performance significantly and has also made internal APIs more maintainable and extensible. The previous implementation language, Clojure, often posed a barrier to entry for new contributors, so Storm's codebase will now be more accessible to developers who don't want to learn Clojure in order to contribute.

New high-performance core
Storm 2.0.0 has a new core featuring a leaner threading model, a blazing fast messaging subsystem, and a lightweight back pressure model. It has been designed to push boundaries on throughput, latency, and energy consumption while maintaining backward compatibility. This makes Storm 2.0 the first streaming engine to break the 1-microsecond latency barrier.

New Streams API
This version has a new typed API which expresses streaming computations more easily using functional-style operations. It builds on top of Storm's core spout and bolt APIs and automatically fuses multiple operations together, which helps in optimizing the pipeline.

Windowing enhancements
Storm 2.0.0's windowing API can now save/restore the window state to the configured state backend, which enables larger continuous windows to be supported. Also, the window boundaries can now be accessed via the APIs.

Kafka integration changes

Removal of storm-kafka
Due to Kafka's deprecation of the underlying client library, the storm-kafka module has been removed. Users will have to move to the storm-kafka-client module, which uses Kafka's kafka-clients library for integration.

Move to using the KafkaConsumer.assign API
Kafka's own subscription mechanism, which was used in Storm 1.x, has been removed entirely in 2.0.0. The storm-kafka-client subscription interface has also been removed, due to the limited control it offered over subscription behavior. It has been replaced with the 'TopicFilter' and 'ManualPartitioner' interfaces. For custom subscriptions, head over to the storm-kafka-client documentation, which describes how to customize assignment.

Other Kafka highlights
- The KafkaBolt now allows you to specify a callback that will be called when a batch is written to Kafka.
- The FirstPollOffsetStrategy behavior has been made consistent between the non-Trident and Trident spouts.
- storm-kafka-client now has a transactional non-opaque Trident spout.

Users have also been notified that the 1.0.x version line will no longer be maintained and are strongly encouraged to upgrade to a more recent release. Java 7 support has been dropped; Storm 2.0.0 requires Java 8.

There has been a mixed reaction from users over the changes in Storm 2.0.0. A few users are not happy with Apache dropping Clojure. As a user on Hacker News comments, "My team has been using Clojure for close to a decade, and we found the opposite to be the case. While the pool of applicants is smaller, so is the noise ratio. Clojure being niche means that you get people who are willing to look outside the mainstream, and are typically genuinely interested in programming. In case of Storm, Apache commons is run by Java devs who have zero interest in learning Clojure. So, it's not surprising they would rewrite Storm in their preferred language."

Some users think that the move away from Clojure shows that developers nowadays are unwilling to learn new things. As a user on Hacker News comments, "There is a false cost assigned to learning a language. Developers are too unwilling to even try stepping beyond the boundaries of the first thing they learned. The cost is always lower than they may think, and the benefits far surpassing what they may think. We've got to work at showing developers those benefits early; it's as important to creating software effectively as any other engineer's basic toolkit."

Others are quite happy with Storm moving to Java. A user on Reddit said, "To me, this makes total sense as the project moved to Apache. Obviously, much more people will be able to consider contributing when it's in Java. Apache goal is sustainability and long-term viability, and Java would work better for that."

To download Storm 2.0.0, visit the Storm downloads page.

Walkthrough of Storm UI
Storing Apache Storm data in Elasticsearch
Getting started with Storm Components for Real Time Analytics

Uber and GM Cruise are open sourcing their Autonomous Visualization Systems

Amrata Joshi
20 Feb 2019
4 min read
Yesterday, Uber and GM Cruise announced that they are open sourcing their respective autonomous visualization systems, a new way for the industry to understand and share its data.

Uber's Autonomous Visualization System (AVS) is a new standard for describing and visualizing autonomous vehicle perception, motion, and planning data, with a web-based toolkit for building applications. AVS acts as a standardized visualization layer that frees developers from building custom visualization software for their autonomous vehicles; with visualization abstracted away, developers can focus on core autonomy capabilities for drive systems, remote assistance, mapping, and simulation. It is free to use, which might encourage developers to build interesting tools for the autonomous industry.

Operators need to understand why their cars make certain decisions, and the visualization system helps engineers break out and play back certain trip intervals for closer inspection. AV operators often rely on off-the-shelf visualization systems that aren't designed with self-driving cars in mind and are usually limited to bulky desktop computers that are difficult to navigate. Uber has moved to a web-based visualization platform so operators don't have to learn complex computer graphics and data visualization.

Uber opted for XVIZ and streetscape.gl
Autonomous vehicle development is rapidly evolving, with new services, data sets, and many use cases that require new solutions. The team at Uber had unique requirements: they wanted to manage the data while retaining performance comparable to desktop-based systems. So the team built a system around two key pieces: XVIZ, which provides the data (including management and specification), and streetscape.gl, the component toolkit to power web applications. Uber's new tool seems to be geared to AV operators specifically. Talking about its Autonomous Visualization System, the company said, "It is a customizable web-based platform that allows self-driving technology developers — big or small — to transform their vehicle data into an easily digestible visual representation of what the vehicle is seeing in the real world."

XVIZ
XVIZ provides a stream-oriented view of a scene changing over time and a user interface display system. Users can randomly seek and understand the state of the world at that point in time. Like an HTML document, its presentation is focused and structured according to a schema that allows for introspection, which makes for easy exploration and interrogation of the data.

streetscape.gl
streetscape.gl is a toolkit for developing web applications that consume data in the XVIZ protocol. It offers components for visualizing XVIZ streams in 3D viewports, charts, tables, videos, and more. It addresses common visualization challenges such as time synchronization across data streams, coordinate systems, cameras, dynamic styling, and interaction with 3D objects across components.

Voyage co-founder Warren Ouyang said, "We're excited to use Uber's autonomous visualization system and collaborate on building better tools for the community going forward."

Last week, in a Medium post, Cruise introduced its graphics library for two- and three-dimensional scenes, called "Worldview". It provides 2D and 3D cameras, keyboard and mouse movement controls, click interaction, and a suite of built-in drawing commands. Developers can build custom visualizations easily, without having to learn complex graphics APIs or write wrappers to make them work with React. In a statement to Medium, Cruise said, "We hope Worldview will lower the barrier to entry into the powerful world of WebGL, giving web developers a simple foundation and empowering them to build more complex visualizations."

Uber releases Ludwig, an open source AI toolkit that simplifies training deep learning models for non-experts
Automation and Robots – Trick or Treat?
Home Assistant: an open source Python home automation hub to rule all things smart

LEGO launches BrickHeadz Builder AR, a new and free Android app to bring bricks and toys to life

Natasha Mathur
16 Jul 2018
3 min read
LEGO, the Danish toymaker, came out with a new, free augmented reality Android app named "BrickHeadz Builder AR" last week. Android users with the latest version of Google LLC's ARCore can now download the newly launched AR app on their phones. With the magic of augmented reality, users can interact virtually with tiny toy figures and building blocks of BrickHeadz toys, as the app brings kids' BrickHeadz toys to life.

The BrickHeadz line, launched in March 2017, includes classic LEGO bricks along with an instruction manual that directs fans to build characters with big heads and little bodies. It features characters from the very popular Marvel (such as Iron Man, Captain America, Black Widow, etc.), DC (Batman, Robin, Batgirl, The Joker) and Disney (Belle, The Beast, Captain Jack Sparrow) franchises. The company is also planning to expand the line with more characters. LEGO has always been on the lookout for ways to make the physical play experience even more fun by blending it with virtual play, according to Sean McEvoy, VP of digital games and apps at the LEGO Group. Let's have a look at what the new BrickHeadz Builder AR app is all about.

Key features
In the BrickHeadz Builder app, different LEGO-related creations such as characters and objects can be easily accessed, and these characters and objects can interact with each other in interesting ways. The app directs kids through the steps of construction from beginner to free builder. It enables them to discover new characters and objects by solving play formulas in a "magic book", which comes with tutorials and information on challenges that can earn rewards. Users can also personalize characters by playing with their behavior and outfits, and build their own objects with the building blocks in addition to the prebuilt characters and objects. Unlocking new characters and items is also possible by playing with your creations.

More industries are catching interest in AR apps these days, especially after the launch of Pokemon Go in 2016, which has managed to exceed $1.8 billion in revenue in the past two years. It is by far the most popular AR game ever released. There is also a VR version of the BrickHeadz Builder Android app that LEGO launched back in October last year; that product also allows children, and adults, to build and play with virtual LEGO blocks and characters. For iOS users, the company released LEGO AR-Studio last year in December.

The BrickHeadz Builder Android app is free, with no in-app purchases. All you need is the most recent version of ARCore running on Android 8.0 or later. The app can also run on a few qualified phones (such as the Asus Zenfone AR and LG V30) running Android 7.0 or later. For more coverage of the BrickHeadz Builder AR app, check out the official LEGO blog.

Niantic, of the Pokemon Go fame, releases a preview of its AR platform
Adobe glides into Augmented Reality with Adobe Aero
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

LLVM’s Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support, and more

Bhagyashree R
17 Sep 2019
2 min read
The stable release of LLVM 9.0 is expected in the next few weeks, along with subprojects like Clang 9.0. As per the release notes, the upcoming Clang 9.0 release will come with experimental support for C++17 features in OpenCL, asm goto support, and much more.

Read also: LLVM 9.0 RC3 is now out with official RISC-V support, updates to SystemZ and more

What's new coming in Clang 9.0.0

Experimental support for C++17 features in OpenCL
Clang 9.0.0 will have experimental support for C++17 features in OpenCL. The experimental support includes improved address space behavior in the majority of C++ features. There is support for OpenCL-specific types such as images, samplers, events, and pipes. Also, invoking global constructors from the host side is possible using a specific, compiler-generated kernel.

C language updates in Clang
Clang 9.0.0 includes the __FILE_NAME__ macro as a Clang-specific extension that is supported in all C-family languages. It is very similar to the __FILE__ macro, except that it will always provide the last path component when possible. Another C-language update is the initial support for asm goto statements to control flow from inline assembly to labels. This construct will be mainly used by the Linux kernel (CONFIG_JUMP_LABEL=y) and glib.

Building Linux kernels with Clang 9.0
With the addition of asm goto support, the mainline Linux kernel for x86_64 is now buildable and bootable with Clang 9. The team adds, "The Android and ChromeOS Linux distributions have moved to building their Linux kernels with Clang, and Google is currently testing Clang built kernels for their production Linux kernels."

Read also: Linux 4.19 kernel releases with open arms and AIO-based polling interface; Linus back to managing the Linux kernel

Build system changes
Previously, the install-clang-headers target installed clang's resource directory headers. With Clang 9.0, this installation is done by the install-clang-resource-headers target. "Users of the old install-clang-headers target should switch to the new install-clang-resource-headers target. The install-clang-headers target now installs clang's API headers (corresponding to its libraries), which is consistent with the install-llvm-headers target," the release notes read.

To know what else is coming in Clang 9.0, check out its official release notes.

Other news in Programming
Core Python team confirms sunsetting Python 2 on January 1, 2020
Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Following EU, China releases AI Principles

Vincy Davis
03 Jun 2019
5 min read
Last week, the Beijing Academy of Artificial Intelligence (BAAI) released a set of 15 principles calling for artificial intelligence to be beneficial and responsible, termed the Beijing AI Principles. They are proposed as an initiative for the research, development, use, governance, and long-term planning of AI, and form a well-described guideline on the principles to be followed for the research and development of AI, the use of AI, and the governance of AI.

The Beijing Academy of Artificial Intelligence (BAAI) is an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. The principles have been developed in collaboration with Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and China's three big tech firms: Baidu, Alibaba, and Tencent.

Research and Development
- Do Good: AI should be developed to benefit all humankind and the environment, and to enhance the well-being of society and ecology.
- For Humanity: AI should always serve humanity and conform to human values as well as the overall interests of humankind. It should never go against, utilize, or harm human beings.
- Be Responsible: Researchers developing AI should be aware of its potential ethical, legal, and social impacts and risks, and should take concrete actions to reduce and avoid them.
- Control Risks: AI systems should be developed in a way that ensures the security of data along with the safety and security of the AI system itself.
- Be Ethical: AI systems should be trustworthy, in the sense that they are traceable, auditable, and accountable.
- Be Diverse and Inclusive: The development of AI should reflect diversity and inclusiveness, such that nobody is easily neglected or underrepresented in AI applications.
- Open and Share: An open AI platform will help avoid data/platform monopolies and share the benefits of AI development.

Use of AI
- Use Wisely and Properly: The users of AI systems should have sufficient knowledge and ability to avoid possible misuse and abuse, so as to maximize the benefits and minimize the risks.
- Informed-consent: AI systems should be developed such that in an unexpected circumstance, the users' own rights and interests are not compromised.
- Education and Training: Stakeholders of AI systems should be educated and trained to help them adapt to the impact of AI development in psychological, emotional, and technical aspects.

Governance of AI
- Optimizing Employment: Developers should have a cautious attitude towards the potential impact of AI on human employment. Explorations of human-AI coordination and new forms of work should be encouraged.
- Harmony and Cooperation: These should be embodied in an AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI under the philosophy of "Optimizing Symbiosis".
- Adaptation and Moderation: Revisions of AI principles, policies, and regulations should be actively considered to adjust them to the development of AI, to the benefit of society and nature.
- Subdivision and Implementation: The various fields and scenarios of AI applications should be actively researched, so that more specific and detailed guidelines can be formulated.
- Long-term Planning: Constant research on the potential risks of augmented intelligence, artificial general intelligence (AGI), and superintelligence should be encouraged, so that AI remains beneficial to society and nature in the future.

These AI principles are aimed at enabling the healthy development of AI, in such a way that it supports the human community for a shared future, to the benefit of humankind and nature in general.

China releasing its own version of AI principles has come as a surprise to many, as China has long been infamous for using AI to monitor citizens. The move comes after the European High-Level Expert Group on AI released its 'Ethics guidelines for trustworthy AI' earlier this year. The Beijing AI Principles are also similar to the AI principles published by Google last year, which likewise provided guidelines for making AI applications beneficial for humans. By releasing its own version of AI principles, is China signalling to the world that it's ready to talk about AI ethics, especially after the U.S. blacklisted China's telecom giant Huawei as a threat to national security?

Some users are surprised by China showing this sudden care for AI ethics.
https://twitter.com/sherrying/status/1133804303150305280
https://twitter.com/EBKania/status/1134246833100865536
While others are impressed with the move.
https://twitter.com/t_gordon/status/1135491979276685312
https://twitter.com/mgmazarakis/status/1134127349392465920

Visit the BAAI website to read more details of the Beijing AI Principles.

Samsung AI lab researchers present a system that can animate heads with one-shot learning
What can Artificial Intelligence do for the Aviation industry
Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos

Opus 1.3, a popular FOSS audio codec with machine learning and VR support, is now generally available

Amrata Joshi
22 Oct 2018
3 min read
Last week, the team at Opus announced the general availability of Opus audio codec version 1.3. Opus 1.3 comes with a new set of features: a reliable speech/music detector based on a recurrent neural network, ambisonics support, more efficient memory use, full compatibility with RFC 6716, and a lot more. Opus is an open, royalty-free audio codec that is highly useful for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Six years after its standardization by the IETF, Opus is included in all major browsers and mobile operating systems, used for a wide range of applications, and is the default WebRTC codec.

New features in Opus audio codec 1.3

Reliable speech/music detector powered by machine learning
Opus 1.3 brings a new speech/music detector. Because it is based on a recurrent neural network, it is both simpler and more reliable than the detector used in version 1.1, which was based on a simple (non-recurrent) neural network followed by an HMM-based layer to combine the neural network results over time. Opus 1.3 uses a recurrent unit, the Gated Recurrent Unit (GRU). The GRU does not just learn how to use its input and memory at a given time; it also learns how and when to update its memory, which helps it remember information for a longer period of time (a minimal GRU step is sketched at the end of this article).

Mixed content encoding gets better
Mixed content encoding, especially at bit rates below 48 kb/s, improves as the new detector helps Opus choose how to encode. Developers will see a clear improvement in speech encoding at lower bit rates, both for mono and stereo.

Encode 3D audio soundtracks for VR easily
This release comes with ambisonics support. Ambisonics can be used to encode 3D audio soundtracks for VR and 360 videos.

The Opus detector won't take much of your space
The Opus detector has just 4986 weights (that fit in less than 5 KB) and takes about 0.02% of a CPU to run in real time, instead of thousands of neurons and millions of weights running on a GPU.

Additional updates
Improvements in security/hardening, the Voice Activity Detector (VAD), and speech/music classification using an RNN round out the release. The major bug fixes in this release are CELT PLC and bandwidth detection fixes.

Read more about the release on Mozilla's official website. Also, check out the demo for more details.

YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
Google releases Oboe, a C++ library to build high-performance Android audio apps
How to perform Audio-Video-Image Scraping with Python
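To make the GRU intuition above concrete, here is a minimal NumPy sketch of a single GRU time step in the standard textbook formulation (biases omitted for brevity); the sizes and random weights are illustrative, not those of Opus's actual detector.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step: gates decide how, and how much, to update the memory h."""
    z = sigmoid(Wz @ x + Uz @ h)             # update gate: how much new memory to write
    r = sigmoid(Wr @ x + Ur @ h)             # reset gate: how much old memory to consult
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))  # candidate memory
    return (1.0 - z) * h + z * h_cand        # blend old memory with the candidate

# Illustrative sizes: 8 input features, 16 hidden units.
rng = np.random.default_rng(0)
W = lambda *shape: 0.1 * rng.standard_normal(shape)
weights = (W(16, 8), W(16, 16), W(16, 8), W(16, 16), W(16, 8), W(16, 16))

h = np.zeros(16)
for _ in range(5):   # feed a few frames of (random) features through the cell
    h = gru_step(W(8), h, *weights)
print(h[:4])         # the hidden state carries information across frames
```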

Facebook is reportedly working on Threads app, an extension of Instagram's 'Close friends' feature to take on Snapchat

Amrata Joshi
02 Sep 2019
3 min read
Facebook is reportedly working on a new messaging app called Threads that would let users share their photos, videos, location, speed, and battery life with only their close friends, The Verge reported earlier this week. Users could selectively share content with close friends without revealing to others the list of friends the content is shared with. The app currently does not display real-time location, but it might notify friends that a user is "on the move", as per the report by The Verge.

How does Threads work?
As per the report by The Verge, the Threads app appears to be similar to the existing messaging product inside the Instagram app. It seems to be an extension of the 'Close friends' feature for Instagram stories, where users can create a list of close friends and make their stories visible just to them. With Threads, users who have opted in to 'automatic sharing' of updates will regularly have their status updates and real-time information shown in the main feed to their close friends. The auto-sharing of statuses will be done using the phone's sensors. Messages from friends appear in a central feed, with a green dot indicating which friends are currently active/online. If a friend has posted a story recently on Instagram, you will be able to see it from the Threads app as well. It also features a camera, which can be used to capture photos and videos and send them to close friends. While Threads is currently being tested internally at Facebook, there is no clarity about its launch date.

A revamped Direct, or a potential Snapchat competitor?
With Threads, if Instagram manages to create a niche around 'close friends', it might shift a significant proportion of Snapchat's users to its platform. In 2017, the team experimented with Direct, a standalone camera messaging app with many filters similar to Snapchat's, but in May this year the company announced that it would no longer support Direct. Threads looks like Facebook's second attempt to compete with Snapchat.
https://twitter.com/MattNavarra/status/1128875881462677504

The Threads app's focus on strengthening 'close friends' relationships might promote more sharing of personal data, including even location and battery life. This raises the question: is our content really safe? Just three months ago, Instagram was in the news for exposing the personal data of millions of influencers online; the exposed data included contact information of Instagram influencers, brands, and celebrities.
https://twitter.com/hak1mlukha/status/1130532898359185409

According to Instagram's current Terms of Use, it does not get ownership over the information shared on it. But here's the catch: it also states that it has the right to host, use, distribute, run, modify, copy, publicly perform or translate, display, and create derivative works of user content, as per the user's privacy settings. In essence, the platform has a right to use the content we post.

Facebook open-sources Hyperparameter autotuning for fastText to automatically find best hyperparameters for your dataset
Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests
Facebook must face privacy class action lawsuit, loses facial recognition appeal, U.S. Court of Appeals rules


Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!

Savia Lobo
15 Jan 2019
2 min read
Project Alias is an open-source, 'teachable' parasite that gives users increased control over their smart home assistants in terms of customization and privacy. Through a companion app, it trains the smart assistant to accept custom wake-up names while jamming the device's built-in microphone. Once trained, Alias can take control over your home assistant by activating it for you.

Tellart designer Bjørn Karmann and Topp designer Tore Knudsen are the brilliant minds behind this experimental project. Knudsen says, "This [fungus] is a vital part of the rain forest, since whenever a species gets too dominant or powerful it has higher chances of getting infected, thus keeping the diversity in balance." He further added, "We wanted to take that as an analogy and show how DIY and open source can be used to create 'viruses' for big tech companies."

The hardware part of Project Alias is a plug-powered microphone/speaker unit that sits on top of a user's smart speaker of choice. It's powered by a pretty typical Raspberry Pi chipset.

[Figure: Input and output logic of Alias]

Both Amazon and Google have a poor track record here, having stored past conversations in the cloud; Project Alias, by contrast, promises privacy. According to FastCompany, the smart home assistants "aren't meant to listen in to your private conversations, but by nature, the devices must always be listening a little to be listening at just the right time–and they can always mishear any word as a wake word."

Knudsen says, "If somebody would be ready to invest, we would be ready for collaboration. But initially, we made this project with a goal to encourage people to take action and show how things could be different . . . [to] ask what kind of 'smart' we actually want in the future."

To know more about Project Alias in detail, head over to Bjørn Karmann's website or GitHub. Here's a short video of Project Alias in action: https://player.vimeo.com/video/306044007

Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report
US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports
France to levy digital services tax on big tech companies like Google, Apple, Facebook, Amazon in the new year