
Tech News - Data

1209 Articles

Largest ‘women in tech’ conference, Grace Hopper Celebration, renounces Palantir as a sponsor due to concerns over its work with ICE

Sugandha Lahoti
29 Aug 2019
4 min read

Grace Hopper Celebration, the world's largest conference for women in tech, has dropped Palantir as a sponsor over concerns about the company's work with the United States' Immigration and Customs Enforcement (ICE). The news came after concerned citizens published a petition on Change.org demanding that AnitaB.org, the organization for women in computing that produces the Grace Hopper Celebration conference, renounce Palantir as a sponsor. At the time of writing, 326 people had signed the petition, against a target of 500.

The petition reads, “Funding well-respected and impactful events such as GHC is one of the ways in which Palantir can try to buy positive public sentiment. By accepting Palantir’s money, proudly displaying them as a sponsor, and giving them a platform to recruit, AnitaB.org is legitimizing Palantir's work with ICE to GHC's attendees, enabling ICE’s mission, and helping Palantir minimize its role in human rights abuses.”

The petition called on AnitaB.org to:
- Drop Palantir as a sponsor for GHC 2019 and future conferences
- Release a statement denouncing the prior sponsorship and Palantir’s involvement with ICE
- Institute and publicly release an ethics vetting policy for future corporate sponsors and recruiters

https://twitter.com/techworkersco/status/1166740206461964288

Several activists and women in tech had urged Grace Hopper Celebration to renounce Palantir as its sponsor.

https://twitter.com/jrivanob/status/1166734671624822784
https://twitter.com/sarahmaranara/status/1163231777772703744
https://twitter.com/RileyMancuso/status/1157088427977904131

Following this public opprobrium, AnitaB.org Vice President of Business Development and Partnership Success, Robert Read, released a statement yesterday: “At AnitaB.org we do our best to promote the basic rights and dignity of every person in all that we do, including our corporate sponsorship and events program. Palantir has been independently verified as providing direct technical assistance that enables the human rights abuses of asylum seekers and their children at US southern border detention centers. Therefore, at this time, Palantir will no longer be a sponsor of Grace Hopper Celebration 2019.”

Prior to Grace Hopper Celebration, UC Berkeley’s Privacy Law Scholars Conference dropped Palantir as a sponsor because many in its community, including members of the program committee that selects papers and awards, were uncomfortable with the company's practices. Lesbians Who Tech, a leading LGBTQ organization, followed suit, confirming its boycott of Palantir to The Verge after members of its community asked it to drop Palantir as a sponsor in light of the company's recent contract work with the US government. “Members of our community (the LGBTQ community) contacted us with concern around Palantir’s participation with the job fair,” a Lesbians Who Tech representative said, “because of the recent news that the company’s software has been used to aid ICE in efforts to gather, store, and search for data on undocumented immigrants, and reportedly playing a role in workplace raids.”

Palantir is involved in conducting raids on immigrant communities as well as in enabling workplace raids: Mijente

According to reports, Palantir’s mobile app FALCON is being used by ICE to carry out raids on immigrant communities as well as to enable workplace raids. In May this year, new documents released by Mijente, an advocacy organization, revealed that Palantir was responsible for the 2017 operation that targeted and arrested family members of children crossing the border alone. The documents stand in sharp contrast to what Palantir has said its software does. As part of the operation, ICE arrested 443 people solely for being undocumented. Palantir's case management tool, Investigative Case Management, was shown to be used at the border to arrest undocumented people discovered in investigations of children who crossed the border alone, including the sponsors and family members of these children.

Several open source communities, activists, and developers have been demonstrating strongly against Palantir for its involvement with ICE. This includes Entropic, which is debating the idea of banning Palantir employees from participating in the project. Back in August 2018, the Lerna team took a strong stand against ICE by modifying its MIT license to ban companies that have collaborated with ICE from using Lerna. Last month, a group of Amazon employees sent an internal email to the We Won’t Build It mailing list, calling on Amazon to stop working with Palantir.

Read next:
Fairphone 3 launches as a sustainable smartphone alternative that supports easy module repairs
#Reactgate forces React leaders to confront the community’s toxic culture head on
Stack Overflow faces backlash for removing the “Hot Meta Posts” section; community feels left out of decisions

Neural Network Intelligence: Microsoft’s open source automated machine learning toolkit

Amey Varangaonkar
01 Oct 2018
2 min read

Google’s Cloud AutoML now has competition; Microsoft has released an open source automated machine learning toolkit of its own. Dubbed Neural Network Intelligence, the toolkit lets data scientists and machine learning developers perform tasks such as neural architecture search and hyperparameter tuning with relative ease. Per Microsoft’s official page, it provides data scientists, machine learning developers, and AI researchers with the tools they need to customize their AutoML models across various training environments. The toolkit was announced in November 2017 and spent a considerable period in the research phase before being released for public use recently.

Who can use the Neural Network Intelligence toolkit?

Microsoft’s highly anticipated toolkit for automated machine learning is a good fit if:
- You want to try out different AutoML algorithms for training your machine learning model
- You want to run AutoML jobs in different training environments, including remote servers and the cloud
- You want to implement your own AutoML algorithms and compare their performance with other algorithms
- You want to incorporate your AutoML models into your own custom platform

With the Neural Network Intelligence toolkit, data scientists and machine learning developers can train and customize their machine learning models more effectively. The tool is expected to go head to head with Auto-Keras, another open source AutoML library for deep learning. Auto-Keras has quickly gained traction, with more than 3,000 stars on GitHub, suggesting the growing popularity of automated machine learning. You can download and learn more about this AutoML toolkit on its official GitHub page; a minimal sketch of what a trial script looks like follows the links below.

Read next:
What is Automated Machine Learning (AutoML)?
Top AutoML libraries for building your ML pipelines
Anatomy of an automated machine learning algorithm (AutoML)
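To make the hyperparameter-tuning workflow concrete, here is a hypothetical, minimal trial script in the style of NNI's documented trial API. The function names (nni.get_next_parameter, nni.report_final_result) follow the project's documentation, but treat the exact signatures, the search-space file, and the model choice as assumptions to verify against the GitHub page.

```python
# Hypothetical NNI trial script: a minimal sketch of how the toolkit's
# hyperparameter-tuning loop is typically wired up (confirm the current
# API against the official GitHub page/docs).
import nni
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score


def main():
    # The tuner proposes hyperparameters for this trial, drawn from a
    # search space defined in a separate experiment configuration file.
    params = nni.get_next_parameter() or {}
    alpha = float(params.get("alpha", 1e-4))
    max_iter = int(params.get("max_iter", 500))

    X, y = load_digits(return_X_y=True)
    clf = SGDClassifier(alpha=alpha, max_iter=max_iter, tol=1e-3)
    score = cross_val_score(clf, X, y, cv=3).mean()

    # Report the trial's accuracy back so the tuner can decide what to
    # try next (and so the experiment UI can plot progress).
    nni.report_final_result(score)


if __name__ == "__main__":
    main()
```

In a real experiment, a script like this would be launched repeatedly by the toolkit's experiment controller together with a search-space definition and a tuner configuration, rather than run by hand.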

Netflix brings in Verna Myers as new VP of inclusion strategy to boost cultural diversity

Natasha Mathur
30 Aug 2018
2 min read

Netflix announced yesterday that Verna Myers is joining the company as Vice President, Inclusion Strategy. In her new role, Myers will help implement strategies that embed cultural diversity, inclusion, and equity into the varied aspects of Netflix's operations worldwide.

https://twitter.com/VernaMyers/status/1034855768682422272

According to Jessica Neal, Netflix Chief Talent Officer, "Having worked closely with Vernā as a consultant on a range of organizational issues, we are thrilled that she has agreed to bring her talents to this new and important role."

Myers, a graduate of Harvard Law School, has spent the past two decades as the head of The Vernā Myers Company, where her major role involved consulting with major corporations and organizations on how to eradicate barriers based on race, ethnicity, gender, sexual orientation, and other differences. She has also written several self-help books, been an active TED speaker, and contributed to well-known publications such as Refinery29, The Atlantic, and Forbes. Netflix takes cultural diversity seriously; two months back it fired its chief communications officer, Jonathan Friedland, for using the N-word in a meeting.

"As a global company dedicated to attracting the best people and representing a broad range of perspectives, Vernā will be an invaluable champion of our efforts to build a culture where all employees thrive," added Jessica Neal.

"I have been a longtime fan of the inclusive and diverse programming and talent at Netflix. I was so impressed by their mission, their excellence, and their decision to take their inclusion and diversity efforts to a higher level. I am excited and look forward to collaborating across Netflix to establish bold, innovative frameworks and practices that will attract and sustain high-performing, diverse teams," said Myers.

For more information, check out the official Netflix blog post.

Read next:
How everyone at Netflix uses Jupyter notebooks, from data scientists and machine learning engineers to data analysts
20 lessons on bias in machine learning systems by Kate Crawford at NIPS 2017
Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’

Handpicked for your weekend reading - 1st Dec 2017

Aarthi Kumaraswamy
01 Dec 2017
1 min read

Expert in Focus: Sebastian Raschka, on how machine learning has become more accessible

3 things that happened this week in data science news:
- Data science announcements at Amazon re:Invent 2017
- IOTA, the cryptocurrency that uses Tangle instead of blockchain, announces Data Marketplace for Internet of Things
- Cloudera Altus Analytic DB: Modernizing the cloud-based data warehouses

Get hands-on with these tutorials:
- Building a classification system with logistic regression in OpenCV
- How to build a Scatterplot in IBM SPSS

Do you agree with these insights & opinions?
- Highest Paying Data Science Jobs in 2017
- 5 Ways Artificial Intelligence is Transforming the Gaming Industry
- 10 Algorithms every Machine Learning Engineer should know

Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research

Amrata Joshi
01 Feb 2019
2 min read

Speech synthesis has advanced significantly in recent years, with neural networks from DeepMind creating realistic, human-like voices, and Google is now working in the same direction to advance state-of-the-art research on fake audio detection. Google Maps and Google Home already use Google's speech synthesis, or text-to-speech (TTS), technology. The Google News Initiative (GNI) announced last year that it wanted to tackle “deep fakes” and other systems that try to bypass voice authentication systems.

Yesterday, Google AI and the Google News Initiative (GNI) partnered to create a body of synthetic speech containing thousands of phrases spoken by Google's deep learning text-to-speech (TTS) models. The corpus contains phrases from English newspaper articles spoken in 68 synthetic voices covering a large variety of regional accents.

Malicious actors can synthesize speech to fool voice authentication systems, or even create forged audio recordings to defame public figures. Deep fakes, audio or video clips generated by deep learning models, can be exploited to manipulate trust in media: it becomes difficult to distinguish real from tampered content, and bad actors can also claim that authentic data is fake. This is why a synthetic speech database was needed. The effort also follows Google’s AI Principles, which call for “strong safety practices to avoid unintended results that create risks of harm.”

Currently, the dataset is available to participants of the 2019 ASVspoof challenge for creating countermeasures against fake speech. The aim is to make automatic speaker verification (ASV) systems more secure. ASVspoof participants can develop systems that learn to distinguish between real and computer-generated speech by training models on both; a toy sketch of that setup follows the links below. The results of the challenge will be announced in September at the 2019 Interspeech conference in Graz, Austria.

Read next:
Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
You can now publish PWAs in the Google Play Store as Chrome 72 for Android ships with Trusted Web Activity feature
Former Google Cloud CEO joins Stripe board just as Stripe joins the global Unicorn Club
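As a toy illustration of that training setup (this is not Google's or the ASVspoof organizers' actual baseline), the sketch below fits a binary classifier on log-mel spectrogram statistics from genuine and synthetic clips. The file lists are placeholders; in the real challenge they would come from the dataset protocols.

```python
# Toy spoof-detection sketch: summarise each clip as log-mel spectrogram
# statistics and fit a simple classifier on bona fide vs. synthetic audio.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def features(path, sr=16000, n_mels=40):
    """Mean and std of the log-mel spectrogram of one audio file."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    logmel = librosa.power_to_db(mel)
    return np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])


# Placeholder file lists standing in for the challenge's training protocol.
bona_fide_files = ["real_0001.wav", "real_0002.wav"]
spoof_files = ["tts_0001.wav", "tts_0002.wav"]

X = np.stack([features(p) for p in bona_fide_files + spoof_files])
y = np.array([0] * len(bona_fide_files) + [1] * len(spoof_files))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("spoof-detection accuracy:", clf.score(X_te, y_te))
```

Real countermeasure systems use far richer features and models, but the shape of the task is the same: label genuine versus generated speech and learn the boundary between them.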

Bitcoin Core escapes a collapse from a Denial-of-Service vulnerability

Savia Lobo
21 Sep 2018
2 min read

A few days back, Bitcoin Core developers discovered a vulnerability in the Bitcoin Core software that would have allowed a miner to insert a ‘poisoned block’ into the blockchain, crashing the nodes running the Bitcoin software around the world. The software patch notes state, “A denial-of-service vulnerability (CVE-2018-17144) exploitable by miners has been discovered in Bitcoin Core versions 0.14.0 up to 0.16.2.” The developers recommended that users upgrade any of the vulnerable versions to 0.16.3 as soon as possible.

CVE-2018-17144: The denial-of-service vulnerability

The vulnerability was introduced in Bitcoin Core version 0.14.0, which was first released in March 2017, but the issue wasn't found until just two days ago, prompting contributors to the codebase to take action and ultimately release a tested fix within 24 hours. In a report, The Next Web explained: “The bug relates to its consensus code. It meant that some miners had the option to send transaction data twice, causing the Bitcoin network to crash when attempting to validate them. As such invalid blocks need to be mined anyway, only those willing to disregard block reward of 12.5BTC ($80,000) could actually do any real damage.” A simplified sketch of the kind of duplicate-input check involved follows the links below.

The bug was not limited to Bitcoin Core itself: some cryptocurrencies built on Bitcoin Core’s code were also affected. Litecoin, for example, patched the same vulnerability on Tuesday. Bitcoin, however, is far too decentralized to be brought down by any single entity. TNW also notes, “While never convenient, responding appropriately to such potential dangers is crucial to maintaining the integrity of blockchain tech – especially when reversing transactions is not an option.” The timely discovery of this vulnerability was a narrow escape from a Bitcoin collapse.

To read about this news in detail, head over to The Next Web’s full coverage.

Read next:
A guide to safe cryptocurrency trading
Apple changes App Store guidelines on cryptocurrency mining
Crypto-ML, a machine learning powered cryptocurrency platform
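To illustrate the class of bug (this is not Bitcoin Core's actual C++ code), the sketch below shows the consensus-style rule the fix restored: a transaction that references the same previous output twice must be rejected outright rather than applied and caught by an assertion later.

```python
# Simplified, illustrative duplicate-input check. CVE-2018-17144 boiled
# down to validation not rejecting a transaction that lists the same
# previous output (outpoint) twice, so a crafted block could crash nodes.
from collections import namedtuple

OutPoint = namedtuple("OutPoint", ["txid", "index"])   # which output is being spent
Transaction = namedtuple("Transaction", ["inputs"])    # list of OutPoints


def has_duplicate_inputs(tx: Transaction) -> bool:
    """Return True if any previous output is referenced more than once."""
    seen = set()
    for outpoint in tx.inputs:
        if outpoint in seen:
            return True
        seen.add(outpoint)
    return False


def check_transaction(tx: Transaction) -> None:
    # Consensus-style rule: duplicate inputs make the transaction (and any
    # block containing it) invalid up front, instead of tripping an
    # assertion while the block is being applied.
    if has_duplicate_inputs(tx):
        raise ValueError("bad-txns-inputs-duplicate")


# A double-spend attempt within a single transaction:
evil = Transaction(inputs=[OutPoint("ab" * 32, 0), OutPoint("ab" * 32, 0)])
try:
    check_transaction(evil)
except ValueError as err:
    print("rejected:", err)
```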

Elastic Stack 7.2.0 releases Elastic SIEM and general availability of Elastic App Search

Vincy Davis
27 Jun 2019
4 min read

Yesterday, the team behind Elastic Stack announced the release of Elastic Stack 7.2.0. The major highlight of this release is the free availability of Elastic SIEM (security information and event management) as part of Elastic’s default distribution. The Elastic SIEM app provides interactivity, ad hoc search, and responsive drill-downs, packaged into an intuitive product experience. Elastic Stack 7.2.0 also makes Elastic App Search freely available to users; until now it was only offered as a hosted service. With this release, Elastic has also advanced its Kubernetes and container monitoring initiative to include monitoring of the NATS open source messaging system and CoreDNS, and to support CRI-O format container logs.

https://youtu.be/bmx13X87e2s

What is Elastic SIEM?

The SIEM app is an interactive UI workspace for security teams to triage events and perform initial investigations. It provides a Timeline Event Viewer that allows analysts to gather and store evidence of an attack, pin and comment on relevant events, and share their findings, all from within Kibana, the open source data visualization plugin for Elasticsearch. Elastic SIEM is being introduced as a beta in the 7.2 release of the Elastic Stack.

(Image source: Elastic blog)

The Elastic SIEM app enables analysis of host-related and network-related security events as part of alert investigations or interactive threat hunting, including the following:
- The Hosts view in the SIEM app provides key metrics regarding host-related security events, and a set of data tables that enable interaction with the Timeline Event Viewer.
- The Network view in the SIEM app informs analysts of key network activity metrics, facilitates investigation-time enrichment, and provides network event tables that enable interaction with the Timeline Event Viewer.
- Analysts can easily drag objects of interest into the Timeline Event Viewer to create the required query filter and get to the bottom of an alert. Auto-saving ensures that the results of an investigation are available to incident response teams.

Elastic SIEM is available on the Elasticsearch Service on Elastic Cloud, or for download. Since this is a major feature of Elastic Stack, it has people quite excited.

https://twitter.com/cbnetsec/status/1143661272594096128
https://twitter.com/neu5ron/status/1143623893476958208
https://twitter.com/netdogca/status/1143581280837107714
https://twitter.com/tommyyyyyyyy/status/1143791589325725696

General availability of Elastic App Search on-premise

With Elastic Stack 7.2.0, the Elastic App Search product becomes freely available as a downloadable, self-managed search solution. Though Elastic App Search has been around for over a decade as a cloud-based solution, Elastic users will now have greater flexibility to build fluid and engaging search experiences. As part of this release, the following capabilities are offered in downloadable form:
- Simple and focused data ingestion
- Powerful search APIs and UI frameworks
- Insightful analytics
- Intuitive relevance controls

Elastic Stack 7.2.0 also introduces the Metrics Explorer. It enables users to quickly visualize the most important infrastructure metrics and interact with them using common tags and chart groupings inside the Infrastructure app. With this feature, users can create a chart and see it on a dashboard.

Other highlights
- Elasticsearch simplifies search-as-you-type, adds a UI around snapshot/restore, gives more control over relevance without sacrificing performance, and much more. (A sketch of the new search-as-you-type field follows the links below.)
- Kibana makes it even easier to build a secure, multi-tenant Kibana instance with advanced RBAC for Spaces. Elastic Stack 7.2.0 also introduces a kiosk mode for Canvas, and maps created in the new Maps app can now be embedded in any Kibana dashboard. There are also new, easy-on-the-eyes dark-mode map tiles and much more.
- Beats improves edge-based processing with a new JavaScript processor, and more.
- Logstash gets faster, with the Java execution pipeline going GA. It now fully supports JMS as an input and output, and more.

Users are very impressed with the features introduced in Elastic Stack 7.2.0:

https://twitter.com/mikhail_khusid/status/1143695869411307526
https://twitter.com/markcartertm/status/1143652867284189184

Visit the Elastic blog for more details.

Read next:
Core security features of Elastic Stack are now free!
Elasticsearch 7.0 rc1 releases with new allocation and security features
Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
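Here is a minimal sketch of the new search-as-you-type convenience, using the plain REST API through Python's requests library. The index name and document are made up, and the exact mapping and query options should be confirmed against the Elasticsearch 7.2 reference documentation; the sub-field names (_2gram, _3gram) follow the documented pattern for the search_as_you_type field type.

```python
# Minimal search-as-you-type sketch against a local Elasticsearch 7.2 node.
import requests

ES = "http://localhost:9200"

# 1. Create an index whose "title" field uses the search_as_you_type type.
requests.put(
    f"{ES}/articles",
    json={"mappings": {"properties": {"title": {"type": "search_as_you_type"}}}},
)

# 2. Index a document and refresh so it is immediately searchable.
requests.post(
    f"{ES}/articles/_doc/1",
    json={"title": "Elastic Stack 7.2.0 releases Elastic SIEM"},
    params={"refresh": "true"},
)

# 3. Query with a partial, as-typed prefix; bool_prefix multi_match spans
#    the auto-generated shingle sub-fields.
resp = requests.get(
    f"{ES}/articles/_search",
    json={
        "query": {
            "multi_match": {
                "query": "elastic si",
                "type": "bool_prefix",
                "fields": ["title", "title._2gram", "title._3gram"],
            }
        }
    },
)
print(resp.json()["hits"]["hits"])
```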

Meet CIMON, the first AI robot to join the astronauts aboard ISS

Natasha Mathur
03 Jul 2018
3 min read

A ball-shaped robot named CIMON (Crew Interactive Mobile Companion) joined the crew aboard SpaceX's CRS-15 Falcon 9 launch to the International Space Station (ISS) two days ago. The robot was created to see whether a bot with AI capabilities can boost the efficiency and morale of a crew on longer missions. It was built by IBM in conjunction with the German Aerospace Center (DLR), and the project is led by DLR astronaut Alexander Gerst.

(Source: SciNews)

Let’s have a look at CIMON’s functionalities and features.
- The AI robot comes with a language user interface that enables it to respond to spoken commands. It displays repair instructions on screen via voice command, which keeps an astronaut’s hands free.
- It is also capable of displaying procedures for experiments, thereby serving as the space station’s voice-controlled database. Tasks and activities aboard the ISS can be quite complicated, so an AI robot can help with them.
- The AI bot can be easily called upon by astronauts for assistance. For instance, astronauts can ask the robot to display certain documents and media in their field of view. They can also ask CIMON to record or play back experiments with its onboard camera.
- CIMON is capable of sensing the tone of conversation among the crew. In fact, its behavior is quite similar to R2-D2’s. It can also quote dialogue from famous movies like E.T. the Extra-Terrestrial.
- It can move freely and perform rotational movements, like shaking its head back and forth to indicate disapproval.

Key features:
- The AI bot is ball-shaped with a flattened surface. It has no sharp edges, which makes it safe equipment for the crew.
- It comes with 12 internal fans that allow it to move in all directions.
- It is programmed with an ISTJ personality, i.e. introverted, sensing, thinking, and judging.
- It comes equipped with a kill switch.
- IBM’s Watson technology is used for CIMON’s AI language and comprehension system.
- It cost less than 6 million dollars and took less than two years to develop.

The Falcon 9 launch that took place on 29th June was successful, and CIMON is now undergoing astronaut assistant training. To know more, check out the official post by NASA.

Read next:
Adobe to spot fake images using Artificial Intelligence
Microsoft starts AI School to teach Machine Learning and Artificial Intelligence
IBM unveils world’s fastest supercomputer with AI capabilities, Summit

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read

Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. A major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and then quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn, and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google’s ML containers don’t support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn, and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations, and text, as well as Google Kubernetes Engine clusters for orchestrating multiple container deployments. The containers also ship with packages and tools such as Nvidia’s CUDA, cuDNN, and NCCL.

Docker images work on cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud, said in a blog post, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).” A sketch of pulling and running one of these images locally follows the links below.

For more information, visit the AI Platform Deep Learning Containers documentation.

Read next:
Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
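For a sense of the local workflow, here is an illustrative way to pull and run one of the containers with the Docker SDK for Python. The image path follows the naming pattern GCP uses for its deep learning images, but treat the exact repository, tag, exposed port, and mount paths as assumptions; list the currently available images in the official documentation before relying on them.

```python
# Illustrative local run of a deep learning container via the Docker SDK.
import docker

client = docker.from_env()

# Assumed image name; check the AI Platform Deep Learning Containers docs
# for the real list of repositories and tags.
image = "gcr.io/deeplearning-platform-release/tf2-cpu"
client.images.pull(image)

# Start the container (which serves JupyterLab in these images) and expose
# it on localhost:8080, mounting a local notebooks directory.
container = client.containers.run(
    image,
    detach=True,
    ports={"8080/tcp": 8080},
    volumes={"/home/me/notebooks": {"bind": "/home/jupyter", "mode": "rw"}},
)
print("container id:", container.short_id)
```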

Retail’s urgency: Addressing customer and inventory needs with data from What's New

Anonymous
14 Dec 2020
5 min read

Jeff Huckaby, Market Segment Director, Retail and Consumer Goods, Tableau | December 14, 2020

Retail had already shifted from being product-centric to customer-centric, influenced by omni-channel initiatives that encourage digital transformation. Then Covid-19 hit. What retailers learned is that they must be willing and able to adapt quickly to internal and external forces. The silver lining: at the heart of digital transformation and adaptability is data.

According to McKinsey & Company, due to Covid-19, companies accelerated digital transformation by seven years. Tableau observed this with our retail customers as most staff were forced to work remotely, curb-side service became a required option for customers, and innovative solutions were needed to protect the safety of employees and customers. And it is no surprise that digital commerce exploded with an increased desire to shop online and limit face-to-face interactions. According to Salesforce, a new global record was set on Black Friday for digital revenue, with over $60B in online spend, a growth of 30 percent over last year.

As our retail and consumer goods customers focus on wrapping up the holiday shopping season and a tumultuous year, we wanted to give them an early preview of an upcoming whitepaper, releasing in January 2021. The visualized data will address common but nagging issues such as in-stock position and product availability, the online customer journey, competitive pricing, supply chain optimization, and loyalty program analysis, among others. Let’s explore visual analyses that reveal critical inventory and customer location insights, which lead to better site location and marketing opportunities.

On-Shelf Availability Dashboards

(Photo by Jeff Huckaby at a grocery store on March 11, 2020.)

Empty shelves were a typical scene in March, posing problems for stores and customers. Need toilet paper or baby formula? There was none. They flew off the shelves as quickly as they were stocked. These dashboards connect inventory and availability to grocers, suppliers, stores, and warehouses, so the fast-moving consumer goods (FMCG) industry can act to eliminate out-of-stocks. Here’s to more availability of toilet paper in 2021!

“Stock in Trade” Dashboard

This visual analysis, created by Tableau partner Atheon Analytics, helps retailers and their suppliers quickly and easily see where inventory is under- or over-stocked by grocer and store location. As a supplier, you can further examine product availability in warehouses (depots in the UK) to know where stock must be allocated, ensuring availability at certain stores. Unifying retailers, suppliers, and manufacturers around this near real-time data is essential going forward to support constantly changing customer demands.

In the next example, see the product data, category, or sub-category rolled up to the individual grocer. Visualized on the right is current demand compared with stock levels, so you know when you are approaching dangerously low or no inventory to support customers. Atheon Analytics brings together this critical information from suppliers and retailers in Snowflake to work effectively from one operational canvas and act in unison.

Customer Location and Site Selection Dashboards

With lockdowns and work-from-home mandates leading to a reduction in commuting, many retailers observed a dramatic change in customer flows. They should take a fresh, ongoing look at current customer location data and competitors to quickly and confidently understand the changing dynamics of their local markets and how customer composition changes throughout the day.

Leading that charge is Tableau partner Lovelytics, which created a “Customer Location and Site Selection” dashboard powered by global location provider Foursquare. It analyzes the Foursquare Visits data feed using geospatial analysis, offers an option to add your own customer demographics and traffic data, and enables businesses to pinpoint an optimal site for opening, or where to use an existing location, helping inform customer marketing and targeting. Evaluate via spatial analysis the number of visitors, the amount of foot traffic, and how the flow of customers changes. This information could easily be combined with real-time sales and loyalty data, allowing restaurants, in this example, to use Salesforce Einstein to create a churn analysis, predict customers they may lose, and know when to activate a new retention campaign within Salesforce Marketing Cloud.

This location view specifically analyzes more than 1.3 million site visits to various restaurant chains in the Denver, Colorado area, with the option to look closely by store location, day, and hour. In Tableau, it is easy to “play back” how local areas are changing and how that impacts existing stores. It is also an incredible way to ensure new site selection won’t cannibalize existing locations and that you allocate the correct labor to offer a safe, high-quality experience for customers.

Benefits of inventory and customer clarity for retail

Demystifying inventory availability and ensuring grocers, suppliers, and warehouses (or depots) are aligned ensures that the right inventory gets to the right stores as customer demands and traffic change on a dime. This same data can help remove the guesswork from new store construction or help prioritize remodels.

We look forward to sharing the remaining dashboards next month, and all interactive examples will be free to access on Tableau Public. Have a very safe and enjoyable holiday season!

Join the discussion

Join over 3,500 retail and consumer goods customers to discuss retail analytics, ask questions, and provide help.

About the partners

We want to thank Atheon Analytics and Lovelytics for their participation. To learn more about these incredible examples, please connect with them.

Facebook open-sources PyText, a PyTorch based NLP modeling framework

Amrata Joshi
17 Dec 2018
4 min read

Last week, the team at Facebook AI Research announced that they are open sourcing the PyText NLP framework. PyText, a deep-learning-based NLP modeling framework, is built on PyTorch. Facebook is open sourcing some of the conversational AI tech powering the Portal video chat display and M suggestions on Facebook Messenger.

https://twitter.com/fb_engineering/status/1073629026072256512

How is PyText useful for Facebook?

The PyText framework is used for tasks like document classification, semantic parsing, sequence tagging, and multitask modeling. It fits easily into research and production workflows and emphasizes robustness and low latency to meet Facebook’s real-time NLP needs. PyText is also responsible for models powering more than a billion daily predictions at Facebook. The framework addresses the conflicting requirements of enabling rapid experimentation and serving models at scale by providing simple interfaces and abstractions for model components. It uses PyTorch’s ability to export models for inference through the optimized Caffe2 execution engine.

Features of PyText
- Production-ready models for various NLP/NLU tasks such as text classifiers, sequence taggers, and more.
- Distributed-training support, built on the new C10d backend in PyTorch 1.0.
- Training support and extensible components that help in creating new models and tasks.
- Modularity that allows developers to create new pipelines from scratch and modify existing workflows.
- A simplified workflow for faster experimentation.
- Access to a rich set of prebuilt model architectures for text processing and vocabulary management.
- Serves as an end-to-end platform for developers; the modular structure helps engineers incorporate individual components into existing systems.
- Added support for string tensors to work efficiently with text in both training and inference.

PyText for NLP development

PyText improves the workflow for NLP and supports distributed training for speeding up NLP experiments that require multiple runs.

Easily portable: PyText models can be easily shared across different organizations in the AI community.

Prebuilt models: With models focused on NLP tasks such as text classification, word tagging, semantic parsing, and language modeling, the framework makes it easy to apply pre-built models to new data.

Contextual models: To improve conversational understanding in various NLP tasks, PyText uses contextual information, such as an earlier part of a conversation thread. There are two contextual models in PyText: a SeqNN model for intent labeling tasks and a Contextual Intent Slot model for joint training on both tasks.

PyText exports models to Caffe2

PyText uses PyTorch 1.0’s capability to export models for inference through the optimized Caffe2 execution engine. Native PyTorch models require a Python runtime, which is not scalable because of the multithreading limitations of Python’s Global Interpreter Lock. Exporting to Caffe2 provides an efficient multithreaded C++ backend for serving huge volumes of traffic efficiently. (A stripped-down illustration of this kind of export path follows the links below.)

PyText’s capability to test new state-of-the-art models will be improved further in the next release. Since putting sophisticated NLP models on mobile devices is a big challenge, the team at Facebook AI Research will work towards building an end-to-end workflow for on-device models. The team plans to add support for multilingual modeling and other modeling capabilities. They also plan to make models easier to debug, and might add further optimizations for distributed training. “PyText has been a collaborative effort across Facebook AI, including researchers and engineers focused on NLP and conversational AI, and we look forward to working together to enhance its capabilities,” said the Facebook AI Research team.

Users are excited about this news and want to explore more.

https://twitter.com/ezylryb_/status/1073893067705409538
https://twitter.com/deliprao/status/1073671060585799680

To know more details, check out the release notes on GitHub.

Read next:
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices
Facebook retires its open source contribution to Nuclide, Atom IDE, and other associated repos
Australia’s ACCC publishes a preliminary report recommending Google, Facebook be regulated and monitored for discriminatory and anti-competitive behavior
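The sketch below is not PyText's actual API; it is a stripped-down illustration of the export path the article describes: a tiny PyTorch text classifier is traced and exported through ONNX so that an optimized C++ runtime (Caffe2, in PyText's case) can serve it without a Python interpreter in the loop. The model, vocabulary size, and shapes are made up.

```python
# Toy PyTorch text classifier plus ONNX export, illustrating (not
# reproducing) the PyTorch -> Caffe2 serving path that PyText automates.
import torch
import torch.nn as nn


class TinyTextClassifier(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=64, num_classes=4):
        super().__init__()
        # EmbeddingBag averages the token embeddings of each document.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor of word indices
        return self.fc(self.embedding(token_ids))


model = TinyTextClassifier()
example = torch.randint(0, 10000, (1, 12))   # one 12-token document

# Export a traced graph; the serving side loads this artifact instead of
# running Python at inference time.
torch.onnx.export(model, example, "text_classifier.onnx",
                  input_names=["token_ids"], output_names=["logits"])
print("exported text_classifier.onnx")
```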

Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud

Melisha Dsouza
25 Oct 2018
3 min read

Earlier this week, Google announced its plans to launch a Cloud Robotics platform for developers in 2019. Since the early days of ‘cloud robotics’ in 2010, Google has explored various aspects of the field. Now, with the launch of the Cloud Robotics platform, Google will combine the power of AI, robotics, and the cloud to deploy cloud-connected collaborative robots. The platform will encourage efficient robotic automation in highly dynamic environments. The core infrastructure of the platform will be open source, and users will pay only for the services they use.

Features of the Cloud Robotics platform:

#1 Critical infrastructure
The platform will introduce secure and robust connectivity between robots and the cloud. Kubernetes will be used for the management and distribution of digital assets, and Stackdriver will assist with logging, monitoring, alerting, and dashboarding. Developers will gain access to Google’s data management and AI capabilities, ranging from Cloud Bigtable to Cloud AutoML. Standardized data types and open APIs will help developers build reusable automation components. Moreover, open APIs support interoperability, which means integrators can compose end-to-end solutions with collaborative robots from different vendors.

#2 Specialized tools
The tools provided with the platform will help developers build, test, and deploy software for robots with ease. Automation solutions can easily be composed and deployed in customers’ environments through system integrators, and operators can monitor robot fleets and ongoing missions as well. Plus, users only pay for the services they use, and if a user decides to move to another cloud provider, they can take their data with them.

#3 Fostering powerful first-party services and third-party innovation
Google’s initial Cloud Robotics services can be applied to various use cases like robot localization and object tracking. The services will process sensor data from multiple sources and use machine learning to obtain information and insights about the state of the physical world. This will encourage an ecosystem of hardware and applications that can be used and re-used for collaborative automation.

#4 Industrial automation made easy
Industrial automation requires extensive custom integration. Collaborative robots can help improve the flexibility of the overall process, save costs, and avoid vendor lock-in. That said, it is difficult to program robots to understand and react to the unpredictable changes of the physical human world. The Google Cloud platform will address these issues by providing flexible automation services such as the Cartographer, Spatial Intelligence, and Object Intelligence services.

Watch this video to know more about these services: https://www.youtube.com/watch?v=eo8MzGIYGzs&feature=youtu.be

Alternatively, head over to Google's blog to know more about this announcement.

Read next:
What’s new in Google Cloud Functions serverless platform
Cloud Filestore: A new high performance storage option by Google Cloud Platform
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence

Three Ways to Quickly Analyze Fundraising Performance with Tableau from What's New

Anonymous
10 Dec 2020
6 min read

Jarrett O’Brien, Nonprofit Cloud Product Marketing Director, Salesforce | December 10, 2020

2020 has delivered so many unknowns in terms of how nonprofits operate, convene supporters, and plan for the future. All these unknowns have come with financial uncertainty, which forces leadership to make tough decisions and challenging pivots to keep their organizations thriving. These decisions are best made with strong conviction and a foundation of rock-solid data.

Eric Dayton, Director of Data at buildOn, faced many unknowns when he came home from Malawi on March 3rd, a week before the COVID shutdown. Fortunately, buildOn’s ongoing investment in its digital transformation helped the organization shift gears smoothly and chart a course of action during the early days of the pandemic. Dayton shared how that investment helped: “Data and transparency are leading tenets for buildOn and the communities we serve. If we hadn't dug deep and done all the work to digitize our mission over the last few years, we wouldn't be so well set up to succeed.” Dayton was able to help his organization quickly pivot its fundraising and programs to continue to achieve its goals.

For many organizations, the right technology, data, and strategy can make all the difference. A misplaced metric can erode trust in a board or funder meeting, but the right one can get your program funded. Using robust data to inform your decisions can help your nonprofit become more agile, while a lack of data can hold nonprofits back from making decisions at all.

“We grew up in the 1990s and early 2000s. We were spreadsheet-based, with simple, digestible KPIs presented to main stakeholders in a basic Excel file,” shared Dayton. “That spreadsheet grew into a system teams relied on, but it didn’t function or scale well. Nonprofits think these manual systems are helping their business, but they’re actually the source of 90% of the organization’s problems, especially when it comes to analyzing large amounts of data.”

Digital-first thinking from buildOn supports a data-driven culture that empowers staff to lead with confidence and navigate uncertain times. And they’re not alone in finding success in a digital-forward approach: in the 3rd edition of the Nonprofit Trends Report, we saw 27% of organizations with high digital maturity exceed their fundraising goals during the pandemic, compared to organizations with low digital maturity exceeding only 7% of their goals.

(Tableau dashboard from the 3rd edition of the Nonprofit Trends Report, showing nonprofit organizations that exceeded goals by digital maturity.)

In an environment that’s shifting and evolving constantly, nonprofit fundraising leaders are seeking answers to these urgent questions:
- What is our revenue health, and how is it trending?
- Which effective fundraising strategies should we pursue?
- How are campaigns performing, based on actual dollars raised?

To help fundraising professionals get answers to these questions, we are excited to share our new Tableau Dashboards for Nonprofit Fundraising, which leverage the power of Tableau and the Nonprofit Success Pack (NPSP). Product Manager Mike Best had clear directives for this initiative: “Our customers told us they wanted a holistic picture of their fundraising effectiveness that was not only easy to understand, but—just as important—easy to implement.” Best worked with colleagues who previously worked in development operations, with customers, and with analytics experts to help fundraising professionals deploy the information they need more quickly in their work.

For Eric Dayton at buildOn, that meant being able to move full speed ahead with Tableau for Fundraising to unlock their donor data. “We were able to quickly deploy Tableau Starter Dashboards for Salesforce Nonprofit Cloud to unlock our donor data, forecast more effectively, and visualize revenue performance. Analytics allow us to make data-driven decisions across teams, which will allow us to navigate 2021 with greater impact.”

Here are three ways the Tableau Dashboards for Nonprofit Fundraising can help you unlock your data and make decisions for the future of your organization with confidence.

1. Understand & visualize revenue health
Are you fielding questions like, “How’s the forecast?” or “Are our average donor contributions going up or down?” Being able to easily visualize and share this information builds toward a more data-driven culture. Clean data that’s easy to see and interpret helps streamline the decision-making process in times of uncertainty and makes that uncertainty easier to navigate. The fundraising overview dashboard helps you quickly scan revenue against a benchmark of last year or your revenue goals for the end of this year, uncovering both risks and opportunities. You can then dive into monthly performance based on the number of donors, location, average gift amount, and the value of each donor or type of campaign.

(Tableau Dashboards for Nonprofit Fundraising: revenue overview.)

2. Create effective fundraising strategies
Once you know that revenue is going down in July, or that you’re off course to hit an important goal, the next step is to figure out where you might be able to shift strategies and get back on track. The next tab helps you compare key statistics around new, retained, reactivated, recurring, and lapsed donors. This worksheet gives you a quick snapshot of donors, revenue, average gift amount, and more. Fundraising leaders can even zone in on a type of donation (say, mid-level) to scan where that money is coming from.

(Tableau Dashboards for Nonprofit Fundraising: donor acquisition, retention, and churn.)

3. Drive campaign performance
Once you have a strategy to reach those donors, it’s incredibly valuable to be able to see how those campaigns or changes to channel tactics are performing. With the campaign efficacy dashboard, you can understand which campaigns drive the most revenue, benchmark against campaigns with similar strategies or launched in close proximity to each other, and glean campaign trends over time.

(Tableau Dashboards for Nonprofit Fundraising: campaign efficacy.)

With Tableau Dashboards for Nonprofit Fundraising, you can democratize data and drive decisions that help your mission thrive in the new normal. Tableau helps you take the next step to give your people the power of data, whether they’re a fundraising leader reading a report on their phone or a funder visiting your website. To learn more about Tableau and ways to integrate and scale your analytics, check out the Tableau Basics for Nonprofits trail on Trailhead.

#MSBuild2019: Microsoft launches new products to secure elections and political campaigns

Sugandha Lahoti
07 May 2019
2 min read

It seems big tech giants are getting serious about protecting election integrity and adopting data protection measures. At the ongoing Microsoft Build 2019 developer conference, CEO Satya Nadella announced ElectionGuard, a free open-source software development kit (SDK), as an extension of Microsoft’s Defending Democracy Program.

ElectionGuard SDK

ElectionGuard is an open-source SDK and voting system reference implementation that was developed in partnership with Galois. The SDK will give voting system vendors the ability to enable end-to-end verifiability and improved risk-limiting audit capabilities for elections in their systems. It will be offered free to voting system vendors, either to integrate into their existing systems or to use to build all-new election systems.

“One of the things we want to ensure is real transparency and verifiability in election systems. And so this is an open source project that will be alive on GitHub by the end of this month, which will even bring some new technology from Microsoft Research around homomorphic encryption, so that you can have the software stack that can modernize all of the election infrastructure everywhere in the world,” Nadella said onstage at Microsoft’s annual Build developer conference in Seattle. The ElectionGuard SDK and reference implementation will be available on GitHub in June, just ahead of the EU elections. (A toy illustration of homomorphic tallying follows the links below.)

Microsoft 365 for Campaigns

Microsoft 365 for Campaigns brings the security capabilities of Microsoft 365 Business to political parties and individual candidates. M365 for Campaigns will be rolled out to customers this summer for $5 per user per month, and any campaign using it will have free access to Microsoft’s AccountGuard service. Microsoft says it will be affordable and “preconfigured to optimize for the unique operating environments campaigns face.” Starting next month, M365 for Campaigns will be available to all federal election campaign candidates, federal candidate committees, and national party committees in the United States.

Microsoft Build is in its 6th year and will continue until 8th May. The conference hosts over 6,000 attendees, with nearly 500 student-age developers and over 2,600 customers and partners in attendance. Watch this space for more coverage of Microsoft Build 2019.

Read next:
Microsoft introduces Remote Development extensions to make remote development easier on VS Code
Docker announces a collaboration with Microsoft’s .NET at DockerCon 2019
How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]
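ElectionGuard's actual cryptography is its own design (Microsoft points to homomorphic encryption work from Microsoft Research), so the snippet below is only a toy illustration of the general idea behind homomorphic tallying: individually encrypted ballots can be combined into an encrypted total, and only the total is ever decrypted. It uses the Paillier cryptosystem with deliberately tiny, insecure parameters.

```python
# Toy additively homomorphic tally with Paillier (requires Python 3.9+).
# Insecure parameters on purpose; for illustration only.
import math
import random

p, q = 293, 433                      # toy primes; never use sizes like this
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2)


def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n


# Each ballot encrypts 1 (a vote for the candidate) or 0 (no vote).
ballots = [encrypt(v) for v in [1, 0, 1, 1, 0, 1]]

# Homomorphic addition: multiplying ciphertexts adds the plaintexts, so the
# tally is computed without ever decrypting an individual ballot.
encrypted_tally = 1
for c in ballots:
    encrypted_tally = (encrypted_tally * c) % n2

print("decrypted tally:", decrypt(encrypted_tally))   # -> 4
```

Real end-to-end verifiable systems additionally publish proofs that each ballot encrypts a valid value and that the final decryption was performed correctly; packaging that machinery for vendors is what an SDK like ElectionGuard is meant to do.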

Paper in Two minutes: A novel method for resource efficient image classification

Sugandha Lahoti
23 Mar 2018
4 min read

This ICLR 2018 accepted paper, Multi-Scale Dense Networks for Resource Efficient Image Classification, introduces a new model for performing image classification with limited computational resources at test time. The paper is authored by Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. The 6th annual ICLR conference is scheduled to take place between April 30 and May 3, 2018.

Using a multi-scale convolutional neural network for resource-efficient image classification

What problem is the paper attempting to solve?

Recent years have witnessed a surge in demand for applications of visual object recognition, for instance in self-driving cars and content-based image search. This demand is driven by the astonishing progress of convolutional networks (CNNs), where state-of-the-art models may have even surpassed human-level performance. However, most of these are complex models with high computational demands at inference time. In real-world applications, computation is never free; it directly translates into power consumption, which should be minimized for environmental and economic reasons. Ideally, systems should automatically use small networks when test images are easy or computational resources are limited, and use big networks when test images are hard or computation is abundant.

To enable resource-efficient image recognition, the authors aim to develop CNNs that slice the computation and process these slices one by one, stopping the evaluation once the CPU time is depleted or the classification is sufficiently certain. Unfortunately, CNNs learn the data representation and the classifier jointly, which leads to two problems:
- The features in the last layer are extracted directly to be used by the classifier, whereas earlier features are not.
- The features in different layers of the network may have different scales. Typically, the first layers of deep nets operate on a fine scale (to extract low-level features), whereas later layers transition to coarse scales that allow global context to enter the classifier.

The authors propose a novel network architecture that addresses both problems through careful design changes, allowing for resource-efficient image classification.

Paper summary

The model is based on a multi-scale convolutional neural network similar to the neural fabric, but with dense connections and a classifier at each layer. This novel network architecture, called Multi-Scale DenseNet (MSDNet), addresses both of the problems described above (classifiers altering the internal representation, and the lack of coarse-scale features in early layers) for resource-efficient image classification. The network uses a cascade of intermediate classifiers throughout. The first problem is addressed through the introduction of dense connectivity: by connecting all layers to all classifiers, features are no longer dominated by the most imminent early exit, and the trade-off between early and later classification can be handled elegantly as part of the loss function. The second problem is addressed by adopting a multi-scale network structure: at each layer, features of all scales (fine to coarse) are produced, which facilitates good classification early on while also extracting low-level features that only become useful after several more layers of processing.

Key takeaways
- MSDNet is a novel convolutional network architecture optimized to incorporate CPU budgets at test time.
- The design is based on two high-level principles: generate and maintain coarse-level features throughout the network, and interconnect the layers with dense connectivity.
- The final network design is a two-dimensional array of horizontal and vertical layers, which decouples depth and feature coarseness. Whereas in traditional convolutional networks features only become coarser with increasing depth, the MSDNet generates features of all resolutions from the first layer on and maintains them throughout.
- Through experiments, the authors show that their network outperforms all competitive baselines across an impressive range of budgets, from highly limited CPU constraints to almost unconstrained settings.

A minimal sketch of the early-exit inference idea follows the reviewer summary below.

Reviewer feedback summary

Overall score: 25/30. Average score: 8.33.

The reviewers found the approach natural and effective, with good results, and the presentation clear and easy to follow. The structure of the network was clearly justified. The reviewers found the use of dense connectivity to avoid the loss of performance from using early-exit classifiers interesting. They appreciated the results and found them quite promising, with 5x speed-ups at the same or better accuracy than previous models. However, some reviewers pointed out that the results about the more efficient densenet* could have been shown in the main paper.
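The sketch below is not the authors' MSDNet code; it is a minimal PyTorch illustration of the budgeted-inference idea the paper builds on: attach a classifier ("exit") after each block and, at test time, stop computing as soon as an exit is confident enough. The layer sizes and threshold are arbitrary, and MSDNet's multi-scale features and dense connectivity are omitted here.

```python
# Minimal early-exit ("anytime") inference sketch in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
            nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
        ])
        # One linear classifier ("exit") on globally pooled features per block.
        self.exits = nn.ModuleList([nn.Linear(c, num_classes) for c in (32, 64, 128)])

    def forward(self, x, threshold=0.9):
        # Anytime inference for a single image (batch of 1): stop at the
        # first exit whose softmax confidence clears the threshold.
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = block(x)
            logits = exit_head(F.adaptive_avg_pool2d(x, 1).flatten(1))
            confidence, prediction = F.softmax(logits, dim=1).max(dim=1)
            if confidence.item() >= threshold or i == len(self.blocks) - 1:
                return prediction, i  # predicted class and the exit actually used


model = EarlyExitNet().eval()
with torch.no_grad():
    label, exit_used = model(torch.randn(1, 3, 32, 32))
print(f"predicted class {label.item()} via exit {exit_used}")
```

During training, every exit would receive a (possibly weighted) classification loss, which is how approaches like MSDNet trade off early accuracy against final accuracy.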