
Tech News - Data

1209 Articles

IBM halts sales of Watson AI tool for drug discovery amid tepid growth: STAT report

Fatema Patrawala
19 Apr 2019
3 min read
STAT reported yesterday that IBM is halting sales of its "Watson for Drug Discovery" machine learning tool, according to sources within the company. Per the STAT report, IBM is giving up its efforts to develop and sell the Drug Discovery technology because of "sluggish sales." No one seems to have told IBM's web team, though: the product information pages are still up on the IBM website, and they are worth a look to see how heavily IBM over-promised the product. According to those pages, IBM Watson Health uses AI software to help companies reveal connections and relationships among genes, drugs, diseases, and other entities by analyzing multiple sets of life sciences knowledge. But according to an IEEE Spectrum report, IBM's entire foray into health care has been marked by a familiar combination of over-promising and under-delivery.

However, the service isn't shutting down completely. IBM spokesperson Ed Barbini told The Register: "We are not discontinuing our Watson for Drug Discovery offering, and we remain committed to its continued success for our clients currently using the technology. We are focusing our resources within Watson Health to double down on the adjacent field of clinical development where we see an even greater market need for our data and AI capabilities."

In other words, it appears the product won't be sold to any new customers, but organizations that want to continue using the system will still be supported. "The offering is staying on the market, and we'll work with clients who want to team with IBM in this area. But our future efforts will be more focused on clinical trials – it's a much bigger market and better use of our technology and tools," according to IBM.

The Drug Discovery service is made up of several different products or "modules," such as a search engine that lets chemists crawl scientific abstracts for information on a specific gene or chemical compound. There is also a knowledge network that describes relationships between drugs and diseases.

IBM's Health division has been crumbling for a while. IBM Watson Health's Oncology AI software dished out incorrect and unsafe recommendations during beta testing. Adding to the company's worries, Deborah DiSanzo, IBM's head of Watson Health, stepped down from her position in October last year.

IBM CEO, Ginni Rometty, on bringing HR evolution with AI and its predictive attrition AI
IBM announces the launch of Blockchain World Wire, a global blockchain network for cross-border payments
Diversity in Faces: IBM Research's new dataset to help build facial recognition systems that are fair


Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)

Natasha Mathur
17 Jan 2019
3 min read
It was last October when MongoDB announced that it was switching to the Server Side Public License (SSPL). Now, the news of Red Hat removing MongoDB from Red Hat Enterprise Linux and Fedora over its SSPL license has been gaining attention.

Tom Callaway, University Outreach team lead at Red Hat, mentioned in a note earlier this week that Fedora does not consider MongoDB's Server Side Public License v1 (SSPL) a Free Software License. He further explained that the SSPL is "intentionally crafted to be aggressively discriminatory towards a specific class of users. To consider the SSPL to be 'Free' or 'Open Source' causes that shadow to be cast across all other licenses in the FOSS ecosystem, even though none of them carry that risk".

The first instance of Red Hat removing MongoDB happened back in November 2018 when its RHEL 8.0 beta was released. The RHEL 8.0 beta release notes explicitly mentioned that MongoDB was removed because of the SSPL.

Apart from Red Hat, Debian also dropped MongoDB from the Debian archive last month over similar concerns. "For clarity, we will not consider any other version of the SSPL beyond version one. The SSPL is clearly not in the spirit of the DFSG (Debian's free software guidelines), let alone complimentary to the Debian's goals of promoting software or user freedom", mentioned Chris Lamb, Debian Project Leader. Debian developer Apollon Oikonomopoulos also mentioned that MongoDB 3.6 and 4.0 will be supported longer, but that Debian will not be distributing any SSPL-licensed software. He noted that keeping the last AGPL-licensed version (3.6.8 or 4.0.3) without the ability to "cherry-pick upstream fixes is not a viable option". That being said, MongoDB 3.4 will remain part of Debian as long as it is licensed under the AGPL (MongoDB's previous license).

MongoDB's decision to move to the SSPL was driven by cloud providers exploiting its open source code. The SSPL adds an explicit condition: companies wanting to use, review, modify, or redistribute MongoDB as a service must open source the software they use to do so. This, in turn, sparked a debate across the industry and the open source community over whether MongoDB is still open source.

https://twitter.com/mjasay/status/1082428001558482944

Also, MongoDB's adoption of the SSPL forces companies to either go open source or choose MongoDB's commercial products. "It seems clear that the intent of the license author is to cause Fear, Uncertainty, and Doubt towards commercial users of software under that license," mentioned Callaway.

https://twitter.com/mjasay/status/1083853227286683649

MongoDB acquires mLab to transform the global cloud database market and scale MongoDB Atlas
MongoDB Sharding: Sharding clusters and choosing the right shard key [Tutorial]
MongoDB 4.0 now generally available with support for multi-platform, mobile, ACID transactions and more


New MapR Platform 6.0 powers DataOps

Sugandha Lahoti
22 Nov 2017
3 min read
MapR Technologies Inc has announced the release of a new version of its Converged Data Platform. The new MapR Platform 6.0 focuses on DataOps to increase the value of data by bringing together functions from across an enterprise. DataOps is an approach to improving the quality and reducing the life cycle time of data analytics for big data applications. MapR Platform 6.0 offers the entire DataOps team in an organization (data scientists, data engineers, systems administrators, and cluster operators) a unified management solution.

Some top releases and features of the platform include:

- The MapR Control System (MCS), a new centralized control system that converges all data sources and types from multiple backends. It is built on the Spyglass Initiative and provides a unified management solution for the data stored in the MapR platform, including files, JSON tables, and streaming data. MapR 6.0 MCS also comes with:
  - A quick-glance cluster dashboard
  - Resource utilization by node and by service
  - Capacity planning using storage utilization trends and per-tenant usage
  - Easy setup of replication, snapshots, and mirrors
  - The ability to manage cluster events with related metrics and expert recommendations
  - Direct access to default metrics and pre-filtered logs
  - The power to manage MapR Streams and configure replicas
  - Access to MapR-DB tables, indexes, and change logs
  - Intuitive mechanisms to set up volume, table, and stream ACEs for access control
- MapR Monitoring, which uses MapR Streams in its core architecture to build a customizable, scalable, and extensible monitoring framework.
- The latest release of MapR-DB 6.0, a multi-model database built for data-intensive applications such as real-time streaming, operational workloads, and analytical applications.
- The MapR Data Science Refinery, which provides scalable data science tools to help organizations generate insights from their data and convert them into operational applications. It provides access to all platform assets, including app servers, web servers, and other client nodes and apps, and ships with 8 visualization libraries, including Matplotlib and ggplot2. In addition, Apache Spark connectors are provided for interacting with both MapR-DB and MapR-ES.
- A preconfigured Docker container for using MapR as a data store. The stateful containers offer easy deployment solutions, apart from being secure and extensible.
- Native integration between MapR-ES and ML libraries, letting organizations create real-time pipelines for machine learning applications and apply ML models to real-time data.
- Single-click security enhancements, cloud-scale multi-tenancy, and MapR volume metrics, available via an extensible volume dashboard in Grafana.

The MapR Platform 6.0 is available now. For cloud providers such as Microsoft Azure, Amazon Web Services, and Oracle Cloud, version 6.0 will be available before the end of this year. For more information about the product, you can visit the official documentation here.


Google gives Artificial Intelligence full control over cooling its data centers

Sugandha Lahoti
20 Aug 2018
2 min read
Google, in collaboration with DeepMind, is handing control of cooling several of its data centers entirely to an AI algorithm. Since 2016, the two have been using an AI-powered recommendation system (developed by Google and DeepMind) to improve the energy efficiency of Google's data centers. This system made recommendations to data center managers, leading to energy savings of around 40 percent in those cooling systems. Now, Google is handing control over completely to cloud-based AI systems.

https://twitter.com/mustafasuleymn/status/1030442412861218817

How Google's safety-first AI system works

Google's previous AI engine required too much operator effort and supervision to implement its recommendations, so the team explored a new system that could deliver similar energy savings without manual implementation. Here's how the algorithm does it. A large number of sensors are embedded in the cooling center. Every five minutes, the cloud-based AI system pulls a snapshot of the data center from those sensors. It feeds this snapshot into deep neural networks, which predict how different combinations of potential actions will affect future energy consumption. The AI system then identifies the actions that will minimize energy consumption while satisfying safety constraints. Those actions are sent back to the data center, where they are verified by the local control system and then implemented. (A minimal sketch of this loop appears below.)

To ensure safety and reliability, the system uses eight different mechanisms to ensure it behaves as intended at all times while improving energy savings. The system is already delivering consistent energy savings of around 30 percent on average, with further improvements expected.

Source: DeepMind Blog

In the long term, Google wants to apply this technology in other industrial settings and help tackle climate change on an even grander scale. You can read more about their safety-first AI on DeepMind's blog.

DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy
Why DeepMind made Sonnet open source
How Google's DeepMind is creating images with artificial intelligence
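Neither the article's sources nor DeepMind's blog post includes code, so the sketch below is purely illustrative: it mirrors the five-minute snapshot / predict / verify cycle described above. Every name in it (sensors, model, safety_checker, actuator) is a hypothetical stand-in, not a real API.

```python
import time

SNAPSHOT_INTERVAL_SECONDS = 300  # the reports describe a five-minute cycle

def control_loop(sensors, model, safety_checker, actuator):
    """Hypothetical snapshot -> predict -> verify -> act loop."""
    while True:
        # 1. Pull a snapshot of current sensor readings from the data center.
        snapshot = sensors.read_all()

        # 2. Score candidate action combinations with the learned model.
        candidates = actuator.enumerate_actions()
        scored = [(model.predict_energy(snapshot, a), a) for a in candidates]

        # 3. Keep only actions that satisfy the safety constraints,
        #    then pick the one with the lowest predicted energy use.
        safe = [(cost, a) for cost, a in scored
                if safety_checker.allows(snapshot, a)]
        if safe:
            _, best_action = min(safe, key=lambda pair: pair[0])
            # 4. The local control system verifies the action before applying it.
            if actuator.local_verify(best_action):
                actuator.apply(best_action)

        time.sleep(SNAPSHOT_INTERVAL_SECONDS)
```

The real system layers additional safeguards (the eight mechanisms mentioned above) on top of a loop of roughly this shape.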


A new geometric deep learning extension library for PyTorch releases!

Sunith Shetty
19 Jun 2018
2 min read
PyTorch Geometric is a new geometric deep learning extension library for PyTorch. With this library, you can perform deep learning on graphs and other irregular graph structures using the various methods and features the library offers. It also provides an easy-to-use mini-batch loader and helpful transforms for complex operations, plus a large number of ready-made datasets for building your own simple interfaces. All of these features work on arbitrary graphs as well as on 3D meshes and point clouds. (A minimal usage sketch follows the list below.)

The following methods are currently implemented in the library, each with an accompanying research paper for further details:

- SplineConv: spline-based CNNs for irregularly structured and geometric input (e.g., graphs or meshes)
- GCNConv: a scalable approach to semi-supervised learning on graph-structured data
- ChebConv: a generalized CNN model with fast localized spectral filtering on graphs
- NNConv: a neural message passing algorithm for quantum chemistry
- GATConv: graph attention networks that operate on graph-structured data
- AGNNProp: attention-based graph neural networks for graph-based semi-supervised learning
- SAGEConv: representation learning on large graphs, achieving strong results in a variety of prediction tasks
- Graclus Pooling: weighted graph cuts without eigenvectors
- Voxel Grid Pooling

To learn more about installation, data handling mechanisms, and the full list of implemented methods and datasets, you can refer to the documentation. For simple hands-on examples to practice with, see the examples/ directory. The library is currently in its first alpha release; you can contribute to the project by raising an issue if you notice anything unexpected.

Can a production ready Pytorch 1.0 give TensorFlow a tough time?
Is Facebook-backed PyTorch better than Google's TensorFlow?
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
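As a taste of the library's style, here is a minimal sketch that runs a single GCNConv layer over a tiny made-up graph. It assumes PyTorch and PyTorch Geometric are installed; the graph, feature sizes, and channel counts are invented for illustration.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# A tiny undirected graph with 3 nodes and 2 edges
# (each edge is listed in both directions).
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 16)  # 3 nodes, 16 input features each
data = Data(x=x, edge_index=edge_index)

# One graph convolution layer: 16 input channels -> 32 output channels.
conv = GCNConv(in_channels=16, out_channels=32)
out = conv(data.x, data.edge_index)
print(out.shape)  # torch.Size([3, 32])
```

Note that, around the time of the alpha release, installation also required companion packages such as torch-scatter and torch-sparse; see the documentation for the exact steps.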


Elastic Stack 6.7 releases with Elastic Maps, Elastic Uptime and much more!

Amrata Joshi
27 Mar 2019
3 min read
Yesterday, the team at Elastic released Elastic Stack 6.7, a group of open source products from Elastic designed to help users take data from any type of source and visualize it in real time.

What's new in Elastic Stack 6.7?

Elastic Maps
Elastic Maps is a new dedicated solution for mapping, querying, and visualizing geospatial data in Kibana. It expands on the existing geospatial visualization options in Kibana with features such as visualization of multiple layers and data sources in the same map, dynamic data-driven styling on vector layers, mapping of both aggregate and document-level data, and much more. Elastic Maps also embeds the query bar with autocomplete for real-time ad hoc search.

Elastic Uptime
This release introduces Elastic Uptime, which makes it easy to detect when application services are down or responding slowly. It notifies users about problems before those services are called by the application.

Cross Cluster Replication (CCR)
Cross Cluster Replication (CCR), which covers a variety of use cases including cross-datacenter and cross-region replication, is now generally available.

Index Lifecycle Management (ILM)
With this release, Index Lifecycle Management (ILM) is generally available and ready for production use. ILM helps Elasticsearch admins define and automate lifecycle management policies, such as how data is managed and moved between the hot, warm, cold, and deletion phases as it ages.

Elasticsearch SQL
Elasticsearch SQL lets users interact with and query their Elasticsearch data using SQL. Its functionality includes the JDBC and ODBC clients, which allow third-party tools to connect to Elasticsearch as a backend datastore. With this release, Elasticsearch SQL becomes generally available. (A small query sketch follows at the end of this piece.)

Canvas
Canvas, which helps users showcase and present live data from Elasticsearch with pixel-perfect precision, becomes generally available with this release.

Kibana localization
This release includes Kibana's first localization, now available in simplified Chinese. Kibana also introduces a new localization framework that provides support for additional languages.

Functionbeat
Functionbeat is a Beat that deploys as a function in serverless computing frameworks and streams cloud infrastructure logs and metrics into Elasticsearch. Functionbeat is now generally available; it supports the AWS Lambda framework and can stream data from CloudWatch Logs, SQS, and Kinesis.

Upgrade Assistant
The Upgrade Assistant helps users prepare their existing Elastic Stack environment for the upgrade to 7.0. It includes both APIs and UIs and works as an important cluster checkup tool to help plan the upgrade, identifying things like deprecation warnings to enable a smoother upgrade experience.

To know more about this release, check out Elastic's blog post.

Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Core CPython developer unveils a new project that can analyze his phone's 'silent connections'
How to handle backup and recovery with PostgreSQL 11 [Tutorial]
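As a flavor of the SQL interface, here is a minimal sketch that posts a query over the REST API using Python's requests library. The host, index name (web_logs), and fields are invented for illustration, and the _xpack/sql path reflects the 6.x endpoint (it moved to /_sql in 7.0).

```python
import requests

# Hypothetical local cluster and index; adjust host, auth, and index to taste.
resp = requests.post(
    "http://localhost:9200/_xpack/sql",
    params={"format": "txt"},
    json={
        "query": (
            "SELECT page, AVG(response_ms) AS avg_ms "
            "FROM web_logs GROUP BY page ORDER BY avg_ms DESC LIMIT 5"
        )
    },
)
print(resp.text)
```

The same query can equally be issued through the JDBC or ODBC clients mentioned above.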

Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now Open source

Fatema Patrawala
23 Aug 2018
2 min read
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. The method uses a generative adversarial learning framework to generate high-resolution, photorealistic, and temporally coherent results from various input formats, including segmentation masks, sketches, and poses.

There has been less research into video-to-video synthesis than into image-to-image translation. Video-to-video synthesis aims to solve the low visual quality and incoherence of video results in existing image synthesis approaches. The research group proposed a novel video-to-video synthesis approach capable of synthesizing 2K-resolution videos of street scenes up to 30 seconds long. The authors performed an extensive experimental validation on various datasets, and the model showed better results than existing approaches from both quantitative and qualitative perspectives. When the method was extended to multimodal video synthesis with identical input data, it produced new visual properties with high resolution and coherency.

The researchers suggested the model may be improved in the future by adding additional 3D cues such as depth maps to better synthesize turning cars; by using object tracking to ensure an object maintains its color and appearance throughout the video; and by training with coarser semantic labels to solve issues in semantic manipulation.

The Video-to-Video Synthesis paper is on arXiv, and the team's model and data can be found on the GitHub page.

NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers
Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
Baidu announces ClariNet, a neural network for text-to-speech synthesis


SAP creates AI ethics guidelines and forms an advisory panel

Prasad Ramesh
20 Sep 2018
3 min read
"The danger of AI is much greater than the danger of nuclear warheads, by a lot," said Elon Musk.

SAP, a market leader in enterprise software, became the first European technology company to create an AI ethics advisory panel when it made the announcement on Tuesday. The company has announced a set of guiding principles and has formed an external artificial intelligence (AI) ethics advisory panel of five board members.

What are the guidelines?

The guidelines revolve around recognizing AI's significant impact on people and society. SAP says it designed these guidelines to "help the world run better and improve people's lives". The seven guidelines, as stated on the SAP website, are:

- We are driven by our values.
- We design for people.
- We enable business beyond bias.
- We strive for transparency and integrity in all that we do.
- We uphold quality and safety standards.
- We place data protection and privacy at our core.
- We engage with the wider societal challenges of artificial intelligence.

Who is on the AI ethics advisory board?

The advisory panel comprises experts from fields such as academia, politics, and industry. It exists to ensure the adoption of the principles and to develop them further in collaboration with the SAP AI steering committee. Its members include theology professors, chairmen, law and policy professors, IT professors, scholars, and researchers:

- Dr. theol. Peter Dabrock, Chair of Systematic Theology (Ethics), University of Erlangen-Nuernberg
- Dr. Henning Kagermann, Chairman, acatech Board of Trustees; acatech Senator
- Susan Liautaud, Lecturer in Public Policy and Law, Stanford; Founder and Managing Director, Susan Liautaud & Associates Limited (SLAL)
- Dr. Helen Nissenbaum, Professor, Cornell Tech Information Science
- Nicholas Wright, Consultant, Intelligent Biology; Affiliated Scholar with the Pellegrino Center for Clinical Bioethics, Georgetown University Medical Center; Honorary Research Associate at the Institute of Cognitive Neuroscience, University College London

Together with the guidelines, SAP's internal committee and the newly formed external panel aim to ensure that the AI capabilities in SAP Leonardo Machine Learning are used to maintain "integrity and trust" in all its solutions.

Implementation of AI ethics

SAP believes the guiding principles also contribute to the AI debate in Europe. Markus Noga, senior vice president, Machine Learning, SAP, has been appointed to the high-level AI expert group of the European Commission. This European AI expert group was created to design an AI strategy and purpose, with ethical guidelines relating to fairness, safety, and transparency, by early 2019.

Luka Mucic, Chief Financial Officer and member of the Executive Board of SAP SE, stated: "SAP considers the ethical use of data a core value. We want to create software that enables the intelligent enterprise and actually improves people's lives. Such principles will serve as the basis to make AI a technology that augments human talent."

For more information, visit the SAP website and read their guiding principles for artificial intelligence.

SapFix and Sapienz: Facebook's hybrid AI tools to automatically find and fix software bugs
Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms
What makes functional programming a viable choice for artificial intelligence projects?


Facebook introduces two new AI-powered video calling devices “built with Privacy + Security in mind”

Sugandha Lahoti
09 Oct 2018
4 min read
Yesterday, Facebook launched two brand new video communication devices. Named Portal and Portal+, these devices let you video call anyone, with richer, hands-free experiences. The Portal features a 10-inch 1280 x 800 display, while the Portal+ features a 15-inch 1920 x 1080 display.

Both devices are powered by artificial intelligence, including Smart Camera and Smart Sound technology. Smart Camera stays with the action, automatically panning and zooming to keep everyone in view. Smart Sound minimizes background noise and enhances the voice of whoever is talking, no matter where they move.

Source: Facebook

Portal can also be used to call Facebook friends and connections on Messenger even if they don't have a Portal, and it supports group calls of up to seven people at the same time. Portal also offers hands-free voice control with Amazon Alexa built in, which can be used to track sports scores, check the weather, control smart home devices, order groceries, and more. Facebook has also enabled shared activities on its Portal devices by partnering with Spotify Premium, Pandora, iHeartRadio, Facebook Watch, Food Network, and Newsy.

Keeping in mind its security breach that affected 50 million users two weeks ago, Facebook says it has paid close attention to privacy and security features. Per the company's website:

"We designed Portal with tools that give you control: You can completely disable the camera and microphone with a single tap. Portal and Portal+ also come with a camera cover, so you can easily block your camera's lens at any time and still receive incoming calls and notifications, plus use voice commands. To manage Portal access within your home, you can set a four- to 12-digit passcode to keep the screen locked. Changing the passcode requires your Facebook password. We also want to be upfront about what information Portal collects, help people understand how Facebook will use that information and explain the steps we take to keep it private and secure: Facebook doesn't listen to, view, or keep the contents of your Portal video calls. In addition, video calls on Portal are encrypted. For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal's camera doesn't use facial recognition and doesn't identify who you are. Like other voice-enabled devices, Portal only sends voice commands to Facebook servers after you say, 'Hey Portal.' You can delete your Portal's voice history in your Facebook Activity Log at any time."

For all of the above, Facebook remains quite cryptic about audio data, and it doesn't really explain how it will use the information it collects from users. Voice data is stored on Facebook's servers by default, probably to improve Portal's understanding of a user's language quirks and needs. But it does make one wonder: should this be opt-in rather than opt-out by default? Another jarring aspect is the need for one's Facebook password to change the device's passcode. It makes the new devices feel like yet another way for Facebook to add users to Facebook, not to mention that Facebook just had a data breach on its site, the repercussions of which it is still investigating.

In an interesting poll conducted on Twitter by Dr. Jen Golbeck, professor at UMD, over 63% of respondents said they would not trust Facebook to responsibly operate a surveillance device in their home.

https://twitter.com/jengolbeck/status/1049343277110054912

Read more about the devices in Facebook's announcement.

Facebook Dating app to release as a test version in Colombia
Facebook's Glow, a machine learning compiler, to be supported by Intel, Qualcomm and others
How Facebook is advancing artificial intelligence [Video]


The case for data communities: Why it takes a village to sustain a data-driven business

Anonymous
04 Dec 2020
9 min read
Editor's note: This article by Kristin Adderson originally appeared in Forbes (Forbes BrandVoice) on December 4, 2020.

Data is inseparable from the future of work as more organizations embrace data to make decisions, track progress against goals, and innovate their products and offerings. But to generate data insights that are truly valuable, people need to become fluent in data: to understand the data they see and participate in conversations where data is the lingua franca. Just as a professional who takes a job abroad needs to immerse herself in the native tongue, businesses that value data literacy need ways to immerse their people in the language of data.

"The best way to learn Spanish is to go to Spain for three weeks," said Stephanie Richardson, vice president of Tableau Community. "It is similar when you're learning the language of data. In a data community, beginners can work alongside people who know data and know how to analyze it. You're going to have people around you that are excited. You're going to see the language being used at its best. You're going to see the potential."

Data communities, networks of engaged data users within an organization, represent a way for businesses to create conditions where people can immerse themselves in the language of data, encouraging data literacy and fueling excitement around data and analytics.

The best data communities provide access to data and support its use with training sessions and technical assistance, but they also build enthusiasm through programs like internal competitions, user group meetings, and lunch-and-learns. Community brings people together from across the organization to share learnings, ideas, and successes. These exchanges build confidence and camaraderie, lifting morale and uniting people around a shared mission of improving the business with data.

Those who have already invested in data communities are reaping the benefits, even during a global pandemic. People have the data training they need to act quickly in a crisis and know where to go when they have questions about data sources or visualizations, speeding up communication cycles. If building a new data community seems daunting during this time, there are small steps you can take to set a foundation for larger initiatives in the future.

Data communities in a work-from-home world

Before Covid-19, organizations knew collaboration was important. But now, when many work remotely, people are disconnected and further removed from business priorities. Data and analytics communities can be a unifying force that focuses people on the same goals and gives them a dedicated space to connect. For businesses wanting to keep their people active, engaged, and innovating with their colleagues, data communities are a sound investment.

"Community doesn't have to be face-to-face activities and big events," said Audrey Strohm, enterprise communities specialist at Tableau. "You participate in a community when you post a question to your organization's internal discussion forum—whenever you take an action to be in the loop."

Data communities are well suited for remote collaboration and virtual connection. Some traits of a thriving data community (fresh content, frequent recognition, and small, attainable incentives for participation) apply no matter where its members reside. Data communities can also spark participation by providing a virtual venue, such as an internal chat channel or forum, where members can discuss challenges or share advice. Instead of spending hours spinning in circles, employees can log on and ask a question, access resources, or find the right point of contact, all in a protected setting.

Inside a data community at JPMorgan Chase

JPMorgan Chase developed a data community to support data activities and to nurture a data culture. It emphasized immersion, rapid feedback, and a gamified structure with skill belts, a concept similar to how students of the martial arts advance through the ranks. Its story shows that, sometimes, a focus on skills is not enough; oftentimes, you need community support. Speaking at Tableau Conference 2019, Heather Gough, a software engineer at the financial services company, shared three tips based on the data community at JPMorgan Chase:

1. Encourage learners to develop skills with any kind of data. Training approaches that center on projects challenge learners to show off their skills with a data set that reflects their personal interests. This gives learners a chance to inject their own passion and keeps the projects interesting for the trainers who evaluate their skills.

2. Not everyone will reach the mountain top, and that's okay. Most participants don't reach the top skill tier. Even those who only advance partway through a skill belt or other data literacy program still learn valuable new skills they can talk about and share with others. That's the real goal, not the accumulation of credentials.

3. Sharing must be included in the design. Part of the progression through the ranks includes spending time sharing newly learned data skills with others. This practice scales as learners become more sophisticated, from fielding just a few questions at low levels to exchanging knowledge with other learners at the top tiers.

How to foster data communities and literacy

While you may not be able to completely shift your priorities to fully invest in a data community right now, you can lay the groundwork for one by taking a few steps, starting with these:

1. Focus on business needs
The most effective way to stir excitement and adoption of data collaboration is to connect analytics training and community-related activities to relevant business needs. Teach people how to access the most critical data sources, and showcase dashboards from across the company to show how other teams are using data. Struggling to adapt to new challenges? Bring people together from across business units to innovate and share expertise. Are your data resources going unused? Imagine if people in your organization were excited about using data to inform their decision making; they would seek out those resources rather than begrudgingly look once or twice. Are people still not finding useful insights in their data after being trained? Your people might need to see a more direct connection to their work. "Foundational data skills create a competitive advantage for individuals and organizations," said Courtney Totten, director of academic programs at Tableau. When these efforts are supported by community initiatives, you can address business needs faster because everyone is trained to look at the same metrics and works together to solve business challenges.

2. Empower Your Existing Data Leaders
The future leaders of your data community shouldn't be hard to find. Chances are, they are already in your organization, advocating for more opportunities to explore, understand, and communicate with data. Leaders responsible for building a data community do not have to be the organization's top data experts, but they should provide empathic guidance and inject enthusiasm. These people may have already set up informal structures to promote data internally, such as a peer-driven messaging channel. Natural enthusiasm and energy are extremely valuable for creating an authentic and thriving community. Find the people who have already volunteered to help others on their data journeys and give them a stake in the development and management of the community. A reliable leader will need to maintain the community platform and ensure that it keeps its momentum over time.

3. Treat Community Like a Strategic Investment
Data communities can foster more engagement with data assets: data sources, dashboards, and workbooks. But they can only make a significant impact when they're properly supported. "People often neglect some of the infrastructure that helps maximize the impact of engagement activities," Strohm said. "Community needs to be thought of as a strategic investment." Data communities need a centralized resource hub that makes it easy to connect from anywhere, share a wide variety of resources, and participate in learning modules. Other investments include freeing up a small amount of people's time to engage in the community and assigning a dedicated community leader. Some communities fail when people don't feel they can take time away from the immediate task at hand to really connect and collaborate. Communities also aren't sustainable when they're entirely run by volunteers. If you can't invest in a fully dedicated community leader at this time, consider opening up a small portion of someone's role so they can help build or run community programs.

4. Promote Participation at Every Level
Executive leadership needs to do more than just sponsor data communities and mandate data literacy. They need to be visible, model members. That doesn't mean fighting to the top of every skill tree. Executives should, however, engage in discussions about being accountable for data-driven decisions and be open to fielding tough questions about their own use of data. "If you're expecting your people to be vulnerable, to reach out with questions, to see data as approachable, you can help in this by also being vulnerable and asking questions when you have them," said Strohm.

5. Adopt a Data Literacy Framework
Decide what your contributors need to know to be considered data literate. The criteria may include learning database fundamentals and understanding the mathematical and statistical underpinnings of correlation and causation. Ready-made programs such as Tableau's Data Literacy for All provide foundational training across all skill levels.

Data communities give everyone in your organization a venue to collaborate on complex business challenges and reduce uncertainty. Ask your passionate data advocates what they need to communicate more effectively with their colleagues. Recruit participants who are eager to learn and share. And don't be afraid to pose difficult questions about business recovery and growth, especially as everyone continues to grapple with the pandemic. Communities rally around a common cause.

Visit Tableau.com to learn how to develop data communities and explore stories of data-driven collaboration.

Google launches the Enterprise edition of Dialogflow, its chatbot API

Sugandha Lahoti
20 Nov 2017
3 min read
Google has recently announced the Enterprise Edition of Dialogflow, its chatbot API. Dialogflow is Google's API for building chatbots and other conversational interfaces for mobile applications, websites, messaging platforms, and IoT devices. It uses machine learning and natural language processing in the backend to power its conversational interfaces, has built-in speech recognition support, and features new analytics capabilities. Now Google has extended the API to enterprises, allowing organizations to build these conversational apps for large-scale usage.

According to Google, Dialogflow Enterprise Edition is a premium pay-as-you-go service. It is targeted at organizations that need enterprise-grade services able to withstand changes in user demand, as opposed to the small and medium business owners and individual developers for whom the standard version suffices. The Enterprise Edition also boasts 24/7 support, SLAs, enterprise-level terms of service, and complete data protection, which is why companies are willing to pay a fee to adopt it.

Here's a quick overview of the differences between the standard and enterprise versions of Dialogflow:

Source: https://cloud.google.com/dialogflow-enterprise/docs/editions

Apart from this, the API is also part of Google Cloud, so it comes with the same support options provided to Cloud Platform customers. The Enterprise Edition additionally supports unlimited text and voice interactions and higher usage quotas than the free version. An Enterprise Edition agent can be created using the Google Cloud Platform Console, and adding, editing, or removing entities and intents in the agent can be done via console.dialogflow.com or with the Dialogflow V2 API. (A small example of calling the V2 API follows at the end of this piece.)

Here's a quick glance at some top features:

- Natural language understanding, allowing quick extraction of a user's intent and responses to it, for natural and rich interactions between users and businesses.
- Over 30 pre-built agents for quick and easy identification of custom entity types.
- An integrated code editor for building native serverless applications linked with conversational interfaces through Cloud Functions for Firebase.
- Integration with Google Cloud Speech, for voice interaction support in a single API.
- Cross-platform and multi-language agents, with 20+ languages supported across 14 different platforms.

Uniqlo has used Dialogflow to create its shopping chatbot. Here are the views of Shinya Matsuyama, Director of Global digital commerce, Uniqlo: "Our shopping chatbot was developed using Dialogflow to offer a new type of shopping experience through a messaging interface, and responses are constantly being improved through machine learning. Going forward, we are also looking to expand the functionality to include voice recognition and multiple languages."

According to the official documentation, the project is still in beta, so it is not intended for real-time usage in critical applications. You can learn more about the project, along with quickstarts, how-to guides, and tutorials, in the documentation.
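To give a flavor of what agent interaction looks like from code, here is a minimal sketch using the Python client library for the Dialogflow V2 API as it existed around this release (installed via pip install dialogflow). The project ID, session ID, and utterance are placeholders, not values from the announcement.

```python
import dialogflow_v2 as dialogflow

# Placeholders: substitute your own GCP project and a session ID of your choosing.
# Authentication comes from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
PROJECT_ID = "my-gcp-project"
SESSION_ID = "demo-session-1"

session_client = dialogflow.SessionsClient()
session = session_client.session_path(PROJECT_ID, SESSION_ID)

# Wrap the end user's utterance in a query input.
text_input = dialogflow.types.TextInput(
    text="What are your store hours?", language_code="en-US")
query_input = dialogflow.types.QueryInput(text=text_input)

# Send the utterance to the agent and read back the matched intent.
response = session_client.detect_intent(session=session, query_input=query_input)
print(response.query_result.intent.display_name)
print(response.query_result.fulfillment_text)
```

The same call can also be made over REST against the V2 endpoint of the form projects/{project}/agent/sessions/{session}:detectIntent.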


'Developers' lives matter': Chinese developers protest over the “996 work schedule” on GitHub

Natasha Mathur
29 Mar 2019
3 min read
Working long hours at a company, devoid of any work-life balance, is rife in China's tech industry. Earlier this week, on Tuesday, a GitHub user with the name "996icu" created a webpage, shared on GitHub, to protest against the "996" work culture in Chinese tech companies.

The "996" work culture is an unofficial work schedule that requires employees to work from 9 am to 9 pm, 6 days a week, totaling 60 hours of work per week. The 996icu webpage cites the Labor Law of the People's Republic of China, according to which an employer can ask its employees to work long hours due to the needs of production or business, but the prolonged work time should not exceed 36 hours a month. Also, as per the Labor Law, employees following the "996" work schedule should be paid 2.275 times their base salary. In reality, however, Chinese employees following the 996 rule rarely get paid that much.

GitHub users also called out companies like Youzan and Jingdong, which both follow the 996 rule. The webpage cites the example of a Jingdong PR post on the company's maimai (a Chinese business social network) account stating that "Our culture is to devote ourselves with all our hearts (to achieve the business objectives)".

The 996 work schedule started to gain popularity in recent years but has been a "secret practice" for quite a while. The 996icu webpage went viral online and ranked first on GitHub's trending page on Thursday. It has currently amassed more than 90,000 stars (GitHub's bookmarking mechanism). The post is also being widely shared on Chinese social media platforms such as Weibo and WeChat, where many users are talking about their experiences as tech workers who followed the 996 schedule.

This gladiatorial work environment in Chinese firms has long been a bone of contention. South China Morning Post writer Zheping Huang published a post sharing stories of different Chinese tech employees that shed light on the grotesque reality of China's Silicon Valley. One example is a 33-year-old Beijing native, Yang, who works as a product manager at a Chinese internet company and wakes up at 6 am every day for a two-and-a-half-hour commute to work. Another is Bu, a 20-something marketing specialist who relocated to an old complex near her workplace; she pays high rent, shares a room with two other women, and no longer has access to coffee shops or good restaurants.

A user named "discordance" on Hacker News commented on the GitHub protest, urging developers in China to move to better companies: "Leave your company, take your colleagues and start one with better conditions. You are some of the best engineers I've worked with and deserve better". Another user, "ceohockey60", commented: "The Chinese colloquial term for a developer is '码农'. Its literal English translation is 'code peasants' -- not the most flattering or respectful way to call software engineers. I've recently heard horror stories, where 9-9-6 is no longer enough inside one of the Chinese tech giants, and 10-10-7 is expected (10am-10pm, 7 days/week)".

The 996icu webpage states that people who "consistently follow the '996' work schedule.. run the risk of getting..into the Intensive Care Unit. Developers' lives matter".

What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019
China's Huawei technologies accused of stealing Apple's trade secrets, reports The Information
Is China's facial recognition powered airport kiosks an attempt to invade privacy via an easy flight experience


How Near Real Time (NRT) Applications work

Amarabha Banerjee
10 Nov 2017
6 min read
Editor's note: In this article by Shilpi Saxena and Saurabh Gupta, from their book Practical Real-time Data Processing and Analytics, we explore what a near real-time architecture looks like and how an NRT app works.

It's very important to understand the key aspects where traditional monolithic application systems fall short of serving the need of the hour:

- Backend DB: a single, monolithic point of data access.
- Ingestion flow: the pipelines are complex and tend to induce latency into the end-to-end flow.
- Failure recovery: the systems are failure prone, and the recovery approach is difficult and complex.
- Synchronization and state capture: it's very difficult to capture and maintain the state of facts and transactions in the system. Diversely distributed systems and real-time system failures further complicate the design and maintenance of such systems.

The answer to these issues is an architecture that supports streaming, and thus gives end users access to actionable insights in real time over ever-flowing in-streams of real-time fact data. Such an architecture provides:

- Local state and consistency of the system for large-scale, high-velocity systems
- Data that doesn't arrive at intervals but keeps flowing in, streaming all the time
- No single state of truth in the form of a backend database; instead, applications subscribe or tap into the stream of fact data

Before we delve further, it's worthwhile to understand the notion of time. Looking at this figure, it's very clear how the SLAs correlate with each type of implementation (batch, near real-time, and real-time) and the kinds of use cases each implementation caters to. For instance, batch implementations have SLAs ranging from a couple of hours to days, and such solutions are predominantly deployed for canned/pre-generated reports and trends. Near real-time solutions have SLAs on the order of a few seconds to hours and cater to situations requiring ad-hoc queries, mid-resolution aggregators, and so on. Real-time applications are the most mission-critical in terms of SLA and resolution: every event counts, and results have to return within an order of milliseconds to seconds.

Near real time (NRT) architecture

In its essence, NRT architecture consists of four main components/layers, as depicted in the following figure:

- The message transport pipeline
- The stream processing component
- The low-latency data store
- Visualization and analytical tools

The first step is collecting data from the source and providing it to the "data pipeline", which is actually a logical pipeline that collects the continuous events or streaming data from various producers and provides it to the consuming stream processing applications. These applications transform, collate, correlate, aggregate, and perform a variety of other operations on this live streaming data, and then finally store the results in the low-latency data store. A variety of analytical, business intelligence, and visualization tools and dashboards then read this data from the data store and present it to the business user. (A minimal end-to-end sketch of this pipeline appears at the end of this excerpt.)

Data collection

This is the beginning of the journey for all data processing. Be it batch or real time, the first and most forthright challenge is getting the data from its source to the system for processing. We can look at the processing unit as a black box, with data sources acting as publishers and consumers as subscribers. It's captured in the following diagram:

The key criteria for data collection tools, in the general context of big data and real time specifically, are as follows:

- Performance and low latency
- Scalability
- The ability to handle structured and unstructured data

Apart from this, a data collection tool should be able to cater to data from a variety of sources, such as:

- Data from traditional transactional systems: here we can duplicate the ETL process of these traditional systems and tap the data from the source, tap the data from the ETL systems, or (the third and better approach) adopt a virtual data lake architecture for data replication.
- Structured data from IoT devices, sensors, or CDRs: data that arrives at a very high velocity and in a fixed format, from a variety of sensors and telecom devices.
- Unstructured data from media files, text data, social media, and so on: the most complex of all incoming data, where the complexity spans the dimensions of volume, velocity, variety, and structure.

Stream processing

The stream processing component itself consists of three main sub-components:

- The broker, which collects and holds the events or data streams from the data collection agents.
- The processing engine, which actually transforms, correlates, and aggregates the data, and performs the other necessary operations.
- The distributed cache, which serves as the mechanism for maintaining a common data set across all distributed components of the processing engine.

The same aspects of the stream processing component are zoomed out and depicted in the diagram that follows. There are a few key attributes the stream processing component should cater to:

- Distributed components, thus offering resilience to failures
- Scalability, to cater to the growing needs of the application or a sudden surge of traffic
- Low latency, to handle the overall SLAs expected from such applications
- Easy operationalization of use cases, to be able to support evolving use cases
- Built for failures: the system should be able to recover from inevitable failures without any event loss, and should be able to reprocess from the point it failed
- Easy integration points with respect to off-heap/distributed caches or data stores
- A wide variety of operations, extensions, and functions to work with the business requirements of the use case

Analytical layer: serve it to the end user

The analytical layer is the most creative and interesting of all the components of an NRT application. So far, all we have talked about is backend processing, but this is the layer where we actually present the output/insights to the end user, graphically and visually, in the form of actionable items. A few of the challenges these visualization systems should be capable of handling are:

- The need for speed
- Understanding the data and presenting it in the right context
- Dealing with outliers

The figure depicts the flow of information from event producers to the collection agents, followed by the brokers and processing engine (transformation, aggregation, and so on) and then long-term storage. From the storage unit, the visualization tools reap insights and present them in the form of graphs, alerts, charts, Excel sheets, dashboards, or maps to the business owners, who can assimilate the information and take action based upon it.

The above was an excerpt from the book Practical Real-time Data Processing and Analytics.
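The book illustrates these layers with diagrams rather than code, so to make the broker / processing engine / low-latency store split concrete, here is a minimal, hypothetical Python sketch of a stream consumer that aggregates a live stream and pushes running results to a store where dashboards can read them. The topic name, the kafka-python client, and Redis as the low-latency store are our assumptions for illustration, not choices made by the authors.

```python
import json
from collections import Counter

from kafka import KafkaConsumer  # assumed broker client (pip install kafka-python)
import redis                     # assumed low-latency store (pip install redis)

# The broker holds the incoming event stream; the topic name is made up.
consumer = KafkaConsumer(
    "fact-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw),
)

store = redis.Redis(host="localhost", port=6379)
counts = Counter()

# Processing engine: a simple per-key aggregation over the live stream.
for message in consumer:
    event = message.value
    counts[event["key"]] += 1
    # Push the running aggregate to the low-latency store, where
    # visualization and BI tools can read it.
    store.hset("event_counts", event["key"], counts[event["key"]])
```

In a production NRT system this role would be played by a distributed processing engine such as Storm, Spark Streaming, or Flink rather than a single-process loop, but the data flow (broker in, aggregate, low-latency store out) is the same.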

Baidu adds Paddle Lite 2.0, new development kits, EasyDL Pro, and other upgrades to its PaddlePaddle deep learning platform

Vincy Davis
15 Nov 2019
3 min read
Yesterday, Baidu's deep learning open platform PaddlePaddle (PArallel Distributed Deep LEarning) released its latest version with 21 new products, including Paddle Lite 2.0, four end-to-end development kits (among them ERNIE for semantic understanding), EasyDL Pro, new toolkits, and other upgrades. PaddlePaddle is an easy-to-use, flexible, and scalable deep learning platform developed for applying deep learning to many products at Baidu.

Paddle Lite 2.0
The main goal of Paddle Lite is to maintain the low latency and high efficiency of AI applications running on resource-constrained devices. Launched last year, Paddle Lite is customized for inference on mobile, embedded, and IoT devices, and is compatible with PaddlePaddle and other pre-trained models. With enhanced usability in Paddle Lite 2.0, developers can deploy ResNet-50 with seven lines of code. The new version adds support for more hardware units, such as edge-based FPGAs, and also permits low-precision inference using operators with the INT8 data type.

New development kits
The development kits aim to continuously lower the development threshold for low-cost, rapid model construction:

- ERNIE for semantic understanding (NLP): ERNIE (Enhanced Representation through kNowledge IntEgration) is a continual pre-training framework for semantic understanding. In July this year, Baidu open sourced the ERNIE 2.0 model and revealed that it outperformed BERT and XLNet on 16 NLP tasks, including English tasks on GLUE benchmarks and several Chinese tasks.
- PaddleDetection: more than 60 easy-to-use object detection models.
- PaddleSeg for computer vision (CV): an end-to-end image segmentation library that supports data augmentation, modular design, and end-to-end deployment.
- Elastic CTR for recommendation: a newly released solution that provides process documentation for distributed training on Kubernetes (k8s) clusters, along with one-click deployment of distributed parameter serving for prediction.

EasyDL Pro
EasyDL is an AI platform that lets novice developers train and build custom models via a drag-and-drop interface. EasyDL Pro is a one-stop AI development platform for algorithm engineers to deploy AI models with fewer lines of code.

Master mode
The Master mode helps developers customize models for specific tasks. It has a large library of pre-trained models and tools for transfer learning.

Other new upgrades
- New toolkits for graph, federated, and multi-task learning.
- Upgraded APIs for flexibility, usability, and improved documentation.
- A new PaddlePaddle module for model compression called PaddleSlim, which enables a quantitative training function and a hardware-based small-model search capability.
- Upgrades to Paddle2ONNX and X2Paddle for improved conversion of trained models between PaddlePaddle and other frameworks.

Head over to Baidu's blog for more details.

Baidu open sources 'OpenEdge' to create a 'lightweight, secure, reliable and scalable edge computing community'
Unity and Baidu collaborate for simulating the development of autonomous vehicles
CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries
GitHub Universe 2019: GitHub for mobile, GitHub Archive Program and more announced amid protests against GitHub's ICE contract
Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform

‘I code in my dreams too’, say developers in JetBrains State of Developer Ecosystem 2019 Survey

Fatema Patrawala
19 Jun 2019
5 min read
Last week, JetBrains published its annual survey results, known as The State of Developer Ecosystem 2019. More than 19,000 people participated in the survey, but only the responses of about 7,000 developers from 17 countries were included in the report. The survey had over 150 questions; the key results have been published, and the complete results along with the raw data will be shared later. JetBrains also prepared an infographic based on the answers it received. Let us take a look at the key takeaways:

Key takeaways from the survey

- Java is the most popular primary programming language.
- Python is the most studied language in 2019.
- Cloud services are getting more popular: the shares of local and private servers dropped 8% and 3%, respectively, compared to 2018.
- Machine learning professionals have less fear that AI will replace developers one day.
- 44% of JavaScript developers use TypeScript regularly. In total, a quarter of all developers are using it in 2019, compared to 17% last year.
- The use of containerized environments by PHP developers is growing steadily, by 12% per year.
- 73% of Rust developers use a Unix/Linux development environment, though Linux is not the primary environment for most of them.
- Go modules appeared only recently, but 40% of Go developers already use them and another 17% want to migrate to them.
- 71% of Kotlin developers use Kotlin for work, mainly for new projects (96%), but more than a third are also migrating existing projects to it.
- The popularity of Vue.js is growing year over year: it gained 11 percentage points since last year and has almost doubled its share since 2017.
- The tool combination most frequently used by developers involved in infrastructure development is Docker + Terraform + Ansible.
- The more people code at work, the more likely they are to code in their dreams.

Developers choose Java as their primary language

The participants were asked three questions about their language preferences: which languages they had used in the last year, which they consider their primary languages, and how they would rank them. The most loved programming languages among developers are Java and Python, with second place a tie between C# and JavaScript. Common secondary languages include HTML, SQL, and shell scripting: many software developers have some practice with them, but very few work with them as their major language. For example, while 56% practice SQL, only 19% called it their primary language and only 1.5% ranked it first. Java, on the other hand, is the leading ‘solo’ language: 44% of its users use only Java or use Java first. The next top solo language is JavaScript, with a mere 17%.

Android and React Native remain popular among mobile developers, Flutter gains momentum

Asked about their target mobile operating system, 83% of participants said they develop for Android, followed by iOS at 59%. Two-thirds of mobile developers use native tools to develop for a mobile OS; every other developer uses cross-platform technologies or frameworks. Among the latter, 42% said they use React Native, with Flutter an interestingly strong second at 30%. Others included Cordova, Ionic, Xamarin, Unity, etc.

Other takeaways from the survey and a few fun facts

The most interesting question asked in this year's survey was whether developers code in their dreams. 52% responded Yes, and the survey found that the more people code at work (as their primary activity), the more likely they are to code in their dreams.

Another interesting finding came from the question of whether AI will replace developers in the future. 57% of participants responded that AI may partially replace programmers, though those who do machine learning professionally were more skeptical about AI than those who do it as a hobby. 27% think that AI will never replace developers, 6% agreed that it will fully replace programmers, and another 11% were not sure.

On the most preferred operating system for the development environment, 57% of participants said they use Windows, followed by macOS at 49% and Unix/Linux at 48% (respondents could pick more than one). Asked what types of applications they develop, the largest share went to web back-end applications, followed by web front-end, mobile applications, libraries and frameworks, desktop applications, and others.

41% responded No to the question about contributing to open-source projects on a regular basis; only 11% said they contribute to open source regularly, that is, every month. 71% have unit tests in their projects, while 16% of fully employed senior developers responded that they do not have any tests in their projects. Source code collaboration tools are used regularly by 80% of developers; other tools such as standalone IDEs, lightweight desktop editors, continuous integration / continuous delivery tools, and issue trackers are also in regular use.

Demographics of the survey

69% of the respondents are fully employed by a company or organization, and 75% work as a developer/programmer/software engineer. One in 14 people polled occupies a senior leadership role. Two-thirds of the developers practice pair programming. The survey also revealed that the more experienced people are, the less time they spend on learning new tools, technologies, and programming languages. The gender ratio of the participants was not revealed.

Check out the infographic to know more about the survey results.