
Tech News - Data

1209 Articles

DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games

Savia Lobo
28 Aug 2019
3 min read
A few days ago, researchers at DeepMind introduced OpenSpiel, a framework for writing games and algorithms for research in general reinforcement learning and search/planning in games. The core API and games are implemented in C++ and exposed to Python; algorithms and tools are written in both C++ and Python. The framework also includes a pure Swift port in the swift subdirectory. In their paper, the researchers write, “We hope that OpenSpiel could have a similar effect on general RL in games as the Atari Learning Environment has had on single-agent RL.”

Read Also: Google Research Football Environment: A Reinforcement Learning environment for AI agents to master football

OpenSpiel lets researchers evaluate the games and algorithms they write against a variety of benchmark games, as it includes implementations of over 20 different game types: simultaneous-move games, perfect- and imperfect-information games, gridworld games, an auction game, and several normal-form/matrix games, among others. It ships with tools to analyze learning dynamics and other common evaluation metrics, and it supports n-player (single- and multi-agent), zero-sum, cooperative and general-sum, and one-shot and sequential games.

OpenSpiel has been tested on Linux (Debian 10 and Ubuntu 19.04). The researchers have not tested the framework on macOS or Windows, but note that “since the code uses freely available tools, we do not anticipate any (major) problems compiling and running under other major platforms.”

The purpose of OpenSpiel is to promote “general multiagent reinforcement learning across many different game types, in a similar way as general game-playing but with a heavy emphasis on learning and not in competition form,” the paper mentions. The framework is “designed to be easy to install and use, easy to understand, easy to extend (‘hackable’), and general/broad.”

Read Also: DeepMind’s AI uses reinforcement learning to defeat humans in multiplayer games

Design constraints for OpenSpiel
The two main design criteria behind OpenSpiel are:
Simplicity: OpenSpiel provides easy-to-read, easy-to-use code intended for learning from and prototyping with, rather than fully optimized code that would require additional assumptions.
Dependency-free: The researchers note that “dependencies can be problematic for long-term compatibility, maintenance, and ease-of-use.” The framework therefore introduces no dependencies, keeping it portable and easy to install.

Swift OpenSpiel: A port to use Swift for TensorFlow
The swift/ folder contains a port of OpenSpiel to use Swift for TensorFlow. This port explores using a single programming language for the entire OpenSpiel environment, from game implementations to the algorithms and deep learning models, and is intended for serious research use. As the Swift for TensorFlow platform matures and gains additional capabilities (e.g. distributed training), the kinds of algorithms that are expressible and tractable to train should grow significantly.

Among OpenSpiel’s tools for visualization and evaluation is the α-Rank algorithm, which leverages evolutionary game theory to rank AI agents interacting in multiplayer games. OpenSpiel currently supports using α-Rank for both single-population (symmetric) and multi-population games.
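For a sense of the workflow, here is a minimal sketch of loading a game and stepping through it with OpenSpiel’s Python bindings (the pyspiel module). The game choice and the uniform-random policy are illustrative, and exact method names may vary across versions.

```python
import random
import pyspiel

# Load one of the bundled games and play it out with uniformly random moves.
game = pyspiel.load_game("kuhn_poker")
state = game.new_initial_state()
while not state.is_terminal():
    if state.is_chance_node():
        # Chance nodes (e.g. card deals) come with explicit probabilities.
        actions, probs = zip(*state.chance_outcomes())
        state.apply_action(random.choices(actions, weights=probs)[0])
    else:
        state.apply_action(random.choice(state.legal_actions()))
print("Final returns per player:", state.returns())
```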
Developers are excited about this release and keen to try out the framework.
https://twitter.com/SMBrocklehurst/status/1166435811581202443
https://twitter.com/sharky6000/status/1166349178412261376
To know more about this news in detail, head over to the research paper. You can also check out the GitHub page.
Terrifyingly realistic Deepfake video of Bill Hader transforming into Tom Cruise is going viral on YouTube
DeepCode, the AI startup for code review, raises $4M seed funding; will be free for educational use and enterprise teams with 30 developers
Baidu open sources ERNIE 2.0, a continual pre-training NLP model that outperforms BERT and XLNet on 16 NLP tasks


NumPy 1.15.0 release is out!

Savia Lobo
24 Jul 2018
2 min read
NumPy 1.15.0 is out, and it includes a large number of changes: several cleanups, deprecations of old functions, and improvements to many existing functions. The Python versions supported by NumPy 1.15.0 are 2.7 and 3.4 to 3.7.

Some of the highlights in this release:
NumPy has switched to pytest for testing, and this version no longer contains its maintained copy of the nose framework; the old nose-based interface is still available for downstream projects.
A new numpy.printoptions context manager can set print options temporarily for the scope of a with block:

    with np.printoptions(precision=2):
        print(np.array([2.0]) / 3)
    # [0.67]

Improvements to the histogram functions. This version includes numpy.histogram_bin_edges, a function to get the edges of the bins used by a histogram without needing to calculate the histogram itself.
Support for unicode field names in Python 2.7.
Improved support for PyPy.
Fixes and improvements to numpy.einsum, which evaluates the Einstein summation convention on its operands.

New features in NumPy 1.15.0
np.gcd and np.lcm ufuncs for integer and object types: these compute the greatest common divisor and the lowest common multiple, respectively. They work on all the NumPy integer types, as well as the built-in arbitrary-precision Decimal and long types.
Support for cross-platform builds for iOS: the build system has been modified to support the _PYTHON_HOST_PLATFORM environment variable, used by distutils when compiling on one platform for another. This makes it possible to compile NumPy for iOS targets.
return_indices keyword for np.intersect1d: the new keyword returns the indices of the two input arrays that correspond to the common elements.

Build system
This version adds experimental support for the 64-bit RISC-V architecture.

Future changes expected in later versions
NumPy 1.16 will drop support for Python 3.4, and NumPy 1.17 will drop support for Python 2.7.

Read more about this release in detail on its GitHub page.
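To make the new ufuncs and the return_indices keyword concrete, here is a short, illustrative snippet; the array values are arbitrary, and the outputs shown in comments are what these calls produce.

```python
import numpy as np

print(np.gcd(12, 20))   # 4  (greatest common divisor)
print(np.lcm(12, 20))   # 60 (lowest common multiple)

a = np.array([1, 3, 4, 3])
b = np.array([3, 1, 2, 1])
# return_indices=True also yields the positions of the common elements.
common, ia, ib = np.intersect1d(a, b, return_indices=True)
print(common)  # [1 3]
print(ia, ib)  # [0 1] [1 0] -- indices into a and b respectively
```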
Implementing matrix operations using SciPy and NumPy
NumPy: Commonly Used Functions
Installing NumPy, SciPy, matplotlib, and IPython


Looking back: A year (or two) in review of Tableau web authoring from What's New

Anonymous
29 Dec 2020
6 min read
Kevin Mason, Senior Product Manager, Tableau

We are wrapping up 2020 with some well-timed holiday goodies. Tableau 2020.4 marks a very special milestone on our web authoring journey with the completion of the most requested web features from the last two years, as well as the exciting release of Tableau Prep Builder on the web!

Our dev teams have been hard at work building Tableau into a true SaaS solution. With the majority of people working from home, web authoring has been particularly important to quickly give analysts access to the right data from anywhere, without requiring a top-of-the-line computer to run analyses—a simple browser and reliable internet connection will do.

With the year coming to an end, we thought it would be fun to reflect and celebrate how far we’ve come on this journey to the web.

Humble beginnings
During the early 2010s, the benefits of SaaS began to bear fruit. Tableau Desktop was our bread and butter, but it required IT to install the software directly onto folks’ computers and maintain each individual license. This is where Tableau began to invest in Tableau Server, Tableau Online, and web authoring, though it was pretty limited in the early days. Can you believe web authoring didn’t even have dashboards until 2016?! Oh, how time flies.

Much to our excitement, customers like Oldcastle saw the potential web authoring could bring to their organizations. Oldcastle shared how it was encouraging employees to ask more data-driven questions and dig deeper using web authoring at TC15. As a pioneer in using web authoring effectively (even before dashboard editing!), Oldcastle’s TC talk is still relevant today.

As part of Tableau Server and Tableau Online, web authoring offers a lot of benefits. It can be centrally managed, which simplifies deployment, license management, and version control. This means:
Everyone in the organization gets the latest version during a Server or Online update, no individual Desktop updates needed.
Since all workbooks are stored on the Server, IT professionals have more visibility into what people are creating, which helps with data governance and resource management.
IT teams don’t have to worry about managing multiple individual licenses—with web authoring, they can maintain licenses, upgrades, and content all on Tableau Server or Tableau Online.
Analysts don’t have to context switch back to Desktop to make small changes. It can all be done in the same, single place.

An end-to-end experience in the browser
Since then, we have been hard at work bringing much-loved Desktop features into the browser—we’re talking full home remodeling, down to the studs (basically Extreme Makeover: Tableau Edition). Our 2018.1 release saw the biggest change, with the ability to connect to data from the web, plus our new role-based pricing model. Parameters (2019.1), tooltips (2020.1), and filters (2020.3) were soon to follow. Finally, Tableau 2020.4 was extra special, bringing the last of the most requested features to the web: actions, sets, and extracts. We heard the cries, demands, and pleas for the last three years, and I’m thrilled to say that web authoring has achieved parity with the Tableau Desktop you know and love! 2020.4 also includes Apply Filters to Selected Worksheets!

During this journey, early adopters continued to share their success stories.
At TC18, DISH Network illustrated how a few teams rolled out web authoring broadly across the organization and set up specific training sessions for new users. By setting up web authoring for analysts across the organization, DISH dramatically reduced the number of ad hoc requests its primary analytics teams would receive. As a result, the primary teams can focus on larger, org-wide projects while everyday analysts self-serve their own ad hoc requests for query and visualization changes. DISH still serves as an excellent example of how to create a data-driven culture.

Try it out yourself this new year
Oldcastle and DISH are just two examples of the many customers finding success with web authoring. Even our sales team uses web authoring to build dashboards for the majority of their demos! Over the last 18 months, more and more customers have been asking how to use web authoring to help expand the use of data throughout their business.

If you are curious to learn more, including some best practices, check out my Tableau Community post. I collected all the various resources, with real customer examples and numerous videos from Tableau Conferences. Or jump right in! Create a new workbook from scratch right on the web by clicking “New” > “Workbook” on the Explore page. If you have the right permissions, you can edit existing workbooks by clicking the “Edit” button on the toolbar.

We’ve certainly come a long way, together
As we close 2020, we would like to thank you. We really appreciate your patience as we rebuilt much-loved Desktop features, and we cannot thank you enough for helping us identify which ones were most important to you. Thank you to the 20,000+ of you who have participated in beta programs, posted on the Community Forums, and shared candid feedback while our PMs and user researchers pestered you with questions. It’s a little corny, but it’s true: you are what makes this #DataFam as special as it is. With your help, we were able to prioritize these web features alongside new analytics capabilities like viz in tooltip, nested sorting, spatial joins, and set actions!

We are excited to see what you build on the web using Tableau 2020.4. And I’m even more excited to show you what’s coming in 2021. In the coming releases, you will see more web-first features. After all, web applications are, well, web applications—so we expect them to behave a little differently, and certainly faster, than ye ol’ Desktop. I can’t share exact details, but you can expect investments that will make Tableau an exceptional web experience. And don’t worry—we are still delivering the few remaining Desktop-loved features to the browser. We are just adding some special web-first considerations to them!

Happy Holidays from all of us, to you. Here’s to 2021!


Percona announces Percona Distribution for PostgreSQL to support open source databases 

Amrata Joshi
18 Sep 2019
3 min read
Yesterday, the team at Percona, an open source database software and services provider, announced Percona Distribution for PostgreSQL to offer expanded support for open source databases. It provides organizations with a fully supported distribution of the database and management tools so that applications based on PostgreSQL can deliver higher performance. Based on PostgreSQL v11.5, Percona Distribution for PostgreSQL provides database support for cloud or on-premises deployments. The new distribution will be unveiled at Percona Live Europe in Amsterdam (30 September–2 October).

Percona Distribution for PostgreSQL includes the following open source tools to manage database instances and ensure that data is available, secure, and backed up for recovery:
pg_repack, a third-party extension that rebuilds PostgreSQL database objects without requiring a table lock.
pgaudit, a third-party extension that provides in-depth session and/or object audit logging via the standard logging facility in PostgreSQL. This helps PostgreSQL users produce detailed audit logs for compliance and certification purposes.
pgBackRest, a backup tool that replaces the built-in PostgreSQL backup offering. pgBackRest can scale to handle large database workloads and can help companies minimize storage requirements by using streaming compression. It uses delta restores to lower the amount of time required to complete a restore.
Patroni, a high-availability solution for PostgreSQL implementations that can be used in production deployments.
The list also includes additional extensions supported by the PostgreSQL Global Development Group.

The new distribution will provide users with enterprise support, services, and consulting for their open source database instances across multiple distributions, on-premises and in the cloud. The team further announced that Percona Monitoring and Management will now support PostgreSQL.

Peter Zaitsev, co-founder and CEO of Percona, said, “Companies are creating more data than ever, and they have to store and manage this data effectively.” Zaitsev further added, “Open source databases are becoming the platforms of choice for many organizations, and Percona provides the consultancy and support services that these companies rely on to be successful. Adding a distribution of PostgreSQL alongside our current options for MySQL and MongoDB helps our customers leverage the best of open source for their applications as well as get reliable and efficient support.”

To know more about Percona Distribution for PostgreSQL, check out the official page.

Other interesting news in data
OpenAI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment
The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons


Six topics on IT's mind for scaling analytics next year from What's New

Anonymous
22 Dec 2020
5 min read
Brian Matsubara, RVP of Global Technology Alliances

We recently wrapped up participation in the all-virtual AWS re:Invent 2020, where we shared our experiences from scaling Tableau Public ten-fold this year. What an informative few weeks! It wasn’t surprising that the theme of scalability came up throughout many sessions; as IT leaders and professionals, you’re working hard to support remote workforces and evolving business needs in our current situation. This includes offering broader access to data and analytics and embracing the cloud to better adapt, innovate, and grow more resilient while facing the unexpected. As you welcome more individuals to the promising world of modern BI, you must ensure systems and processes are equipped to support higher demand, and empower everyone in the organization to make the most of your data and analytics investments. Let’s take a closer look at what’s top of mind for IT to best enable the business while scaling your analytics program.

Supporting your data infrastructure
Many organizations say remote work is here to stay, while new data and analytics use cases are constantly emerging to address the massive amounts of data that organizations collect. IT must enable an elastic environment where it's easier, faster, more reliable, and more secure to ingest, store, analyze, and share data among a dispersed workforce.

1. Deploying flexible infrastructure
With benefits including greater flexibility and more predictable operating expenses, cloud-based infrastructure can help you get analytics pipelines up and running fast. And attractive, on-demand pricing makes it easier to scale resources up and down, supporting growing needs. If you're considering moving your organization’s on-premises analytics to the cloud, you can accelerate your migration and time to value by leveraging the resources and expertise of a strategic partner. Hear from Experian, which is deploying and scaling its analytics in the cloud and recently benefited from this infrastructure. Experian turned to Tableau and AWS for support powering its new Experian Safeguard dashboard, a free analytics tool that helps public organizations use data to pinpoint and protect vulnerable communities. The accessibility and scalability of the dashboard resulted in faster time to market and adoption by nearly 70 local authorities, emergency services, and charities now using “data for good.”

2. Optimizing costs
According to IDC research, analytics spend in the cloud is growing eight times faster than other deployment types. You’ve probably purchased a data warehouse to meet the highest-demand timeframes of the organization, but don’t need the 24/7 capacity that can result in unused resources and wasted dollars. Monitor cloud costs and use patterns to make better operating, governance, and risk management decisions around your cloud deployment as it grows, and to protect your investment—especially when leadership is looking for every chance to maximize resources and keep spending streamlined.

Supporting your people
Since IT’s responsibilities are more and more aligned with business objectives—like revenue growth, customer retention, and even developing new business models—it’s critical to measure success beyond deploying modern BI technology. It’s equally important to empower the business to adopt and use analytics to discover opportunities, create efficiencies, and drive change.
3. Onboarding and license management
As your analytics deployment grows, it's not scalable to have individuals submit one-off requests for software licenses that you then have to manually assign, configure, and track. You can take advantage of the groups you’ve already established in your identity and access management solution to automate the licensing process for your analytics program. This can also reduce unused licenses, helping lines of business save a little extra budget.

4. Ensuring responsible use
Another big concern as analytics programs grow is maintaining data security and governance in a self-service model. Fortunately, you can address this while streamlining user onboarding even further by automatically configuring user permissions based on their group memberships. Coupled with well-structured analytics content, you’ll not only reduce IT administrative work, but you’ll help people get faster, secure access to the trusted data that matters most to their jobs.

5. Enabling access from anywhere
When your organization is increasingly relying on data to make decisions, 24/7 support and access to customized analytics are business-critical. With secure, mobile access to analytics and an at-a-glance view of important KPIs, your users can keep a pulse on their business no matter where they are.

6. Growing data literacy
When everyone in the organization is equipped and encouraged to explore, understand, and communicate with data, you’ll see amazing impact from more informed decision-making. But foundational data skills are necessary to get people engaged and using data and analytics properly. Customers have shown us creative and fun ways that IT helps build data literacy, from formal training to community-building activities. For example, St. Mary’s Bank holds regular Tableau office hours, is investing more time and energy in trainings, and has games that test employees on their Tableau knowledge.

Want to learn more?
If you missed AWS re:Invent 2020, you’re not out of luck! You can still register and watch on-demand content, including our own discussion of scaling Tableau Public tenfold to support customers and their growing needs for sharing COVID-19 data (featuring SVP of Product Development, Ellie Fields, and Director of Software Engineering, Jared Scott). You’ll learn how we reacted to customer demands—especially from governments reporting localized data to keep constituents safe and informed during the pandemic—including shifts from on-premises to the cloud and hosting vizzes that could handle thousands, even millions, of daily hits.

Data-driven transformation is an ongoing journey. Today, the organizations that are successfully navigating uncertainty are those leaning into data and analytics to solve challenges and innovate together. No matter where you are—evaluating, deploying, or scaling—the benefits of the cloud and modern BI are available to you. You can start by learning more about how we partner with AWS.


Amazon patents AI-powered drones to provide ‘surveillance as a service’

Savia Lobo
21 Jun 2019
7 min read
At the first re:MARS event early this month, Amazon shared its plans to further digitize its delivery services by having AI-powered drones deliver orders. Amazon was granted a US patent on June 4 for these unmanned aerial vehicles (UAVs), or drones, to provide “surveillance as a service.” The patent, filed on June 12, 2015, describes how Amazon’s UAVs could keep an eye on customers’ property between deliveries while supposedly maintaining their privacy.

“The property may be defined by a geo-fence, which may be a virtual perimeter or boundary around a real-world geographic area. The UAV may image the property to generate surveillance images, and the surveillance images may include image data of objects inside the geo-fence and image data of objects outside the geo-fence,” the patent states.

A diagram from the patent shows how delivery drones could be diverted to survey a location. (Source: USPTO)

According to The Telegraph, “The drones would look for signs of break-ins, such as smashed windows, doors left open, and intruders lurking on people’s property. Anything unusual could then be photographed and passed on to the customer and the police.” “Drones have long been used for surveillance, particularly by the military, but companies are now beginning to explore how they might be used for home security,” The Verge reports.

Amazon’s competitor, Alphabet Inc.’s Wing, became the first drone operator to win FAA approval to operate as a small airline, in April. Amazon, by contrast, has received approval to make drone deliveries only in remote parts of the United States, though it says it hopes to launch a commercial service “in a matter of months.”

The drones could be programmed to trigger automated text or phone alerts if the system’s computer-vision algorithms spot something that could be a concern. Those alerts might go to the subscriber, or directly to the authorities. “For example, if the surveillance event is the determination that a garage door was left open, an alert may be a text message to a user, while if the surveillance event is a fire, an alert may be a text message or telephone call to a security provider or fire department,” the inventors write.

This raises a lot of data privacy concerns, as drones may peer into people’s houses and collect information they are not supposed to. Amazon’s patent counters that “Geo-clipped surveillance images may be generated by physically constraining a sensor of the UAV, by performing pre-image capture processing, or post-image capture processing. Geo-clipped surveillance images may be limited to authorized property, so privacy is ensured for private persons and property.”

Amazon has been amassing a lot of user data through various products, including the smart doorbell made by Ring, which Amazon bought for more than $1 billion in February last year. This smart doorbell sends a video feed that customers can check and answer from their smartphones. Amazon also launched Neighbors, a crime-reporting social network that encourages users to upload videos straight from their Ring security cameras and tag posts with labels like “Crime,” “Safety,” and “Suspicious.” Over 50 local US police departments have partnered with Ring to gain access to owners’ security footage. And Amazon’s Key allows Prime members to have packages delivered straight into their homes—if they install its smart lock on their door and Amazon security cameras inside their homes.
Last month, the US House Oversight and Reform Committee held its first hearing examining the use of facial recognition technology. The hearing included discussion of the use of facial recognition by government and commercial entities, flaws in the technology, the lack of regulation, and its impact on citizens’ civil rights and liberties. Joy Buolamwini, founder of the Algorithmic Justice League, highlighted misidentification as one of the technology’s most pressing failures: it can lead to false arrests and accusations, a risk especially for marginalized communities.

Earlier this year, in January, activist shareholders proposed a resolution to limit the sale of Amazon’s facial recognition tech, Rekognition, to law enforcement and government agencies. Rekognition has been found to be biased and inaccurate and is regarded as an enabler of racial discrimination against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states, and Amazon has also pitched it to Immigration and Customs Enforcement. Last month, Amazon shareholders rejected the proposal to ban the sale of its facial recognition tech to governments. Amazon pushed back against claims that the technology is inaccurate and called on the U.S. Securities and Exchange Commission to block the shareholder proposal prior to its annual shareholder meeting; the ACLU blocked Amazon’s efforts to stop the vote amid growing scrutiny of the product. According to an Amazon spokeswoman, the resolutions failed by a wide margin. Amazon has defended its work and said all users must follow the law. It also added a web portal for people to report any abuse of the service. The votes were non-binding, allowing the company to reject the outcome of the vote.

In April, Bloomberg reported that Amazon workers “listen to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.” Also, this month, two lawsuits were filed in Seattle alleging that Amazon is recording voiceprints of children using its Alexa devices without their consent.

All of this suggests Amazon may be quietly collecting user data, and with surveillance drones it could gain a view into users’ homes as a whole. What more can a company driven by user data ask for? We’ll have to see if Amazon stays true to what it has stated in its patent. While drones hovering overhead for surveillance sound interesting, they would collect large volumes of user data, including a lot of private information. Black-hat hackers, who break into systems and access data and programs without the owners’ permission, could gain access to this data and sell it to third-party buyers, including advertising companies that could use it to target ads at the products people use. Amazon employees managing the data from these drones may also have some level of access to it; a network administrator or security professional’s rights and privileges typically allow access to most of the data on a network’s systems, and anyone with access to a recovery agent account can easily decrypt protected data. This raises alarming questions: is this additional private data safe, and to what extent could intruders misuse it?
According to The Verge, “Amazon has patented some pretty eccentric drone technologies over the years that have never made it to market; including a floating airship that could act as a warehouse for delivery drones, a parachute shipping label, and a system that lets a drone understand when you shout or wave at it.”
https://twitter.com/drewharwell/status/1141712282184867840
https://twitter.com/drewharwell/status/1141793761283989504
To know more about ‘surveillance as a service’, read the patent.
Amazon announces general availability of Amazon Personalize, an AI-based recommendation service
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

OpenAI researchers advance multi-agent competition by training AI agents in a simple hide and seek environment

Sugandha Lahoti
18 Sep 2019
5 min read
OpenAI researchers have built a simple hide-and-seek game environment for multi-agent competition, in which they observed AI agents learning complex strategies and skills on their own as the game progressed. In fact, the AI agents developed six distinct strategies and counterstrategies, some of which were not even supported by the training environment. The researchers concluded that such multi-agent co-adaptation may one day produce extremely complex and intelligent behavior.

The hide-and-seek training environment
AI agents play a team-based hide-and-seek game in a physics-based environment: hiders (blue) avoid the line of sight of the seekers, while seekers try to keep the hiders in view. The environment contains various objects (walls, ramps, blocks) that agents can grab and also lock in place. There are also randomly generated immovable rooms and walls that the agents must learn to navigate. Before the game, hiders are given preparation time to run away or change their environment while the seekers are immobilized. Agents receive a team-based reward: hiders get +1 if all hiders are hidden and -1 if any hider is seen by a seeker; seekers get the opposite, -1 if all hiders are hidden and +1 otherwise. There are no explicit incentives for agents to interact with objects in the environment, though they are penalized if they go too far outside the play area.

Agent characteristics
The agents can move by setting a force on themselves in the x and y directions, as well as rotate along the z-axis.
The agents can see objects in their line of sight and within a frontal cone.
The agents can sense the distance to objects, walls, and other agents around them using a lidar-like sensor.
The agents can grab and move objects in front of them.
The agents can lock objects in place; only the team that locked an object can unlock it.

AI agents develop six distinct strategies
Agents are trained using self-play, and agent policies are composed of two separate networks with different parameters: a policy network that produces an action distribution, and a critic network that predicts the discounted future returns. Policies are optimized using Proximal Policy Optimization (PPO) and Generalized Advantage Estimation (GAE); training is performed on OpenAI Rapid, the company’s general-purpose RL training system. The researchers noticed that as agents train against each other in hide-and-seek, six distinct strategies emerge. Initially, hiders and seekers learn to crudely run away and chase. After approximately 25 million episodes of hide-and-seek, the hiders learn to use the tools at their disposal and intentionally modify their environment. After another 75 million episodes, the seekers also learn rudimentary tool use: they learn to move and use ramps to jump over obstacles. Ten million episodes later, the hiders learn to defend against this strategy by bringing the ramps to the edge of the play area and locking them in place, seemingly removing the only tool the seekers have at their disposal. After 380 million total episodes of training, the seekers learn to bring a box to the edge of the play area where the hiders have locked the ramps; the seekers then jump on top of the box and “surf” it to the hiders’ shelter. In response, the hiders learn to lock all of the boxes in place before building their shelter.
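To make the team-based reward described earlier concrete, here is a toy Python sketch; the function name and arguments are hypothetical, not OpenAI’s code.

```python
def hide_and_seek_reward(any_hider_seen: bool, is_hider: bool) -> float:
    """Team reward: hiders get +1 if all hiders are hidden, -1 if any
    hider is seen; seekers receive the opposite signal."""
    hider_reward = -1.0 if any_hider_seen else 1.0
    return hider_reward if is_hider else -hider_reward

# Example: a hider has been spotted, so hiders get -1 and seekers get +1.
assert hide_and_seek_reward(True, is_hider=True) == -1.0
assert hide_and_seek_reward(True, is_hider=False) == 1.0
```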
https://youtu.be/kopoLzvh5jY

They also found some surprising behaviors among these AI agents:
Box surfing: Since agents move by applying forces to themselves, they can grab a box while on top of it and “surf” it to the hider’s location.
Endless running: Without explicit negative rewards for leaving the play area, in rare cases hiders will learn to take a box and endlessly run with it.
Ramp exploitation (hiders): Hiders abuse contact physics and remove ramps from the play area.
Ramp exploitation (seekers): Seekers learn that if they run at a wall with a ramp at the right angle, they can launch themselves upward.

The researchers concluded that complex, human-relevant strategies and skills can emerge from multi-agent competition and standard reinforcement learning algorithms at scale. They state, “our results with hide-and-seek should be viewed as a proof of concept showing that multi-agent auto-curricula can lead to physically grounded and human-relevant behavior.”

This research was well received by readers, and many people took to Hacker News to congratulate the researchers. Here are a few comments:
“Amazing. Very cool to see this sort of multi-agent emergent behavior. Along with the videos, I can't help but get a very 'Portal' vibe from it all. 'Thank you for helping us help you help us all.'”
“This is incredible. The various emergent behaviors are fascinating. It seems that OpenAI has a great little game simulated for their agents to play in. The next step to make this even cooler would be to use physical, robotic agents learning to overcome challenges in real meatspace!”
“I'm completely amazed by that. The hint of a simulated world seems so matrix-like as well, imagine some intelligent thing evolving out of that. Wow.”

Read the research paper for a deeper analysis. The code is available on GitHub.

More news in Artificial Intelligence
Google researchers present Weight Agnostic Neural Networks (WANNs) that perform tasks without learning weight parameters
DeepMind introduces OpenSpiel, a reinforcement learning-based framework for video games
Google open sources an on-device, real-time hand gesture recognition algorithm built with MediaPipe


Michelangelo PyML: Introducing Uber’s platform for rapid machine learning development

Amey Varangaonkar
25 Oct 2018
3 min read
Transportation network giant Uber has developed Michelangelo PyML, a Python-powered platform for rapid prototyping of machine learning models. The aim of the platform is to offer machine learning as a service that democratizes machine learning and makes it possible to scale AI models to meet business needs efficiently. Michelangelo PyML extends Michelangelo, the platform Uber built for large-scale machine learning in 2017, and makes it possible for Uber's data scientists and engineers to build intelligent Python-based models that run at scale for online as well as offline tasks.

Why Uber chose PyML for Michelangelo
Uber developed Michelangelo in September 2017 with a clear focus on high performance and scalability. It currently enables Uber’s product teams to design, build, deploy, and maintain machine learning solutions at scale, and it powers roughly 1 million predictions per second. However, that performance came at the cost of flexibility. Users faced two critical issues:
Models could only be trained with the algorithms that Michelangelo natively supported. To run unsupported algorithms, the platform’s capability had to be extended with additional training and deployment components, which caused a lot of inconvenience at times.
Users could not apply any feature transformations apart from those offered by Michelangelo’s DSL (Domain Specific Language).
Beyond these constraints, Uber also observed that data scientists usually prefer Python over other programming languages, given Python's rich suite of libraries and frameworks for effective analytics and machine learning. Many data scientists also gathered and worked with data locally using tools such as pandas, scikit-learn, and TensorFlow, as opposed to big data tools such as Apache Spark and Hive, which they could spend hours setting up.

How PyML improves Michelangelo
Based on these challenges, Uber decided to revamp the platform by integrating PyML to make it more flexible. PyML provides a concrete framework for data scientists to build and train machine learning models that can be deployed quickly, safely, and reliably across different environments, without any restriction on the types of data they can use or the algorithms they can choose, making it an ideal fit for a platform like Michelangelo. By integrating Python-based models that operate at scale into Michelangelo, Uber will be able to handle online as well as offline queries and serve smart predictions with ease. This could be a potential masterstroke for Uber as it tries to boost business and revenue growth after both slowed over the last year.

Read more
Why did Uber create Hudi, an open source incremental processing framework on Apache Hadoop?
Uber’s Head of corporate development, Cameron Poetzscher, resigns following a report on a 2017 investigation into sexual misconduct
Uber’s Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop


OpenAI introduces MuseNet: A deep neural network for generating musical compositions

Bhagyashree R
26 Apr 2019
4 min read
OpenAI has built a new deep neural network called MuseNet for composing music, the details of which it shared in a blog post yesterday. The research organization has made a prototype of a MuseNet-powered co-composer available for users to try until May 12th.
https://twitter.com/OpenAI/status/1121457782312460288

What is MuseNet?
MuseNet uses the same general-purpose unsupervised technology as OpenAI’s GPT-2 language model: the Sparse Transformer. This transformer allows MuseNet to predict the next note given a set of notes. To enable this behavior, the Sparse Transformer uses “sparse attention”, where each output position computes weightings from only a subset of input positions. For audio pieces, a 72-layer network with 24 attention heads is trained using the recompute and optimized kernels of the Sparse Transformer. This gives the model the long context it needs to remember long-term structure in a piece.

For training the model, the researchers collected data from various sources. The dataset includes MIDI files donated by ClassicalArchives and BitMidi, as well as data from online collections covering jazz, pop, African, Indian, and Arabic styles. The model is capable of generating 4-minute musical compositions with 10 different instruments, and it is aware of different music styles from composers like Bach, Mozart, the Beatles, and more. It can also convincingly blend different music styles to create a completely new piece.

The MuseNet prototype that has been made available for users to try only comes with a small subset of options. It supports two modes:
In simple mode, users can listen to uncurated samples generated by OpenAI. To generate a music piece yourself, you just choose a composer or style and, optionally, the start of a famous piece.
In advanced mode, users can interact with the model directly. Generating music in this mode takes much longer but gives an entirely new piece.
Here’s what the advanced mode looks like: (Source: OpenAI)

What are its limitations?
The music generation tool is still a prototype, so it does have some limitations:
To generate each note, MuseNet calculates the probabilities across all possible notes and instruments. Though the model gives more priority to your instrument choices, there is a possibility that it will choose something else.
MuseNet finds it difficult to generate a piece for odd pairings of styles and instruments. The generated music sounds more natural if you pick instruments closest to the composer or band’s usual style.

Many users have already started testing the model. While some users are impressed by the AI-generated music, others think it is quite evident that the music is machine-generated and lacks an emotional quality. Here’s an opinion a Redditor shared about different music styles: “My take on the classical parts of it, as a classical pianist. Overall: stylistic coherency on the scale of ~15 seconds. Better than anything I've heard so far. Seems to have an attachment to pedal notes. Mozart: I would say Mozart's distinguishing characteristic as a composer is that every measure "sounds right". Even without knowing the piece, you can usually tell when a performer has made a mistake and deviated from the score. The Mozart samples sound... wrong. There are parallel 5ths everywhere. Bach: (I heard a bach sample in the live concert) - It had roughly the right consistency in the melody, but zero counterpoint, which is Bach's defining feature.
Conditioning maybe not strong enough? Rachmaninoff: Known for lush musical textures and hauntingly beautiful melodies. The samples got the texture approximately right, although I would describe them more as murky than lush. No melody to be heard.”

Another user commented, “This may be academically interesting, but the music still sounds fake enough to be unpleasant (i.e. there's no way I'd spend any time listening to this voluntarily).”

Though the model is in its early stages, an important question that comes to mind is who will own the generated music. “When discussing this with my friends, an interesting question came up: Who owns the music this produces? Couldn't one generate music and upload that to Spotify and get paid based off the number of listens?” another user added.

To know more in detail, visit OpenAI’s official website. Also, check out an experimental concert by MuseNet that was live-streamed on Twitch.
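As a side note, the “sparse attention” pattern described earlier can be illustrated with a tiny NumPy toy: each output position attends only to a small window of input positions instead of all of them. This is purely illustrative, not OpenAI’s implementation, which uses learned sparse patterns and optimized kernels.

```python
import numpy as np

seq_len, window = 8, 3
# Boolean mask: position i may attend only to the last `window` positions up to i.
mask = np.zeros((seq_len, seq_len), dtype=bool)
for i in range(seq_len):
    mask[i, max(0, i - window + 1): i + 1] = True

scores = np.random.randn(seq_len, seq_len)   # stand-in attention scores
scores[~mask] = -np.inf                      # masked positions get zero weight
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights = weights / weights.sum(axis=-1, keepdims=True)
print(weights.round(2))                      # each row sums to 1
```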
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
OpenAI Five bots destroyed human Dota 2 players this weekend
OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers


December Developer Platform news: Personal Access Tokens update, auto-disabling Webhooks, and JupyterLab integration from What's New

Anonymous
18 Dec 2020
5 min read
Geraldine Zanolli, Developer Evangelist

Every month is like Christmas for Developer Program members, because we strive to delight our members as we showcase the latest projects from our internal developer platform and tools engineers. For the last Sprint Demos, we featured some exciting updates: Personal Access Token impersonation, auto-disabling Webhooks, a new Webhooks payload for Slack, and JupyterLab integration for the Hyper API. Check out the gifts of increased communication, time, and security that these updates will bring.

Personal Access Token (PAT) impersonation
One of the use cases for the REST API is to query available content (e.g. projects, workbooks, data sources) for certain users. For embedding scenarios specifically, we often want to load up end-user-specific content within the application. The way to do this today is via impersonation, by which a server admin can impersonate a user, query as that user, and retrieve content that user has access to based on permissions within Tableau. Server admins can already impersonate users by sending the user’s unique userID as part of the sign-in request; however, to do this they need to hardcode their username and password in any scripts requiring impersonation. Over a year ago, we released Personal Access Tokens (PATs): long-lived authentication tokens that allow users to run automation with the Tableau REST API without hard-coding credentials or requiring an interactive login. In the 2021.1 release, we are going to introduce user impersonation support for PATs, the last piece of functionality previously supported only by hard-coded credentials in REST API scripts. So why not update all your scripts to use PATs today? (A sketch of a PAT sign-in request appears at the end of this post.)

Auto-disabling Webhooks
Webhooks is a notification service that allows you to integrate Tableau with any external server. Any time an event happens on Tableau, Tableau sends an HTTP POST request to the external server; once the external server receives the request, it can respond to the event. But what happens when a Webhook fails? You might have created multiple Webhooks on your site for testing that are no longer set up properly, which means you’ll want to manually disable or delete them. Today, every time a Webhook is triggered, it attempts to connect to the external server up to four times; after four tries, it counts as a failed delivery attempt. In our upcoming product releases, after four failed delivery attempts the Webhook will be automatically disabled and an email will be sent to the Webhook owner. But don't worry: if you have a successful delivery attempt before reaching a fourth failed attempt, the counter resets to zero. As always, you can configure these options on Tableau Server.

Slack: New payload for Webhooks
Since the release of Webhooks, we have noticed that one of the most popular use cases is Slack: Tableau users want to be notified on Slack when an event happens on Tableau. Today, this use case doesn’t work out of the box. You need to set up middleware to send Webhooks from Tableau to Slack, because the payload Tableau sends has a different format than the payload Slack expects. (It's like speaking French to someone who only speaks German: you need a translator in the middle.)
In the upcoming 2021.1 release, you’ll be able to create new Webhooks for Slack with no need for middleware! We’re going to add an additional field to the payload.

Hyper API: JupyterLab integration
Hyper API is a powerful tool, but with a new command-line interface around it, will it be even more powerful? It will indeed! We added a command-line interface around Hyper API to hyper-api-samples in our open-source repository, so you can run SQL queries directly against Hyper. We integrated with an existing command-line interface infrastructure—the Jupyter infrastructure—giving you the ability to use Hyper API directly within JupyterLab. If you’re not familiar with JupyterLab, it’s a web-based IDE mostly used by data scientists. With the JupyterLab integration, it has never been easier to prototype new functionality:
You can run your SQL queries and check the results without having to write a complete program around Hyper API.
Debugging also becomes easier: you can isolate your queries to find the root cause of an issue.
And don’t forget about all the ad hoc, analytical queries you can now run on data directly from your console.
Get started using JupyterLab in a few minutes.
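For a flavor of what an ad hoc Hyper query looks like outside the notebook, here is a minimal sketch using the official tableauhyperapi Python package; the file name orders.hyper is a placeholder, and the "Extract"."Extract" table name is the conventional layout for Tableau extracts, so adjust both for your data.

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

# Start a local Hyper server process and query a .hyper extract file.
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    # "orders.hyper" is a placeholder; point this at a real extract file.
    with Connection(endpoint=hyper.endpoint, database="orders.hyper") as conn:
        rows = conn.execute_list_query('SELECT COUNT(*) FROM "Extract"."Extract"')
        print("row count:", rows[0][0])
```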
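Returning to the PAT section above, here is a minimal sketch of a REST API sign-in with a PAT using Python’s requests library. The server URL, token name, site, and user ID are placeholders, and the impersonation field mirrors the documented username/password pattern; the exact PAT impersonation syntax shipping in 2021.1 may differ.

```python
import requests

SERVER = "https://my-tableau-server.example.com"  # placeholder server URL

payload = {
    "credentials": {
        "personalAccessTokenName": "my-automation-token",  # placeholder
        "personalAccessTokenSecret": "<token-secret>",     # placeholder
        "site": {"contentUrl": "mysite"},
        # Impersonate an end user by supplying their unique user ID,
        # mirroring the existing username/password impersonation pattern.
        "user": {"id": "<end-user-id>"},
    }
}

resp = requests.post(
    f"{SERVER}/api/3.10/auth/signin",
    json=payload,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
token = resp.json()["credentials"]["token"]
# Pass the token in the X-Tableau-Auth header on subsequent calls.
```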
Updates from the #DataDev Community
The #DataDev community continues to share their knowledge with others and drive innovation:
Robert Crocker (twitter @robcrock) published two tutorials on the JavaScript API
Elliott Stam (twitter @elliottstam) launched a YouTube channel and published multiple videos on the Tableau REST APIs
Andre de Vries (twitter @andre347_) also shared a YouTube video explaining Trusted Authentication
Anya Prosvetova (twitter @Anyalitica), inspired by the Brain Dates at TC-ish, launched monthly DataDev Happy Hours to chat about APIs and developer tools

Join the #DataDev community to get your invitation to our exclusive Sprint Demos and be the first to know about Developer Platform updates—directly from the engineering team. See you next year!

Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Natasha Mathur
03 Apr 2019
3 min read
The Facebook AI team yesterday announced the open-sourcing of PyTorch-BigGraph (PBG), a tool that enables faster and easier production of graph embeddings for large graphs. With PyTorch-BigGraph, anyone can take a large graph and produce high-quality embeddings on a single machine or on multiple machines in parallel. PBG is written in PyTorch, allowing researchers and engineers to easily swap in their own loss functions, models, and other components; PBG computes gradients and is automatically scalable.

The Facebook AI team notes that standard graph embedding methods don’t scale well: they cannot operate on large graphs consisting of billions of nodes and edges, and many graphs exceed the memory capacity of commodity servers, creating problems for embedding systems. PBG helps prevent that issue. PBG performs block partitioning of the graph to overcome the memory limitations of graph embeddings. Nodes are randomly divided into P partitions, sized so that any two partitions fit in memory together, and the edges are then divided into P^2 buckets based on their source and destination nodes. After this partitioning, training can be performed on one bucket at a time.

PBG offers two ways to train embeddings of partitioned graph data: single-machine and distributed training. In single-machine training, embeddings and edges are swapped out when they are not being used. In distributed training, PBG uses PyTorch parallelization primitives, and embeddings are distributed across the memory of multiple machines.

The Facebook AI team also made several modifications to standard negative sampling, which is necessary for large graphs. “We took advantage of the linearity of the functional form to reuse a single batch of N random nodes to produce corrupted negative samples for N training edges… this allows us to train on many negative examples per true edge at little computational cost,” says the Facebook AI team. To produce embeddings useful in different downstream tasks, the team found an effective approach that involves corrupting edges with a mix of 50 percent nodes sampled uniformly and 50 percent nodes sampled based on their number of edges.

To analyze PBG’s performance, Facebook AI used the publicly available Freebase knowledge graph, comprising more than 120 million nodes and 2.7 billion edges, along with a smaller subset of the Freebase graph known as FB15k. PBG performed comparably to other state-of-the-art embedding methods on the FB15k data set, and when used to train embeddings for the full Freebase graph, PBG’s partitioning scheme reduced both memory usage and training time. PBG embeddings were also evaluated on several publicly available social graph data sets, where PBG outperformed all the competing methods.

“We… hope that PBG will be a useful tool for smaller companies and organizations that may have large graph data sets but not the tools to apply this data to their ML applications. We hope that this encourages practitioners to release and experiment with even larger data sets,” states the Facebook AI team.
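To illustrate the partitioning scheme described above, here is a toy Python sketch (not PBG code): nodes hash into P partitions, and each edge falls into one of P^2 buckets keyed by the partitions of its endpoints.

```python
from collections import defaultdict

P = 4  # number of node partitions

def partition(node_id: int) -> int:
    # PBG assigns nodes to partitions randomly; a hash stands in here.
    return hash(node_id) % P

edges = [(0, 5), (1, 9), (7, 2), (3, 3)]
buckets = defaultdict(list)
for src, dst in edges:
    buckets[(partition(src), partition(dst))].append((src, dst))

# Training iterates over one bucket at a time, so only two partitions'
# worth of node embeddings ever need to be held in memory.
for (i, j), bucket_edges in sorted(buckets.items()):
    print(f"bucket ({i}, {j}): {bucket_edges}")
```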
For more information, check out the official Facebook AI blog.
PyTorch 1.0 is here with JIT, C++ API, and new distributed packages
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn


Unity introduces guiding principles for ethical AI to promote responsible use of AI

Natasha Mathur
03 Dec 2018
3 min read
The Unity team announced its guide to ethical AI last week, to promote more responsible use of artificial intelligence among its developers, its community, and the company itself. Unity’s guide to ethical AI comprises six guiding AI principles.

Unity’s six guiding AI principles
Be unbiased: This principle focuses on designing AI tools in a way that complements the human experience in a positive way. To achieve this, it is important to take into consideration all types of diverse human experiences, which in turn leads to AI complementing experiences for everybody.
Be accountable: This principle emphasizes keeping in mind the potential negative consequences, risks, and dangers of AI tools while building them. It focuses on assessing the factors that might cause “direct or indirect harm” so that they can be avoided. This ensures accountability.
Be fair: This principle focuses on ensuring that AI tools do not interfere with “normal, functioning democratic systems of government”. Development of AI tools that could lead to the suppression of human rights (such as free expression), as defined by the Universal Declaration, should be avoided.
Be responsible: This principle stresses the importance of developing products responsibly, ensuring that AI developers don’t take undue advantage of the vast capabilities of AI while building a product.
Be honest: This principle focuses on building trust among the users of a technology by being clear and transparent about the product, so that users can better understand its purpose and make better, more informed decisions about it.
Be trustworthy: This principle emphasizes the importance of protecting AI-derived user data. “Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide,” reads the Unity blog.

“We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology. With this guide, we are committed to implementing the ethical use of AI across all aspects of our company’s interactions, development, and creation,” says the Unity team.

For more information, check out the official Unity blog post.
EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
Teaching AI ethics – Trick or Treat?
SAP creates AI ethics guidelines and forms an advisory panel


How Google’s DeepMind is creating images with artificial intelligence

Sugandha Lahoti
28 Mar 2018
2 min read
The research team at DeepMind has been using deep reinforcement learning agents to generate images the way humans do. Instead of analyzing the pixels that represent an image on a screen, DeepMind's AI agents understand how digits, characters, and portraits are actually constructed. The agents interact with a computer paint program, placing strokes on a digital canvas and changing the brush size, pressure, and color.

How does DeepMind generate images?

As part of the initial training process, the agent starts by drawing random strokes with no visible intent or structure. Following the reinforcement learning approach, the agent is then 'rewarded', which 'encourages' it to produce meaningful drawings. To monitor the performance of the first network, DeepMind trained a second neural network, called the discriminator. The discriminator predicts whether a particular drawing was produced by the agent or sampled from a dataset of real photographs. The painting agent is rewarded by how much it manages to "fool" the discriminator into thinking that its drawings are real.

Most importantly, DeepMind's AI agents produce images by writing graphics programs to interact with a paint environment. This is different from how a GAN works, where the generator directly outputs pixels. Moreover, the model can apply what it has learned in the simulated paint program to re-create characters in other similar environments, because the framework is interpretable: it produces a sequence of motions that control a simulated brush.

Training DeepMind AI agents

The agent was trained to generate images resembling MNIST digits: it was shown what the digits look like, but not how they are drawn. By attempting to generate images that fool the discriminator, the agent learned to control the brush and to maneuver it to fit the style of different digits. The model was also trained to reproduce specific images from real datasets. When trained to paint celebrity faces, the agent is capable of capturing the main traits of a face, such as shape, tone, and hairstyle, much like a street artist would when painting a portrait with a limited number of brush strokes.

Source: DeepMind Blog

For further details on methodology and experimentation, read the research paper.
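The adversarial reward signal is easiest to see in code. Below is a toy PyTorch sketch of the idea, not DeepMind's implementation: the discriminator architecture, the 28x28 canvas size, and the random "drawing" are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical discriminator over 28x28 grayscale canvases: outputs the
# probability that an image is a real sample rather than agent-drawn.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),
)

def agent_reward(canvas: torch.Tensor) -> float:
    """Reward for the painting agent: how strongly the discriminator is
    'fooled' into judging the finished canvas to be a real image."""
    with torch.no_grad():
        return discriminator(canvas.unsqueeze(0)).item()

# Example: an untrained agent's 'drawing' is just a random canvas,
# so its reward should hover near the discriminator's chance level.
reward = agent_reward(torch.rand(1, 28, 28))
print(f"reward for random canvas: {reward:.3f}")
```

The key difference from a standard GAN is where this reward lands: instead of backpropagating through a pixel-generating network, it is fed to a reinforcement learning agent whose actions are brush strokes.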

You can now make music with AI thanks to Magenta.js

Richard Gall
04 May 2018
3 min read
Google Brain's Magenta project has released Magenta.js, a tool that could open up new opportunities in developing music and art with AI. The Magenta team has been exploring a range of ways to create with machine learning, and with Magenta.js they have developed a tool that opens up the very domain they've been exploring to new people. Let's take a look at how the tool works, what the aims are, and how you can get involved.

How does Magenta.js work?

Magenta.js is a JavaScript suite that runs on TensorFlow.js, which means it can run machine learning models in the browser. The team explains that JavaScript has been a crucial part of their project, as they have been eager to bridge the gap between the complex research they are doing and their end users. They want their research to result in tools that can actually be used. As they've said before: "...we often face conflicting desires: as researchers we want to push forward the boundaries of what is possible with machine learning, but as tool-makers, we want our models to be understandable and controllable by artists and musicians."

As they note, JavaScript has informed a number of projects that preceded Magenta.js, such as Latent Loops, Beat Blender and Melody Mixer. These tools were all built using MusicVAE, a machine learning model that forms an important part of the Magenta.js suite. The first package you'll want to pay attention to in Magenta.js is @magenta/music. This package features a number of Magenta's machine learning models for music, including MusicVAE and DrumsRNN. Thanks to Magenta.js you'll be able to get started quickly with a number of the project's pre-trained models, which you can find on GitHub here.

What next for Magenta.js?

The Magenta team is keen for people to start using the tools they develop. They want a community of engineers, artists and creatives to help them drive the project forward, and they're encouraging anyone who develops with Magenta.js to contribute to the GitHub repo. Clearly, this is a project where openness is going to be a huge bonus. We're excited to see not only what the Magenta team comes up with next, but also the range of projects built with it. Perhaps we'll begin to see a whole new creative movement emerge?

Read more on the project site here.


MongoDB going relational with 4.0 release

Amey Varangaonkar
16 Apr 2018
2 min read
MongoDB is, without a doubt, the most popular NoSQL database today. Per the Stack Overflow Developer Survey, more developers have wanted to work with MongoDB than with any other database over the last two years. With the upcoming MongoDB 4.0 release, it plans to up the ante by adding support for multi-document transactions with ACID guarantees (Atomicity, Consistency, Isolation, and Durability).

Poised for release this summer, MongoDB 4.0 will combine the speed, flexibility, and efficiency of document models (the features that make MongoDB such a great database to use) with the assurance of transactional integrity. This new addition should give the database a more relational feel and would suit large applications with high data-integrity needs, regardless of how the data is modeled. The team has also ensured that support for multi-document transactions will not affect the speed and performance of unrelated workloads running concurrently.

MongoDB has been working on this transactional integrity feature for over three years now, ever since it incorporated the WiredTiger storage engine. The MongoDB 4.0 release should also see the introduction of other important features such as snapshot isolation, a consistent view of data, the ability to roll back transactions, and other ACID capabilities.

Per the 4.0 product roadmap, 85% of the work is already done, and the release seems on track to hit the market on time. You can read more about the announcement on MongoDB's official page. You can also join the beta program to test out the newly added features in 4.0.
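To see what multi-document transactions look like from the driver side, here is a short sketch using PyMongo. The database, collections, and documents are hypothetical, and the transaction API assumes MongoDB 4.0 running as a replica set with a recent PyMongo (3.7 or later):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client.shop  # hypothetical database with orders and inventory collections

# Both writes commit together or not at all: if either operation fails,
# the whole transaction is aborted and neither document is changed.
with client.start_session() as session:
    with session.start_transaction():
        db.orders.insert_one(
            {"item": "book", "qty": 1},
            session=session,
        )
        db.inventory.update_one(
            {"item": "book"},
            {"$inc": {"qty": -1}},
            session=session,
        )
```

Passing the session to each operation is what ties the two writes into a single transaction; without it, each write would commit independently, as in earlier MongoDB versions.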