
Tech News - Data

1209 Articles

Google employees quit over company’s continued Artificial Intelligence ties with the Pentagon

Amey Varangaonkar
16 May 2018
2 min read
Raising ethical concerns over Google's continued involvement in developing Artificial Intelligence for military and warfare purposes, about a dozen Google employees have reportedly resigned. Since its inception, many Googlers have been against Project Maven - Google's project with the Pentagon to supply machine learning technologies for image recognition and object detection in military drones.

Earlier in April, Google employees signed a petition urging Google CEO Sundar Pichai to dissociate the company from the Department of Defense by pulling out of Project Maven. They were of the opinion that humans, not AI algorithms, should be responsible for sensitive and potentially life-threatening military work, and that Google should invest in the betterment of human lives, not in war.

Google had reassured its employees that the technology would be used in a non-offensive manner, and that policies were in effect regarding the use of AI in military projects. However, the resigning employees are of the view that these policies were not being strictly followed. The employees also felt that Google was less transparent about communicating controversial business decisions and was no longer as receptive to employee feedback as before. One of the employees who resigned said, "Over the last couple of months, I've been less and less impressed with Google's response and the way our concerns are being listened to."

The resignations reflect poorly on Google's employee retention strategy and on its reputation as a whole. They might encourage more employees to evaluate their position within the company, given the lack of grievance redressal from Google's end. Surrounded by fierce competition, losing talent to rivals should be the last thing on Google's agenda right now, and it will be interesting to see what Google's plan of action will be in this regard.

On the other hand, rivals Microsoft and Amazon have also signed partnerships with the US government, offering the infrastructure and services required to improve defense capabilities. While there have been no reports of protests by their employees, Google seems to have landed in hot water on ethical and moral grounds.

Google Employees Protest against the use of Artificial Intelligence in Military
Google News' AI revolution strikes balance between personalization and the bigger picture
Google announce the largest overhaul of their Cloud Speech-to-Text


Q# 101: Getting to know the basics of Microsoft’s new quantum computing language

Sugandha Lahoti
14 Dec 2017
5 min read
A few days back we posted about the preview of Microsoft's development toolkit with a new quantum programming language, simulator, and supporting tools. The development kit contains the tools that allow developers to build their own quantum computing programs and experiments. A major component of the Quantum Development Kit preview is the Q# programming language. According to Microsoft, "Q# is a domain-specific programming language used for expressing quantum algorithms. It is to be used for writing sub-programs that execute on an adjunct quantum processor, under the control of a classical host program and computer."

The Q# programming language is foundational for any developer of quantum software. It is deeply integrated with Microsoft Visual Studio, which makes programming quantum computers easier for developers who are well-versed in Visual Studio. An interesting feature of Q# is that it supports a basic procedural model (read: loops and if/then statements) for writing programs. The top-level constructs in Q# are user-defined types, operations, and functions.

The Type models

Q# provides several type models. There are primitive types such as the Qubit type or the Pauli type. The Qubit type represents a quantum bit, or qubit. A quantum computer stores information in the form of qubits, as both 1s and 0s at the same time. Qubits can either be tested for identity (equality) or passed to another operation. Actions on qubits are implemented by calling operations in the Q# standard library.

The Pauli type represents an element of the single-qubit Pauli group. The Pauli group on 1 qubit is the 16-element matrix group consisting of the 2 × 2 identity matrix and all of the Pauli matrices. This type has four possible values: PauliI, PauliX, PauliY, and PauliZ.

There are also array and tuple types for creating new, structured types. It is possible to create arrays of tuples, tuples of arrays, tuples of sub-tuples, and so on. Tuple instances are immutable, i.e. the contents of a tuple can't be changed once created. Q# does not include support for rectangular multi-dimensional arrays.

Q# also has user-defined types, which may be used anywhere. It is possible to define an array of a user-defined type and to include a user-defined type as an element of a tuple type.

newtype TypeA = (Int, TypeB);
newtype TypeB = (Double, TypeC);
newtype TypeC = (TypeA, Range);

Operations and Functions

A Q# operation is a quantum subroutine, which means it is a callable routine that contains quantum operations. A Q# function is the traditional subroutine used within a quantum algorithm; it contains no quantum operations. You may pass operations or qubits to functions for processing; however, functions can't allocate or borrow qubits or call operations. Operations and functions are together known as callables.

A functor in Q# is a factory that specifies a new operation from another operation. An important feature of functors is that they have access to the implementation of the base operation when defining the implementation of the new operation. This means that functors can perform more complex transformations than classical functions can.

Comments

Comments begin with two forward slashes, //, and continue until the end of the line. A comment may appear anywhere in a Q# source file, including where statements are not valid. However, end-of-line comments in the middle of an expression are not supported, although the expression itself can span multiple lines.

Comments can also begin with three forward slashes, ///. Their contents are treated as documentation for the defined callable or user-defined type when they appear immediately before an operation, function, or type definition.

Namespaces

Q# follows the same rules for namespaces as other .NET languages. Every Q# operation, function, and user-defined type is defined within a namespace. However, Q# does not support nested namespaces.

Control Flow

The control flow constructs are the For-Loop, the Repeat-Until-Success Loop, the Return statement, and the Conditional statement.

For-Loop

Like the traditional for loop, Q# uses the for statement for iteration through an integer range. The statement consists of the keyword for, followed by an identifier, the keyword in, a Range expression, and a statement block.

for (index in 0 .. n-2) {
    set results[index] = Measure([PauliX], [qubits[index]]);
}

Repeat-until-success Loop

The repeat statement supports the quantum "repeat until success" pattern. It consists of the keyword repeat, followed by a statement block (the loop body), the keyword until, a Boolean expression, the keyword fixup, and another statement block (the fixup).

using ancilla = Qubit[1] {
    repeat {
        let anc = ancilla[0];
        H(anc);
        T(anc);
        CNOT(target, anc);
        H(anc);
        let result = M([anc], [PauliZ]);
    } until result == Zero
    fixup {
        ();
    }
}

The Conditional statement

Similar to the if-then conditional statement in most programming languages, the if statement in Q# supports conditional execution. It consists of the keyword if, followed by a Boolean expression and a statement block (the then block). This may be followed by any number of else-if clauses, each of which consists of the keyword elif, followed by a Boolean expression and a statement block (the else-if block).

if (result == One) {
    X(target);
} else {
    Z(target);
}

Return Statement

The return statement ends execution of an operation or function and returns a value to the caller. It consists of the keyword return, followed by an expression of the appropriate type, and a terminating semicolon.

return 1;
return ();
return (results, qubits);

File Structure

A Q# file consists of one or more namespace declarations. Each namespace declaration contains definitions for user-defined types, operations, and functions.

You can download the Quantum Development Kit here. You can learn more about the features of the Q# language here.


Microsoft Ignite 2018: Highlights from day 1

Savia Lobo
25 Sep 2018
7 min read
Microsoft Ignite 2018 got started yesterday, on the 24th of September 2018, in Orlando, Florida. The event will run until the 28th of September 2018 and will host more than 26,000 Microsoft developers from more than 100 countries. Day 1 of Microsoft Ignite was full of exciting news and announcements, including Microsoft Authenticator, AI-enabled updates to Microsoft 365, and much more! Let's take a look at some of the most important announcements from Orlando.

Microsoft puts an end to passwords via its Microsoft Authenticator app

Microsoft security helps protect hundreds of thousands of line-of-business and SaaS apps as they connect to Azure AD. Microsoft plans to deliver new support for password-less sign-in to Azure AD-connected apps via Microsoft Authenticator. The Microsoft Authenticator app replaces your password with a more secure multi-factor sign-in that combines your phone and your fingerprint, face, or PIN. Using a multi-factor sign-in method, users can reduce the risk of compromise by 99.9%. Not only is it more secure, but it also improves the user experience by eliminating passwords. The age of the password might be reaching its end, thanks to Microsoft.

Azure IoT Central is now generally available

Microsoft announced the public preview of Azure IoT Central in December 2017. At Ignite yesterday, Azure made IoT Central generally available. Azure IoT Central is a fully managed software-as-a-service (SaaS) offering that enables customers and partners to provision an IoT solution in seconds. Users can customize it in just a few hours and go to production the same day, all without requiring any cloud solution development expertise. Azure IoT Central is built on the hyperscale, enterprise-grade services provided by Azure IoT. In theory, it should match the security and scalability needs of Azure users.

Microsoft has also collaborated with MultiTech, a leading provider of communications hardware for the Internet of Things, to integrate IoT Central functionality into the MultiConnect Conduit programmable gateway. This integration enables out-of-the-box connectivity from Modbus-connected equipment directly into IoT Central, for unparalleled simplicity from proof of concept through wide-scale deployments. To know more about Azure IoT Central, visit its blog.

Microsoft Azure introduces Azure Digital Twins, the next evolution in IoT

Azure Digital Twins allows customers and partners to create a comprehensive digital model of any physical environment, including people, places, and things, as well as the relationships and processes that bind them. Azure Digital Twins uses Azure IoT Hub to connect the IoT devices and sensors that keep the digital model up to date with the physical world. This enables two powerful capabilities:

- Users can respond to changes in the digital model in an event-driven and serverless way to implement business logic and workflows for the physical environment. For instance, in a conference room, when a presentation is started in PowerPoint, the environment could automatically dim the lights and lower the blinds. After the meeting, when everyone has left, the lights are turned off and the air conditioning is lowered.
- Azure Digital Twins also integrates seamlessly with Azure data and analytics services, enabling users to track the past and predict the future of their digital model.

Azure Digital Twins will be available for preview on October 15 with additional capabilities. To know more, visit its webpage.

Azure Sphere, a solution for creating highly secure MCU devices

To help organizations seize connected-device opportunities while meeting the challenge of IoT risks, Microsoft developed Azure Sphere, a solution for creating highly secure MCU devices. At Ignite 2018, Microsoft announced that Azure Sphere development kits are universally available and that the Azure Sphere OS, the Azure Sphere Security Service, and the Visual Studio development tools have entered public preview. Together, these tools provide everything needed to start prototyping new products and experiences with Azure Sphere. Azure Sphere allows manufacturers to build highly secure, internet-enabled MCU devices that stay protected even in an evolving threat landscape. Azure Sphere's unique mix of three components works in unison to reduce risk, no matter how the threats facing organizations change:

- The Azure Sphere MCU includes built-in, hardware-based security.
- The purpose-built Azure Sphere OS adds a four-layer defense-in-depth software environment.
- The Azure Sphere Security Service renews security to protect against new and emerging threats.

Adobe, Microsoft, and SAP announced the Open Data Initiative

At the Ignite conference, the CEOs of Adobe, Microsoft, and SAP introduced an Open Data Initiative to help companies connect, understand, and use all their data to create amazing experiences for their customers with AI. Together, the three long-standing partners are reimagining customer experience management (CXM) by empowering companies to derive more value from their data and deliver world-class customer experiences in real time. The Open Data Initiative is based on three guiding principles:

- Every organization owns and maintains complete, direct control of all their data.
- Customers can enable AI-driven business processes to derive insights and intelligence from unified behavioral and operational data.
- A broad partner ecosystem should be able to easily leverage an open and extensible data model to extend the solution.

Microsoft now lets businesses rent a virtual Windows 10 desktop in Azure

Until now, virtual Windows 10 desktops were the domain of third-party service providers; from now on, Microsoft itself will offer these desktops. The company says this is the first time users will get a multiuser virtualized Windows 10 desktop in the cloud. Many employees don't always work from the same desktop or laptop. This virtualized solution allows organizations to offer them a full Windows 10 desktop in the cloud, with all the Office apps they know, without the cost of provisioning and managing a physical machine.

A universal search feature across Bing and Office.com

Microsoft announced that it is rolling out a universal search feature across Bing and Office.com. The search feature will later be supported in Edge, Windows, and Office. It will be able to index internal documents to make it easier to find files. Search is being moved to a prominent and consistent place across the apps that are used every day, whether Outlook, PowerPoint, Excel, Teams, and so on. Also, personalized results will appear in the search box so that users can see documents that they worked on recently. Here's a short video on the universal search feature.

https://youtu.be/mtjJdltMoWU

New AutoML capabilities in Azure Machine Learning service

Microsoft also announced new capabilities for its Azure Machine Learning service, a technology that allows anyone to build and train machine learning models to make predictions from data. These models can then be deployed anywhere: in the cloud, on-premises, or at the edge. At the center of the update is automated machine learning, an AI capability that automatically selects, tests, and tweaks the machine learning models that power many of today's AI systems. The capability is aimed at making AI development accessible to a broader set of customers.

Preview announcement of SQL Server 2019

Microsoft announced the first public preview of SQL Server 2019 at Ignite 2018. With this new release of SQL Server, businesses will be able to manage their relational and non-relational data workloads in a single database management system. A few expectations for SQL Server 2019 include:

- Microsoft SQL Server 2019 will run either on-premises or on the Microsoft Azure stack.
- Microsoft announced the Azure SQL Database Managed Instance, which will allow businesses to port their databases to the cloud without any code changes.
- Microsoft announced new database connectors that will allow organizations to integrate SQL Server with other databases such as Oracle, Cosmos DB, MongoDB, and Teradata.

To know more about SQL Server 2019, read 'Microsoft announces the first public preview of SQL Server 2019 at Ignite 2018'.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure Functions 2.0 launches with better workload support for serverless
Microsoft, Adobe and SAP announce Open Data Initiative, a joint vision to reimagine customer experience, at Ignite 2018


DeepMind's AI uses reinforcement learning to defeat humans in multiplayer games

Savia Lobo
03 Jun 2019
3 min read
Recently, researchers from DeepMind released research in which they designed AI agents that can team up to play Quake III Arena's Capture the Flag mode. The highlight of this research is that these agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

We have previously seen instances of an AI agent beating humans in video games like StarCraft II and Dota 2. However, those games did not involve agents playing in a complex environment or require teamwork and interaction between multiple players.

In their research paper, "Human-level performance in 3D multiplayer games with population-based reinforcement learning", a group of 30 AI agents were collectively trained to play five-minute rounds of Capture the Flag, a game mode in which teams must retrieve flags from their opponents while retaining their own.

https://youtu.be/OjVxXyp7Bxw

While playing the rounds of Capture the Flag, the DeepMind AI was able to outperform human teammates, with its reaction time slowed down to that of a typical human player. Rather than a number of AIs teaming up against a group of human players, as in Dota 2, the AI was able to play alongside them as well. Using reinforcement learning, the AI taught itself the necessary skills, picking up the rules of the game over thousands of matches in randomly generated environments. "No one has told [the AI] how to play the game — only if they've beaten their opponent or not. The beauty of using [an] approach like this is that you never know what kind of behaviors will emerge as the agents learn," said Max Jaderberg, a research scientist at DeepMind who recently worked on AlphaStar, a machine learning system that recently bested a human team of professionals at StarCraft II.

Greg Brockman, a researcher at OpenAI, told The New York Times, "Games have always been a benchmark for A.I. If you can't solve games, you can't expect to solve anything else." According to The New York Times, "such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic."

Talking about limitations, the researchers say, "Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates."

"Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged," the paper states.

To know more about this news in detail, read the official research paper in Science.

OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers
Samsung AI lab researchers present a system that can animate heads with one-shot learning
Amazon is reportedly building a video game streaming service, says Information


Google News' AI revolution strikes balance between personalization and the bigger picture

Richard Gall
10 May 2018
4 min read
Google has launched a major revamp of its news feature at Google I/O 2018. Fifteen years after its launch, Google News is to offer more personalization with the help of AI. Perhaps that's surprising - surely Google has always been using AI across every feature? Well, yes, to some extent. But this update brings artificial intelligence fully into the fold.

It may feel strange talking about AI and news at the moment. Concern over 'echo chambers' and 'fake news' has become particularly pronounced recently. The Facebook and Cambridge Analytica scandal has thrown the spotlight on the relationship between platforms, publishers, and our data. That might explain why Google seems to be trying to counterbalance the move towards greater personalization with a new feature called Full Coverage. Full Coverage has been designed by Google as a means to tackle current concerns around 'echo chambers' and polarization in discourse. Such a move highlights a greater awareness of the impact the platform can have on politics and society. It suggests that by using AI in context, there's a way to get the balance right.

"In order to make it easier to keep up and make sense of [today's constant flow of news and information from different sources and media], we set out to bring our news products into one unified experience," explained Trystan Upstill in a blog post.

Personalizing Google News with AI

By making use of advanced machine learning and AI techniques, Google will now offer you a more personalized way to read the news. With a new 'For You' tab, Google will organize a feed of news based on everything that the search engine knows about you. This will draw on a range of signals, from your browsing habits to your location. "The more you use the app, the better the app gets," Upstill explains.

In a new feature called 'Newscasts', Google News will make use of natural language processing techniques to bring together a wide range of sources on a single topic. It seems strange to think that Google wasn't doing this before, but in actual fact it says a lot about how the platform dictates how we understand the scope of a debate or the way a news cycle is reported and presented. With Newscasts it should be easier to illustrate the sheer range of voices currently out there.

Fundamentally, Google is making its news feature smarter - where previously it relied upon keywords, there is now an added dimension whereby Google's AI algorithms become much more adept at understanding how different news stories evolve, and how different things relate to one another.

https://www.youtube.com/watch?v=wArETCVkS4g

Tackling the impact of personalization

With Full Coverage, Google News will provide a range of perspectives on a given news story. This seems to be a move to directly challenge the increased concern around online 'echo chambers.' Here's what Upstill says:

"Having a productive conversation or debate requires everyone to have access to the same information. That's why content in Full Coverage is the same for everyone—it's an unpersonalized view of events from a range of trusted news sources."

Essentially, it's about ensuring people have access to a broad overview of stories. Of course, Google is here acting a lot like a publisher or curator of news - even when giving a broad picture around a news story, there will still be an element of editorializing (whether human or algorithmic).

However, it nevertheless demonstrates that Google has some awareness of the issues around online discourse and how its artificial intelligence systems can lead to a certain degree of polarization.

It's now easier to subscribe and follow your favorite news sources

The evolution of digital publishing has seen the rise of subscription models for many publishers. But that hasn't always been well-aligned with readers searching Google. Now, however, it will be easier to read and follow your favorite news sources on Google News. Not only will you be able to subscribe to news sources through your Google account, you'll also be able to see paywalled content you're subscribed to in your Google News feed. That will certainly make for a better reading experience.

In turn, this helps Google cement itself as the go-to place for news. Of course, Google could hardly be said to be under threat. But as native applications and social media platforms have come to define the news experience for many readers in recent years, this is a way of Google staking a claim in an area in which it may be ever so slightly vulnerable.


What is Meta Learning?

Sugandha Lahoti
21 Mar 2018
5 min read
Meta learning, originally a concept from cognitive psychology, is now applied to machine learning techniques. If we go by the social psychology definition, meta learning is the state of being aware of and taking control of one's own learning. Applied to machine learning, the concept states that a meta learning algorithm uses prior experience to change certain aspects of an algorithm, such that the modified algorithm is better than the original algorithm. To explain in simple terms, meta learning is how an algorithm learns how to learn.

Meta Learning: Making a versatile AI agent

Current AI systems excel at mastering a single skill: playing Go, holding human-like conversations, predicting a disaster, and so on. However, now that AI and machine learning are being integrated into everyday tasks, we need a single AI system that can solve a variety of problems. Currently, a Go player will not be able to navigate the roads or find new places, and an AI navigation controller won't be able to hold a perfect human-like conversation. What machine learning algorithms need to develop is versatility - the capability of doing many different things.

Versatility is achieved by intelligently combining meta learning with related techniques such as reinforcement learning (finding suitable actions to maximize a reward), transfer learning (re-purposing a model trained on one task for a second, related task), and active learning (where the learning algorithm chooses the data it wants to learn from). These different learning techniques provide an AI agent with the brains to do multiple tasks without having to learn every new task from scratch, thereby making it capable of adapting intelligently to a wide variety of new, unseen situations.

Apart from creating versatile agents, recent research also focuses on using meta learning for hyperparameter and neural network optimization, fast reinforcement learning, finding good network architectures, and specific cases such as few-shot image recognition. Using meta learning, AI agents learn how to learn new tasks by reusing prior experience, rather than examining each new task in isolation.

Various approaches to Meta Learning algorithms

A wide variety of approaches come under the umbrella of meta learning. Let's have a quick glance at these algorithms and techniques:

Algorithm Learning (selection)

Algorithm selection, or learning, selects learning algorithms on the basis of characteristics of the instance. For example, say you have a set of ML algorithms (Random Forest, SVM, DNN), data sets as the instances, and the error rate as the cost metric. The goal of algorithm selection is then to predict which machine learning algorithm will have a small error on each data set.

Hyper-parameter Optimization

Many machine learning algorithms have numerous hyper-parameters that can be optimized. The choice of these hyper-parameters determines how well the algorithm learns. A recent paper, "Evolving Deep Neural Networks", provides a meta learning algorithm for optimizing deep learning architectures through evolution.

Ensemble Methods

Ensemble methods usually combine several models or approaches to achieve better predictive performance. There are three basic types: bagging, boosting, and stacked generalization. In bagging, each model runs independently, and the outputs are aggregated at the end without preference for any model. Boosting refers to a group of algorithms that use weighted averages to turn weak learners into stronger learners; boosting is all about "teamwork". Stacked generalization has a layered architecture: each set of base classifiers is trained on a dataset, successive layers receive as input the predictions of the immediately preceding layer, and the output is passed on to the next layer. A single classifier at the topmost level produces the final prediction.

Dynamic bias selection

In dynamic bias selection, we adjust the bias of the learning algorithm dynamically to suit the new problem instance. The performance of a base learner can trigger the need to explore additional hypothesis spaces, normally through small variations of the current hypothesis space. The bias selection can either be a form of data variation or a time-dependent feature.

Inductive Transfer

Inductive transfer describes learning using previous knowledge from related tasks, achieved by transferring meta-knowledge across domains or tasks. The goal here is to incorporate the meta-knowledge into the new learning task, rather than matching meta-features with a meta-knowledge base.

Adding Enhancements to Meta Learning algorithms

Supervised meta-learning: the meta-learner is trained with supervised learning. In supervised learning we have both input and output variables, and the algorithm learns the mapping function from the input to the output.

RL meta-learning: this approach uses standard deep RL techniques to train a recurrent neural network in such a way that the recurrent network can then implement its own reinforcement learning procedure.

Model-agnostic meta-learning: MAML trains over a wide range of tasks for a representation that can be quickly adapted to a new task via a few gradient steps. The meta-learner seeks an initialization that is not only useful for adapting to various problems but can also be adapted quickly; a minimal sketch of this idea follows below.

The ultimate goal of any meta learning algorithm and its variations is to be fully self-referential, meaning it can automatically inspect and improve every part of its own code. A regenerative meta learning algorithm, along the lines of how a lizard regenerates its limbs, would not only blur the distinctions between the variations described above but would also lead to better future performance and versatility of machine learning algorithms.
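To make the MAML idea concrete, here is a minimal sketch of its two-level structure on a toy one-parameter regression family, written in Python with NumPy. The task family, step sizes, and single-step inner loop are illustrative assumptions for this sketch, not the published algorithm:

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # A "task" is a 1-D regression problem y = slope * x
    # (a hypothetical task family chosen for this sketch).
    return rng.uniform(-2.0, 2.0)

def sample_data(slope, n=10):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, slope * x

def mse_grad(w, x, y):
    # Gradient of 0.5 * mean((w*x - y)^2) with respect to the scalar weight w.
    return np.mean((w * x - y) * x)

w = 0.0                  # the meta-initialization being learned
alpha, beta = 0.1, 0.01  # inner and outer step sizes (assumed values)

for step in range(2000):
    meta_grad = 0.0
    for _ in range(5):                              # a meta-batch of tasks
        slope = sample_task()
        x, y = sample_data(slope)                   # support set
        w_adapted = w - alpha * mse_grad(w, x, y)   # inner loop: one gradient step
        xq, yq = sample_data(slope)                 # query set from the same task
        # Outer gradient, chained through the inner step:
        # d(w_adapted)/dw = 1 - alpha * mean(x^2) for this linear model.
        meta_grad += mse_grad(w_adapted, xq, yq) * (1.0 - alpha * np.mean(x * x))
    w -= beta * meta_grad / 5                       # outer loop: update initialization

print("meta-learned initialization:", w)

The point of the sketch is the structure: the inner loop adapts a copy of the parameters to each sampled task, and the outer loop updates the shared initialization by differentiating through that adaptation (here the chain-rule factor is computed by hand because the model is linear; real MAML implementations rely on automatic differentiation).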

Pluribus, an AI bot built by Facebook and CMU researchers, has beaten professionals at six-player no-limit Texas Hold ’Em Poker

Sugandha Lahoti
12 Jul 2019
5 min read
Researchers from Facebook and Carnegie Mellon University have developed an AI bot that has defeated human professionals in six-player no-limit Texas Hold'em poker.

Pluribus defeated pro players in both a "five AIs + one human player" format and a "one AI + five human players" format. Pluribus was tested in 10,000 games against five human players, as well as in 10,000 rounds where five copies of the AI played against one professional. This is the first time an AI bot has beaten top human players in a complex game with more than two players or two teams.

Pluribus was developed by Noam Brown of Facebook AI Research and Tuomas Sandholm of Carnegie Mellon University. Pluribus builds on Libratus, their previous poker-playing AI, which defeated professionals at heads-up Texas Hold'em, a two-player game, in 2017.

Mastering six-player poker is difficult for AI bots, considering the number of possible actions. First, since the game involves six players, it has many more variables, and the bot can't figure out a perfect strategy for each game as it would for a two-player game. Second, poker involves hidden information: a player only has access to the cards that they see. The AI has to take into account how it would act with different cards, so it isn't obvious when it has a good hand.

Brown wrote on a Hacker News thread, "So much of early AI research was focused on beating humans at chess and later Go. But those techniques don't directly carry over to an imperfect-information game like poker. The challenge of hidden information was kind of neglected by the AI community. This line of research really has its origins in the game theory community actually (which is why the notation is completely different from reinforcement learning). Fortunately, these techniques now work really really well for poker."

What went behind Pluribus?

Initially, Pluribus engages in self-play, playing against copies of itself without any data from human or prior AI play used as input. The AI starts from scratch by playing randomly and gradually improves as it determines which actions, and which probability distribution over those actions, lead to better outcomes against earlier versions of its strategy. This self-play produces a strategy for the entire game offline, called the blueprint strategy. Pluribus then improves upon the blueprint strategy by searching for a better strategy in real time for the situations it finds itself in during the game. This online search algorithm can efficiently evaluate its options by searching just a few moves ahead rather than only to the end of the game.

Real-time search

The blueprint strategy in Pluribus was computed using a variant of counterfactual regret minimization (CFR). The researchers used Monte Carlo CFR (MCCFR), which samples actions in the game tree rather than traversing the entire game tree on each iteration. Pluribus plays according to this blueprint strategy only in the first betting round (of four), where the number of decision points is small enough that the blueprint strategy can afford not to use information abstraction and can have many actions in the action abstraction. After the first round, Pluribus instead conducts a real-time search to determine a better, finer-grained strategy for the current situation it is in.

https://youtu.be/BDF528wSKl8

What is astonishing is that Pluribus uses very little processing power and memory: less than $150 worth of cloud computing resources.
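To give a feel for the regret-minimization idea at the heart of CFR, here is a minimal regret-matching sketch in Python for the single-decision game of rock-paper-scissors. This is only an illustration of the principle; Pluribus's actual MCCFR runs over an abstracted multi-round game tree and is far more involved:

import numpy as np

# Payoff matrix for player 1 in rock-paper-scissors:
# PAYOFF[a1, a2] is player 1's payoff; player 2's payoff is the negative.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]], dtype=float)

def strategy_from_regret(regret):
    # Regret matching: play actions in proportion to their positive regret.
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(3, 1.0 / 3.0)

rng = np.random.default_rng(0)
regret1, regret2 = np.zeros(3), np.zeros(3)
strategy_sum1, strategy_sum2 = np.zeros(3), np.zeros(3)

for _ in range(100_000):
    s1 = strategy_from_regret(regret1)
    s2 = strategy_from_regret(regret2)
    strategy_sum1 += s1
    strategy_sum2 += s2
    a1 = rng.choice(3, p=s1)
    a2 = rng.choice(3, p=s2)
    # Accumulate regret: how much better each alternative action
    # would have done against the opponent's sampled action.
    regret1 += PAYOFF[:, a2] - PAYOFF[a1, a2]
    regret2 += -PAYOFF[a1, :] + PAYOFF[a1, a2]

# The average strategy converges toward the Nash equilibrium (1/3, 1/3, 1/3).
print("player 1 average strategy:", strategy_sum1 / strategy_sum1.sum())

Run long enough, both players' average strategies converge toward the uniform Nash equilibrium; this "drive regret to zero through self-play" property is the same principle that CFR scales up, with sampling and abstraction, to full poker.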
The researchers trained the blueprint strategy for Pluribus in eight days on a 64-core server, requiring less than 512 GB of RAM. No GPUs were used. Stassa Patsantzis, a Ph.D. research student, appreciated Pluribus's resource-friendly compute requirements. She commented on Hacker News, "That's the best part in all of this. I'm hoping that there is going to be more of this kind of result, signaling a shift away from Big Data and huge compute and towards well-designed and efficient algorithms." She also noted that this is significantly less compute than that used by ML systems at DeepMind and OpenAI: "In fact, I kind of expect it. The harder it gets to do the kind of machine learning that only large groups like DeepMind and OpenAI can do, the more smaller teams will push the other way and find ways to keep making progress cheaply and efficiently," she added.

Real-life implications

AI bots such as Pluribus give a better understanding of how to build general AI that can cope with multi-agent environments, both with other AI agents and with humans. A six-player AI bot has better implications for the real world because two-player zero-sum interactions (in which one player wins and one player loses) are common in recreational games but very rare in real life. Such AI bots could be used for handling harmful content, dealing with cybersecurity challenges, or managing an online auction or navigating traffic, all of which involve multiple actors and/or hidden information.

Apart from fighting online harm, four-time World Poker Tour title holder Darren Elias, who helped test the program's skills, said Pluribus could spell the end of high-stakes online poker: "I don't think many people will play online poker for a lot of money when they know that this type of software might be out there and people could use it to play against them for money." Poker sites are actively working to detect and root out possible bots. Brown, Pluribus's developer, is optimistic, on the other hand. He says it's exciting that a bot could teach humans new strategies and ultimately improve the game. "I think those strategies are going to start penetrating the poker community and really change the way professional poker is played," he said.

For more information on Pluribus and its workings, read Facebook's blog.

DeepMind's AlphaStar AI agent will soon anonymously play with European StarCraft II players
Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa
OpenAI Five bots destroyed human Dota 2 players this weekend


PostgreSQL 12 progress update

Amrata Joshi
13 May 2019
2 min read
Last week, the team at PostgreSQL released a progress update for the eagerly awaited PostgreSQL 12. This release comes with performance improvements and better server configuration, indexes, recovery parameters, and much more.

This article was updated 05.14.2019 to correct the fact that this was a progress update for PostgreSQL, not a software release.

What's going to be coming in PostgreSQL 12?

Performance

In PostgreSQL 12, Just-in-Time (JIT) compilation will be enabled by default. Memory consumption of COPY and function calls will be reduced, and search performance for multi-byte characters will also be improved.

Server configuration

Updates to server configuration will add the ability to enable/disable cluster checksums using pg_checksums. They should also reduce the default value of autovacuum_vacuum_cost_delay to 2ms and allow time-based server variables to use microseconds.

Indexes in PostgreSQL 12

The speed of btree index insertions should be optimized. The new code will also improve the space-efficiency of page splits, further reduce locking overhead, and give better performance for UPDATEs and DELETEs on indexes with many duplicates.

Recovery parameters

PostgreSQL 12 should also allow recovery parameters to be changed with reload. These parameters include archive_cleanup_command, promote_trigger_file, recovery_end_command, and recovery_min_apply_delay. It should also allow the streaming replication timeout to be set per connection.

OID columns

The special behavior of OID columns will likely be removed, but columns will still be able to be explicitly specified as type OID. Operations on tables that have columns named OID will need to be adjusted.

Data types

The data types abstime, reltime, and tinterval look as though they'll be removed from PostgreSQL 12.

Geometric functions

Geometric functions and operators will be refactored to produce better results than are currently available. The geometric types can be restructured to handle NaN, underflow, overflow, and division by zero.

To learn more about what's likely to be coming to PostgreSQL 12, check out the official announcement.

Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]
How to handle backup and recovery with PostgreSQL 11 [Tutorial]


How Reinforcement Learning works

Pravin Dhandre
14 Nov 2017
5 min read
[box type="note" align="" class="" width=""]This article is an excerpt from a book by Rodolfo Bonnin titled Machine Learning for Developers.[/box] Reinforcement learning is a field that has resurfaced recently, and it has become more popular in the fields of control, finding the solutions to games and situational problems, where a number of steps have to be implemented to solve a problem. A formal definition of reinforcement learning is as follows: "Reinforcement learning is the problem faced by an agent that must learn behavior through trial-and-error interactions with a dynamic environment.” (Kaelbling et al. 1996). In order to have a reference frame for the type of problem we want to solve, we will start by going back to a mathematical concept developed in the 1950s, called the Markov decision process. Markov decision process Before explaining reinforcement learning techniques, we will explain the type of problem we will attack with them. When talking about reinforcement learning, we want to optimize the problem of a Markov decision process. It consists of a mathematical model that aids decision making in situations where the outcomes are in part random, and in part under the control of an agent. The main elements of this model are an Agent, an Environment, and a State, as shown in the following diagram: Simplified scheme of a reinforcement learning process The agent can perform certain actions (such as moving the paddle left or right). These actions can sometimes result in a reward rt, which can be positive or negative (such as an increase or decrease in the score). Actions change the environment and can lead to a new state st+1, where the agent can perform another action at+1. The set of states, actions, and rewards, together with the rules for transitioning from one state to another, make up a Markov decision process. Decision elements To understand the problem, let's situate ourselves in the problem solving environment and look at the main elements: The set of states The action to take is to go from one place to another The reward function is the value represented by the edge The policy is the way to complete the task A discount factor, which determines the importance of future rewards The main difference with traditional forms of supervised and unsupervised learning is the time taken to calculate the reward, which in reinforcement learning is not instantaneous; it comes after a set of steps. Thus, the next state depends on the current state and the decision maker's action, and the state is not dependent on all the previous states (it doesn't have memory), thus it complies with the Markov property. Since this is a Markov decision process, the probability of state st+1 depends only on the current state st and action at: Unrolled reinforcement mechanism The goal of the whole process is to generate a policy P, that maximizes rewards. The training samples are tuples, <s, a, r>.  Optimizing the Markov process Reinforcement learning is an iterative interaction between an agent and the environment. 
The following occurs at each timestep: The process is in a state and the decision-maker may choose any action that is available in that state The process responds at the next timestep by randomly moving into a new state and giving the decision-maker a corresponding reward The probability that the process moves into its new state is influenced by the chosen action in the form of a state transition function Basic RL techniques: Q-learning One of the most well-known reinforcement learning techniques, and the one we will be implementing in our example, is Q-learning. Q-learning can be used to find an optimal action for any given state in a finite Markov decision process. Q-learning tries to maximize the value of the Q-function that represents the maximum discounted future reward when we perform action a in state s. Once we know the Q-function, the optimal action a in state s is the one with the highest Q- value. We can then define a policy π(s), that gives us the optimal action in any state, expressed as follows: We can define the Q-function for a transition point (st, at, rt, st+1) in terms of the Q-function at the next point (st+1, at+1, rt+1, st+2), similar to what we did with the total discounted future reward. This equation is known as the Bellman equation for Q-learning: In practice, we  can think of the Q-function as a lookup table (called a Q-table) where the states (denoted by s) are rows and the actions (denoted by a) are columns, and the elements (denoted by Q(s, a)) are the rewards that you get if you are in the state given by the row and take the action given by the column. The best action to take at any state is the one with the highest reward: initialize Q-table Q observe initial state s while (! game_finished): select and perform action a get reward r advance to state s' Q(s, a) = Q(s, a) + α(r + γ max_a' Q(s', a') - Q(s, a)) s = s' You will realize that the algorithm is basically doing stochastic gradient descent on the Bellman equation, backpropagating the reward through the state space (or episode) and averaging over many trials (or epochs). Here, α is the learning rate that determines how much of the difference between the previous Q-value and the discounted new maximum Q- value should be incorporated.  We can represent this process with the following flowchart: We have successfully reviewed Q-Learning, one of the most important and innovative architecture of reinforcement learning that have appeared in recent. Every day, such reinforcement models are applied in innovative ways, whether to generate feasible new elements from a selection of previously known classes or even to win against professional players in strategy games. If you enjoyed this excerpt from the book Machine learning for developers, check out the book below.
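As a runnable complement to the pseudocode above, here is a minimal tabular Q-learning agent in Python for a toy corridor world. The environment, the +1 reward at the right end, and the hyperparameter values are illustrative assumptions for this sketch:

import numpy as np

# Toy corridor world: states 0..4, start at 0, episode ends at state 4
# with reward +1. Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))      # the Q-table
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

for episode in range(500):
    s = 0
    done = False
    while not done:
        # Epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # The update from the pseudocode above: Bellman target minus current value
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(Q)  # argmax over each row should point right (action 1) in every state

After training, reading the table row by row with argmax recovers the obvious optimal policy (always move right), which mirrors the "best action is the one with the highest Q-value" rule described above.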


Learn about Enterprise Blockchain Development with Hyperledger Fabric

Matt Zand
03 Feb 2021
5 min read
Blockchain technology is gradually making its way among enterprise application developers. One of the main barriers that hinder the pervasive adoption of blockchain technology is the lack of sufficient human capacity, such as system administrators and engineers, to build and manage blockchain applications. Indeed, to be fully qualified as a blockchain specialist, you need interdisciplinary knowledge of information technology and information management. Relative to other well-established technologies like data science, blockchain has more terminology and more complex design architectures. Once you learn how blockchain works, you can pick a platform and start building your applications.

Currently, the most popular platform for building private Distributed Ledger Technology (DLT) applications is Hyperledger Fabric. Under the Hyperledger family, there are several DLTs, tools, and libraries that assist developers and system administrators in building and managing enterprise blockchain applications.

Hyperledger Fabric is an enterprise-grade, distributed ledger platform that offers modularity and versatility for a broad set of industry use cases. Its modular architecture accommodates the diversity of enterprise use cases through plug-and-play components, such as consensus, privacy, and membership services.

Why Hyperledger Fabric?

One of the major highlights of Hyperledger Fabric that sets it apart from other public and private DLTs is its architecture. Specifically, it comes with different components that are meant for blockchain implementations at the enterprise level. A common use case is sharing private data with a subset of members while sharing common transaction data with all members simultaneously. This flexibility in data sharing is made possible via the "channels" feature in Hyperledger Fabric if you need total transaction isolation, and the "private data" feature if you'd like to keep data private while sharing hashes as transaction evidence on the ledger (private data can be shared among "collection" members, or with a specific organization on a need-to-know basis). Here is a good article for an in-depth review of Hyperledger Fabric components.

Currently, there are few resources available that cover Hyperledger Fabric holistically, from the design stage to development to deployment and finally to maintenance. One highly recommended resource is "Blockchain with Hyperledger Fabric", a book by Nitin Gaur and others published by Packt. Its second edition (get it here) is now available on Amazon. For the remainder of this article, I briefly review some of its highlights.

Blockchain with Hyperledger Fabric Book Highlights

Compared with other blockchain books available in the market, the book by Nitin Gaur and others has more pages, which means it covers more practical topics. As a senior Fabric developer, I find the following five major topics of the book very useful; they can be applied by Fabric developers on a daily basis. Here is a good article for those who are new to blockchain development in Hyperledger.

1- Focus on enterprise

I have personally read a few books on Hyperledger from Packt written by Brian Wu, yet I think this book covers more practical enterprise topics than they do. Also, unlike other Packt books on blockchain that are written mostly for educational audiences, this book, in my opinion, is geared more toward readers interested in putting Fabric concepts into practice. Here is a good article for a comprehensive review of blockchain use cases in many industries.

2- Coverage of the Fabric network

Most books on Hyperledger usually draw a line between network administration and smart contract development by covering one in more depth (see this article for details). Indeed, in the previous Fabric books from Packt, I saw more emphasis on Fabric smart contract development than on the network. This book, however, does a good job of covering the Fabric network in more detail.

3- Integration and design patterns

As far as I know, other books on Fabric have not covered design patterns for integrating Fabric into current or legacy systems, so this book does a great job of covering them. Specifically, regarding Fabric integrations, this book discusses the following practical topics:

- Integrating with an existing system of record
- Integrating with an operational data store for blockchain analytics
- Microservice and event-driven architecture
- Resiliency and fault tolerance
- Reliability and availability
- Serviceability

4- DevOps and CI/CD

Almost every enterprise developer is familiar with DevOps and with implementing Continuous Integration (CI) and Continuous Delivery (CD) on containerized applications using Kubernetes or Docker. However, in the previous books I read, there was no discussion of best practices for achieving agility in the Fabric network using DevOps, as covered in this book.

5- Hyperledger Fabric Security

As the cybersecurity landscape changes very fast, this book, being the latest on Hyperledger Fabric in the market, offers good insights into the latest developments and practices in securing Fabric networks and applications.

Other notable topics that caught my attention were a) developing service-layer applications, b) modifying or upgrading a Hyperledger Fabric application, and c) system monitoring and performance.

Overall, I highly recommend this book to those who are serious about mastering Hyperledger Fabric. Indeed, if you learn and put into practice most of the topics and concepts covered in this book, you will earn the badge of a Hyperledger Fabric specialist.

CES 2019: Top announcements made so far

Sugandha Lahoti
07 Jan 2019
3 min read
CES 2019, the annual consumer electronics show in Las Vegas will go from Tuesday, Jan. 8 through Friday, Jan. 11. However, the conference has unofficially kicked off on Sunday, January 6, followed by press conferences on Monday, Jan. 7. Over the span of these two days, a lot of companies showcased their latest projects and announced new products, software, and services. Let us look at the key announcements made by prominent tech companies so far. Nvidia Nvidia CEO Jensen Huang unveiled some "amazing new technology innovations." First, they announced that over 40 new laptop models in 100-plus configurations will be powered by NVIDIA GeForce RTX GPUs. Turing-based laptops will be available across the GeForce RTX family — from RTX 2080 through RTX 2060 GPUs, said Huang. Seventeen of the new models will feature Max-Q design. Laptops with the latest GeForce RTX GPUs will also be equipped with WhisperMode, NVIDIA Battery Boost, and NVIDIA G-SYNC. GeForce RTX-powered laptops will be available starting Jan. 29 from the world's top OEMs. Nvidia also announced the first 65-inch 4K HDR gaming display that will arrive in February for $4,999. LG LG Electronics, which have a major press release today, has already confirmed a variety of their new products. These include the release of LG's 2019 TVs with Alexa and Google Assistant support, 8K OLED, full HDMI 2.1 support and more. Also includes, LG CineBeam Laser 4K projector for voice control, new sound bars included with Dolby Atmos and Google Assistant and LG Gram 17 and new 14-inch 2-in-1. Samsung Samsung announced that their Smart TVs will be soon equipped with iTunes Movies & TV Shows and will support AirPlay 2 beginning Spring 2019. AirPlay 2 support will be available on Samsung Smart TVs in 190 countries worldwide. Samsung is also launching a new Notebook Odyssey to take PC gaming more seriously posing a threat to competitors Razer and Alienware. HP HP also announced HP Chromebook 14, at CES 2019. It is the world's first AMD-powered Chromebook running on either an AMD A4 or A6 processor with integrated Radeon R4 or R5 graphics. It has 4GB of memory and 32GB of storage and support for Android apps from the Google Play Store. These models will start shipping in January starting at $269. More announcements: Asus launches a new 17-inch, 10-pound Surface Pro gaming laptop, the Asus ROG Mothership. It has also announced Zephyrus S GX701, the smallest and lightest 17-inch gaming laptop yet. Corsair’s impressive compact gaming desktops come with Core i9 chips and GeForce RTX graphics L’Oréal’s newest prototype detects wearers’ skin pH levels Acer’s new Swift 7 will kill the bezel when it launches in May for $1,699. It is one of the thinnest and lightest laptops ever made Audeze’s motion-aware headphones will soon recreate your head gestures in-game Whirlpool is launching a Wear OS app for its connected appliances with simplified voice commands for both Google Assistant and Alexa devices. Vuzix starts selling its AR smart glasses for $1,000 Pico Interactive just revealed the Pico G2 4K, an all-in-one 4K VR headset based-on China’s best-selling VR unit, the Pico G2. It’s incredibly lightweight, powerful and highly customizable for enterprise purposes. Features include kiosk mode, hands-free controls, and hygienic design. You can have a look at all products that will be showcased at CES 2019. 
NVIDIA launches GeForce Now's (GFN) 'recommended router' program to enhance the overall performance and experience of GFN
NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
Uses of Machine Learning in Gaming


Data Scientist: The sexiest role of the 21st century

Aarthi Kumaraswamy
08 Nov 2017
6 min read
"Information is the oil of the 21st century, and analytics is the combustion engine." -Peter Sondergaard, Gartner Research By 2018, it is estimated that companies will spend $114 billion on big data-related projects, an increase of roughly 300%, compared to 2013 (https://www.capgemini-consulting.com/resource-file-access/resource/pdf/big_dat a_pov_03-02-15.pdf). Much of this increase in expenditure is due to how much data is being created and how we are better able to store such data by leveraging distributed filesystems such as Hadoop. However, collecting the data is only half the battle; the other half involves data extraction, transformation, and loading into a computation system, which leverages the power of modern computers to apply various mathematical methods in order to learn more about data and patterns and extract useful information to make relevant decisions. The entire data workflow has been boosted in the last few years by not only increasing the computation power and providing easily accessible and scalable cloud services (for example, Amazon AWS, Microsoft Azure, and Heroku) but also by a number of tools and libraries that help to easily manage, control, and scale infrastructure and build applications. Such a growth in the computation power also helps to process larger amounts of data and to apply algorithms that were impossible to apply earlier. Finally, various computation- expensive statistical or machine learning algorithms have started to help extract nuggets of information from data. Finding a uniform definition of data science is akin to tasting wine and comparing flavor profiles among friends—everyone has their own definition and no one description is more accurate than the other. At its core, however, data science is the art of asking intelligent questions about data and receiving intelligent answers that matter to key stakeholders. Unfortunately, the opposite also holds true—ask lousy questions of the data and get lousy answers! Therefore, careful formulation of the question is the key for extracting valuable insights from your data. For this reason, companies are now hiring data scientists to help formulate and ask these questions. At first, it's easy to paint a stereotypical picture of what a typical data scientist looks like: t- shirt, sweatpants, thick-rimmed glasses, and debugging a chunk of code in IntelliJ... you get the idea. Aesthetics aside, what are some of the traits of a data scientist? One of our favorite posters describing this role is shown here in the following diagram: Math, statistics, and general knowledge of computer science is given, but one pitfall that we see among practitioners has to do with understanding the business problem, which goes back to asking intelligent questions of the data. It cannot be emphasized enough: asking more intelligent questions of the data is a function of the data scientist's understanding of the business problem and the limitations of the data; without this fundamental understanding, even the most intelligent algorithm would be unable to come to solid conclusions based on a wobbly foundation. A day in the life of a data scientist This will probably come as a shock to some of you—being a data scientist is more than reading academic papers, researching new tools, and model building until the wee hours of the morning, fueled on espresso; in fact, this is only a small percentage of the time that a data scientist gets to truly play (the espresso part however is 100% true for everyone)! 
Most of the day, however, is spent in meetings, gaining a better understanding of the business problem(s), crunching the data to learn its limitations (take heart, this book will expose you to a ton of different feature engineering or feature extraction tasks), and learning how best to present the findings to non data-sciencey people. This is where the true sausage-making process takes place, and the best data scientists are the ones who relish this process because they are gaining more understanding of the requirements and benchmarks for success. In fact, we could literally write a whole new book describing this process from top to tail!

So, what (and who) is involved in asking questions about data? Sometimes, it is a process of saving data into a relational database and running SQL queries to find insights into the data: "For the millions of users that bought this particular product, what are the top 3 OTHER products also bought?" Other times, the question is more complex, such as, "Given the review of a movie, is this a positive or negative review?" This book is mainly focused on complex questions like the latter. Answering these types of questions is where businesses really get the most impact from their big data projects, and it is also where we see a proliferation of emerging technologies that look to make this Q and A system easier, with more functionality. Some of the most popular open source frameworks that look to help answer data questions include R, Python, Julia, and Octave, all of which perform reasonably well with small (X < 100 GB) datasets.

At this point, it's worth stopping and pointing out a clear distinction between big versus small data. Our general rule of thumb in the office goes as follows: if you can open your dataset using Excel, you are working with small data.

Working with big data

What happens when the dataset in question is so vast that it cannot fit into the memory of a single computer and must be distributed across a number of nodes in a large computing cluster? Can't we just rewrite some R code, for example, and extend it to account for more than a single-node computation? If only things were that simple! There are many reasons why the scaling of algorithms to more machines is difficult. Imagine a simple example of a file containing a list of names:

    B
    D
    X
    A
    D
    A

We would like to compute the number of occurrences of individual words in the file. If the file fits into a single machine, you can easily compute the number of occurrences by using a combination of the Unix tools sort and uniq:

    bash> sort file | uniq -c

The output is as shown ahead:

    2 A
    1 B
    2 D
    1 X

However, if the file is huge and distributed over multiple machines, it is necessary to adopt a slightly different computation strategy: for example, compute the number of occurrences of individual words for every part of the file that fits into memory, and merge the results together. Hence, even simple tasks, such as counting the occurrences of names, can become more complicated in a distributed environment. (A minimal sketch of this split-and-merge strategy appears after the list below.)

The above is an excerpt from the book Mastering Machine Learning with Spark 2.x by Alex Tellez, Max Pumperla and Michal Malohlava. If you would like to learn how to solve the above problem and other cool machine learning tasks a data scientist carries out, such as the following, check out the book.
- Use Spark streams to cluster tweets online
- Run the PageRank algorithm to compute user influence
- Perform complex manipulation of DataFrames using Spark
- Define Spark pipelines to compose individual data transformations
- Utilize generated models for off-line/on-line prediction
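To make the split-and-merge strategy mentioned above concrete, here is a minimal sketch in plain Python. This is not code from the book: the file name and chunk size are illustrative assumptions, and a real cluster job would run the per-chunk counting in parallel across nodes rather than in a single loop.

    # Count name occurrences in a file too large to load at once:
    # count each memory-sized chunk separately, then merge the results.
    from collections import Counter

    def count_chunk(lines):
        # Partial result for one chunk that fits into memory.
        return Counter(line.strip() for line in lines if line.strip())

    def count_names(path, chunk_size=1_000_000):
        total = Counter()
        chunk = []
        with open(path) as f:
            for line in f:
                chunk.append(line)
                if len(chunk) == chunk_size:
                    total.update(count_chunk(chunk))  # merge partial counts
                    chunk = []
        if chunk:
            total.update(count_chunk(chunk))  # merge the final partial chunk
        return total

    # Hypothetical usage with the names file from the example above:
    # count_names("file")  ->  Counter({'A': 2, 'D': 2, 'B': 1, 'X': 1})

In a genuinely distributed setting, each node would run count_chunk on its own share of the data and only the small partial Counter objects would travel over the network to be merged; this is exactly the map-and-reduce pattern that frameworks such as Spark implement for you.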


Google’s new facial recognition patent uses your social network to identify you!

Melisha Dsouza
10 Aug 2018
3 min read
Google is making its mark in facial recognition technology. After two successful forays into facial identification patents in August 2017 and January 2018, Google is back with another charter. This time it's huge: the new patent plans to use machine-learning technology for facial recognition of publicly available personal photos on the internet. It's no secret that Google can crawl trillions of websites at once. Using this to its advantage, the new patent allows Google to source pictures and identify faces from personal communications, social networks, collaborative apps, blogs, and much more!

Why is facial recognition gaining importance?

The internet is buzzing with people clicking and uploading their images. Whether it be profile pictures or group photographs, images on social networks are all the rage these days. Apart from this, facial recognition also comes in handy while performing secure banking and financial transactions. ATMs and banks use this technology to make sure the user is who they say they are. From criminal tracking to identifying individuals in huge masses of people, facial recognition has applications everywhere!

Clearly, Google has been taking full advantage of this tech. First came the "Reverse Image Search" system, which allowed users to upload an image of a public figure to Google and get back a "best guess" about who appears in the photo. And now, with the new patent, users can identify photos of less famous individuals. Imagine uploading a picture of a fifth-grade friend and coming back with the result of his/her email ID or occupation or, for that matter, where they live!

The Workings of the Google Brain

The process is simple and straightforward:

1. The user uploads a photo, screenshot, or scanned image.
2. The system analyzes the image and comes up with both visually similar images and a potential match using advanced image recognition.
3. Google finds the best possible match based partially on the data it pulls from your social accounts and other collaborative apps, plus the aforementioned data sources.

(A toy sketch of this kind of similarity matching appears at the end of this piece.)

The process of recognizing an image adopted by Google (Source: CBInsights)

While all of this does sound exciting, there is a dark side left to be explored. Imagine you are out going about your own business. Someone you don't even know happens to click your picture. This could later be used to find out all your personal details: where you live, what you do for a living, what your email address is. All because everything is available on your social media accounts and on the internet these days! Creepy much?

This is where basic ethics and privacy concerns come into play. The only solace here is that the patent states that, in certain scenarios, a person would have to opt in to have his/her identity appear in search results.

Need to know more? Check out the perspective on thenextweb.com.

Admiring the many faces of Facial Recognition with Deep Learning
Google's second innings in China: Exploring cloud partnerships with Tencent and others
Google's Smart Display – A push towards the new OS, Fuchsia
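As noted above, here is a toy sketch of the matching step. To be clear, this is not Google's patented system: the four-dimensional "embeddings" and the profile names are invented for illustration. It simply returns the known identity whose face embedding is closest, by cosine similarity, to the embedding of the query photo.

    # Toy nearest-neighbour identity matching over face embeddings.
    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def best_match(query, profiles):
        # profiles maps identity -> embedding, a hypothetical stand-in
        # for data mined from public photos and social accounts.
        return max(profiles, key=lambda name: cosine_similarity(query, profiles[name]))

    # Made-up example data:
    profiles = {
        "alice": np.array([0.9, 0.1, 0.0, 0.3]),
        "bob": np.array([0.1, 0.8, 0.5, 0.0]),
    }
    query = np.array([0.85, 0.15, 0.05, 0.25])
    print(best_match(query, profiles))  # -> alice

A real system would compute embeddings with a trained face-recognition model and search millions of candidates with an approximate nearest-neighbour index, but the ranking idea is the same.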

Best of the Tableau Web: November from What's New

Anonymous
11 Dec 2020
3 min read
Andy Cotgreave, Technical Evangelist Director, Tableau

Hello everyone and welcome to the latest round-up of the Tableau community highlights. I was reminded this month that "success" in analytics is about much more than one's skills in the platform. This month, as always, the community has shared many super tips and tricks to improve your ability to master Tableau, but there has also been a great set of posts on all the other career development "stuff" that you mustn't ignore if you want to succeed.

Judit Bekker's latest post describes how she found a job in analytics. Her story contains great advice for anyone setting out on this wonderful career journey. Ruth Amarteifio from The Information Lab describes how to ask the right questions before embarking on a data project. Believe me, these are great questions that I wish I had known before I started my career in analytics. Helping grow a community is a great way to develop your network and open yourself to new opportunities. What better way than starting a Tableau User Group? Interworks has a great list of ideas to inspire you and help you get started on the right path. If those aren't enough, then you must head to Adam Mico's blog, where he curated reflections from 129 different people (!) from the Tableau community. There are so many great stories here. Read them all or dip into a few, and you'll find ideas to help you build your own career in analytics, regardless of which tool or platform you end up using.

As always, enjoy the list! Follow me on Twitter and LinkedIn as I try and share many of these throughout the month. Also, you can check out which blogs I am following here. If you don't see yours on the list, you can add it here.

Tips and tricks
- Andy Kriebel: #TableauTipTuesday: How to Sort a Chart with a Parameter Action
- Luke Stanke: Beyond Dual Axis: Using Multiple Map Layers to create next-level visualizations in Tableau
- Marc Reid: Tableau Map Layers Inspiration
- Adam Mico: The #DataFam: 128 Authors From 21 Countries Spanning Six Continents Share Their 2020 Tableau…
- Bridget Cogley: Data Viz Philosophy: Better than Bar Charts
- Adam McCann: Layering Multiple Charts in Tableau 2020.4
- Mark Bradbourne: Real World Fake Data – Human Resources Recap
- Lindsay Betzendahl: Visualizing COVID-19: March's #ProjectHealthViz and the Impact of Pushing Boundaries

Formatting, Design, Storytelling
- Evelina Judeikyte: Three Design Principles for More Effective Dashboards
- Ken Flerlage: Creating a Basic Beeswarm Plot in Tableau
- Adam McCann: Animated Buttons and Animated Night/Day Toggle

Prep
- Tom Prowse: Tableau Prep vs Einstein Analytics – Combining and Outputs

Server
- Mark Wu: Difference between 'suspend extract refresh task' vs 'tag stale content' feature

Set and Parameter Actions
- Kevin Flerlage: Dynamically Show & Hide Parameters & Filters based on another Parameter Selection
- Ethan Lang: 3 Essential Ways to Use Dynamic Parameters in Tableau


Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data & Society report

Sugandha Lahoti
19 Sep 2019
3 min read
A new report from Data & Society, authored by researchers Britt Paris and Joan Donovan, argues that the violence of audiovisual (AV) manipulation, namely deepfakes and cheap fakes, cannot be addressed by artificial intelligence alone. It requires a combination of technical and social solutions.

What are deepfakes and cheap fakes?

One form of AV manipulation, executed using experimental machine learning, is the deepfake. Most recently, a terrifyingly realistic deepfake video of Bill Hader transforming into Tom Cruise went viral on YouTube. Facebook creator Mark Zuckerberg also became the target of the world's first high-profile white hat deepfake operation. This video was created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, and in it Zuckerberg appears to give a threatening speech about the power of Facebook.

Read also:
- Now there is a Deepfake that can animate your face with just your voice and a picture.
- Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts.

However, fake videos can also be rendered through Photoshop, lookalikes, re-contextualized footage, speeding, or slowing. This form of AV manipulation is the cheap fake. The researchers coined the term because such fakes rely on cheap, accessible software, or no software at all.

Deepfakes can't be fixed with artificial intelligence alone

The researchers argue that deepfakes, while new, are part of a long history of media manipulation, one that requires both a social and a technical fix. They determine that any response to deepfakes must address structural inequality: the groups most vulnerable to this violence should be able to influence public media systems. The authors say, "Those without the power to negotiate truth–including people of color, women, and the LGBTQA+ community–will be left vulnerable to increased harms."

Researchers worry that AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others. Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life." "It's a massive project, but we need to find solutions that are social as well as political so people without power aren't left out of the equation." This technical fix, the researchers say, must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. "We need to talk about mitigation and limiting harm, not solving this issue. Deepfakes aren't going to disappear." The report states, "There should be 'social' policy solutions that penalize individuals for harmful behavior.
More encompassing solutions should also be formed to enact federal measures on corporations to encourage them to more meaningfully address the fallout from their massive gains." It concludes, "Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise."

Other interesting news in tech:
- $100 million 'Grant for the Web' to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons
- The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
- UK's NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses