
How-To Tutorials - Data


How to use SQLite with Ionic to store data?

Oli Huggins
13 Jun 2016
10 min read
Hybrid mobile apps face the challenging task of being as performant as native apps, but as I always tell other developers, performance depends not on the technology but on how we code. The Ionic Framework is a popular hybrid app development library that uses optimal design patterns to create awe-inspiring mobile experiences. We cannot simply reuse web design patterns to build hybrid mobile apps, and storing data locally on a device is one capability that can make or break the performance of your app. In a web app we may use localStorage to store data, but mobile apps need to store much more data and access it swiftly. localStorage is synchronous, so data access through it is slow. Also, web developers who have experience coding in a backend language such as C#, PHP, or Java usually find it more convenient to access data using SQL queries than through an object-based DB.

SQLite is a lightweight, embedded, relational DBMS used in web browsers and in the web views of hybrid mobile apps. It is similar to the HTML5 WebSQL API and is asynchronous in nature, so it does not block the DOM or any other JS code. Ionic apps can leverage it through an open source Cordova plugin by Chris Brody (@brodybits). We can use this plugin directly, or via the ngCordova library by the Ionic team, which abstracts Cordova plugin calls into AngularJS-based services.

In this blog post, we will create an Ionic app that lets us create Trackers to track any information by storing it at any point in time. We can then use this data to analyze the information and draw it on charts. We will be using the cordova-sqlite-ext plugin and the ngCordova library.

We start by creating a new Ionic app with the blank starter template using the Ionic CLI:

$ ionic start sqlite-sample blank

We should also add the platforms for which we want to build our app:

$ ionic platform add <platform_name>

Since we will be using ngCordova to manage the SQLite plugin from the Ionic app, we now install ngCordova. Run the following bower command to download the ngCordova dependencies into the local bower lib folder:

bower install ngCordova

We need to inject the JS file using a script tag in our index.html:

<script src="lib/ngCordova/dist/ng-cordova.js"></script>

We also need to include the ngCordova module as a dependency in the main module declaration in app.js:

angular.module('starter', ['ionic', 'ngCordova'])

Next, we add the Cordova plugin for SQLite using the CLI command:

cordova plugin add https://github.com/litehelpers/Cordova-sqlite-storage.git

Since we will be using only the $cordovaSQLite service of ngCordova to access this plugin from our Ionic app, we need not inject anything else. We will have the following two views in our Ionic app:

Trackers list: This list shows all the trackers we add to the DB
Tracker details: This view shows the list of data entries made for a specific tracker

We need to create the routes by registering the states for these two views.
We add the following config block for our starter module in the app.js file:

.config(function($stateProvider, $urlRouterProvider) {
  $urlRouterProvider.otherwise('/');
  $stateProvider.state('home', {
    url: '/',
    controller: 'TrackersListCtrl',
    templateUrl: 'js/trackers-list/template.html'
  });
  $stateProvider.state('tracker', {
    url: '/tracker/:id',
    controller: 'TrackerDetailsCtrl',
    templateUrl: 'js/tracker-details/template.html'
  });
});

Both views have similar functionality but display different entities. The first view displays the list of trackers from the SQLite DB table and also provides features to add a new tracker or delete an existing one. Create a new folder named trackers-list where we can store the controller and template for this view. We will also abstract the SQLite DB access code into an Ionic factory, implementing the following methods:

initDB: Initializes the DB and creates the table for this entity if it does not exist
getAllTrackers: Gets all tracker rows from the created table
addNewTracker: Inserts a new row for a new tracker into the table
deleteTracker: Deletes a specific tracker using its ID
getTracker: Gets a specific tracker from the cached list by ID, for display anywhere

We inject the $cordovaSQLite service into our factory to interact with the SQLite DB. We can open an existing DB or create a new one using $cordovaSQLite.openDB("myApp.db"). We store the object reference returned from this call in a module-level variable named db, because we have to pass it to all future $cordovaSQLite service calls. $cordovaSQLite provides a handful of methods:

openDB: Establishes a connection to an existing DB or creates a new one
execute: Executes a single SQL command query
insertCollection: Inserts values in bulk
nestedExecute: Runs nested queries
deleteDB: Deletes a particular DB

We will see the usage of the openDB and execute methods in this post. In our factory, we create a common runQuery method to adhere to the DRY (Don't Repeat Yourself) principle. The code for the runQuery function is as follows:

function runQuery(query, dataParams, successCb, errorCb) {
  $ionicPlatform.ready(function() {
    $cordovaSQLite.execute(db, query, dataParams).then(function(res) {
      successCb(res);
    }, function(err) {
      errorCb(err);
    });
  }.bind(this));
}

In the preceding code, we pass the query as a string, dataParams (dynamic query parameters) as an array, and successCb/errorCb as callback functions. Any Cordova plugin code should run only after Cordova's ready event has fired, which the $ionicPlatform.ready() method ensures. We then call the execute method of the $cordovaSQLite service, passing the db object reference, query, and dataParams as arguments. The method returns a promise, to which we register callbacks using .then, passing the results or error to the success or error callback. Now, we will write the code for each of the methods to initialize the DB, insert a new row, fetch all rows, and delete a row.
initDB method:

function initDB() {
  db = $cordovaSQLite.openDB("myapp.db");
  var query = "CREATE TABLE IF NOT EXISTS trackers_list (id integer primary key autoincrement, name string)";
  runQuery(query, [], function(res) {
    console.log("table created");
  }, function(err) {
    console.log(err);
  });
}

In the preceding code, the openDB method establishes a connection with an existing DB or creates a new one. We then run a query to create a new table named trackers_list if it does not exist, defining an id column with integer primary key autoincrement properties and a name column of type string.

addNewTracker method:

function addNewTracker(name) {
  var deferred = $q.defer();
  var query = "INSERT INTO trackers_list (name) VALUES (?)";
  runQuery(query, [name], function(response) {
    // Success callback
    console.log(response);
    deferred.resolve(response);
  }, function(error) {
    // Error callback
    console.log(error);
    deferred.reject(error);
  });
  return deferred.promise;
}

In the preceding code, we take name as an argument, which is passed into the insert query, adding a new row to the trackers_list table where the ID is auto-generated. We mark dynamic query parameters with the '?' character in the query string; they are replaced by the elements of the dataParams array passed as the second argument to runQuery. We also use the $q library to return a promise from our factory methods so that controllers can manage the asynchronous calls.

getAllTrackers method: This method is the same as the addNewTracker method, only without the name parameter, and it uses the following query:

var query = "SELECT * FROM trackers_list";

This method returns a promise which, when resolved, gives the response from the $cordovaSQLite method. The response object has the following structure:

{
  insertId: <specific_id>,
  rows: { item: function, length: <total_no_of_rows> },
  rowsAffected: 0
}

The insertId property holds the new ID generated for an inserted row, rowsAffected gives the number of rows affected by the query, and rows is an object whose item method returns the row at a given index. We write the following code in the controller to convert the response.rows object into an iterable array of rows to be displayed using the ng-repeat directive:

for (var i = 0; i < response.rows.length; i++) {
  $scope.trackersList.push({
    id: response.rows.item(i).id,
    name: response.rows.item(i).name
  });
}

The code in the template to display the list of trackers is as follows:

<ion-item ui-sref="tracker({id:tracker.id})" class="item-icon-right" ng-repeat="tracker in trackersList track by $index">
  {{tracker.name}}
  <ion-delete-button class="ion-minus-circled" ng-click="deleteTracker($index, tracker.id)">
  </ion-delete-button>
  <i class="icon ion-chevron-right"></i>
</ion-item>

deleteTracker method:

function deleteTracker(id) {
  var deferred = $q.defer();
  var query = "DELETE FROM trackers_list WHERE id = ?";
  runQuery(query, [id], function(response) {
    ... [same code as the addNewTracker method]
}

The deleteTracker method has the same code as the addNewTracker method; the only changes are the query and the argument passed. We pass id as the argument to be used in the WHERE clause of the delete query, deleting the row with that specific ID.

Rest of the Ionic app code: The rest of the app code is not discussed in this post, as we have already covered all the code needed for the SQLite integration.
You can implement your own version of this app or use this sample code for any other use case. The tracker details view can be implemented in the same way, storing data in a tracker_entries table with a foreign key, tracker_id, and using this ID in the SELECT query to fetch the entries for a specific tracker on its detail view. The exact, functioning code for the complete app developed during this tutorial is available on GitHub.

About the author

Rahat Khanna is a techno nerd experienced in developing web and mobile apps for many MNCs and start-ups. He completed his Bachelor of Technology with a specialization in computer science and engineering. Over the last 7 years, he has worked for a multinational IT services company and ran his own entrepreneurial venture in his early twenties. He has worked on projects ranging from static HTML websites to scalable web applications and engaging mobile apps. Alongside his current job as a senior UI developer at Flipkart, a billion-dollar e-commerce firm, he blogs about the latest technology frameworks on sites such as www.airpair.com and appsonmob.com, and delivers talks at community events. He has been helping individual developers and start-ups deliver amazing mobile apps with their Ionic projects.


Amazon joins NSF in funding research exploring fairness in AI amidst public outcry over big tech #ethicswashing

Sugandha Lahoti
27 Mar 2019
5 min read
Close on the heels of Stanford's HAI Institute (which, mind you, received public backlash for its non-representative faculty makeup), Amazon is collaborating with the National Science Foundation (NSF) to develop systems based on fairness in AI. The company and the NSF will each be investing $10M in artificial intelligence research grants over a three-year period. The official announcement was made by Prem Natarajan, VP of natural understanding in the Alexa AI group, who wrote in a blog post, "With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry. Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers' trust."

Per the blog post, Amazon will collaborate with the NSF to build trustworthy AI systems that address modern challenges. They will explore topics of transparency, explainability, accountability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity. Proposals will be accepted from March 26 until May 10 and are expected to result in new open source tools, publicly available datasets, and publications. The two organizations plan to continue the program with calls for additional proposals in 2020 and 2021. There will be 6 to 9 awards of type Standard Grant or Continuing Grant, with an award size of $750,000 up to a maximum of $1,250,000 for periods of up to 3 years. The anticipated funding amount is $7,600,000.

"We are excited to announce this new collaboration with Amazon to fund research focused on fairness in AI," said Jim Kurose, NSF's head for Computer and Information Science and Engineering. "This program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning."

The insidious nexus of private funding in public research: what does Amazon gain from its collab with NSF?

Amazon's foray into fairness systems looks more like a publicity stunt than a genuine effort to eliminate AI bias. For starters, Amazon said that it will not be making the award determinations for this project; the NSF alone will make awards in accordance with its merit review process. However, Amazon said that its researchers may be involved with the projects as advisors, but only at the request of an awardee, or of the NSF with the awardee's consent. As advisors, Amazon may host student interns who wish to gain further industry experience, which seems a bit dicey. Amazon will also not participate in the review process or receive proposal information; the NSF will share with Amazon only the summary-level information necessary to evaluate the program, specifically the number of proposal submissions, the number of submitting organizations, and the numbers rated across the various review categories. There was also the question of who exactly is funding what, since section VII.B of the solicitation states: "Individual awards selected for joint funding by NSF and Amazon will be funded through separate NSF and Amazon funding instruments."
https://twitter.com/nniiicc/status/1110335108634951680
https://twitter.com/nniiicc/status/1110335004989521920

Nic Weber, the author of the above tweets and an Assistant Professor at the UW iSchool, also raises another important question: "Why does Amazon get to put its logo on a national solicitation (for a paltry $7.6 million dollars in basic research) when it profits in the multi-billions off of AI that is demonstrably unfair and harmful." Twitter was abundant with tweets from those working in tech questioning Amazon's collaboration.

https://twitter.com/mer__edith/status/1110560653872373760
https://twitter.com/patrickshafto/status/1110748217887649793
https://twitter.com/smunson/status/1110657292549029888
https://twitter.com/haldaume3/status/1110697325251448833

Amazon has already come under fire for its controversial decisions in the recent past. In June last year, when the US Immigration and Customs Enforcement agency (ICE) began separating migrant children from their parents, Amazon was criticized as one of the tech companies that aided ICE with the software required to do so. Amazon has also faced constant criticism since the news broke that it had sold its facial recognition product, Rekognition, to a number of law enforcement agencies in the U.S. in the first half of 2018. Amazon drew further backlash after a study by the Massachusetts Institute of Technology in January found Rekognition incapable of reliably determining the sex of female and darker-skinned faces in certain scenarios. Amazon has yet to fix this bias anomaly, and yet it has now started a new collaboration with the NSF that, ironically, focuses on building bias-free AI systems. Amazon's Ring (a smart doorbell company) also came under public scrutiny in January after it gave its employees access to live footage from customers' cameras.

In other news, yesterday Google also formed an external AI advisory council to help advance the responsible development of AI. More details here.

Amazon won't be opening its HQ2 in New York due to public protests
Amazon admits that facial recognition technology needs to be regulated
Amazon's Ring gave access to its employees to watch live footage of the customers, The Intercept reports


Creating effective dashboards using Splunk [Tutorial]

Sunith Shetty
28 Jul 2018
10 min read
Splunk makes it easy to develop a powerful analytical dashboard with multiple panels. A dashboard with too many panels, however, will require scrolling down the page and can cause the viewer to miss crucial information. An effective dashboard should generally meet the following conditions:

Single screen view: The dashboard fits in a single window or page, with no scrolling
Multiple data points: Charts and visualizations should display a number of data points
Crucial information highlighted: The dashboard points out the most important information, using appropriate titles, labels, legends, markers, and conditional formatting as required
Created with the user in mind: Data is presented in a way that is meaningful to the user
Loads quickly: The dashboard returns results in 10 seconds or less
Avoids redundancy: The display does not repeat information in multiple places

In this tutorial, we will learn to create different types of dashboards using Splunk. We will also discuss how to gather business requirements for your dashboards.

Types of Splunk dashboards

There are three kinds of dashboards typically created with Splunk:

Dynamic form-based dashboards
Real-time dashboards
Dashboards as scheduled reports

Dynamic form-based dashboards allow Splunk users to modify the dashboard data without leaving the page. This is accomplished by adding data-driven input fields (such as time, radio button, textbox, checkbox, dropdown, and so on) to the dashboard; updating these inputs changes the data based on the selections. Dynamic form-based dashboards have existed in traditional business intelligence tools for decades, so users who frequently work with those tools will be familiar with changing prompt values on the fly to update the dashboard data.

Real-time dashboards are often kept on a big panel screen for constant viewing, simply because they are so useful. You see these dashboards in data centers, network operations centers (NOCs), or security operations centers (SOCs), with constant format and data changing in real time. The dashboard will also have indicators and alerts that help operators easily identify and act on a problem. Dashboards like this typically show the current state of security, network, or business systems, using indicators for web performance and traffic, revenue flow, login failures, and other important measures.

Dashboards as scheduled reports may not be exposed for viewing; instead, the dashboard view is generally saved as a PDF file and sent to email recipients at scheduled times. This format is ideal when you need to send information updates to multiple recipients at regular intervals and don't want to force them to log in to Splunk to capture the information themselves.

We will create the first two types of dashboards, and you will learn how to use the Splunk dashboard editor to develop advanced visualizations along the way.

Gathering business requirements

As a Splunk administrator, one of your most important responsibilities is stewardship of the data. As a custodian of data, a Splunk admin has significant influence over how information is interpreted and presented to users. It is common for the administrator to create the first few dashboards. A more mature implementation, however, requires collaboration to create an output that benefits a variety of user requirements, and this work may be completed by a Splunk development resource with limited administrative rights.
Make it a habit to consistently ask users for input regarding the dashboards and reports Splunk delivers, and about what makes them useful. Sit down with day-to-day users and lay out, on a drawing board for example, the business process flows or system diagrams, to understand how the underlying processes and systems you're trying to measure really work. Look for key phrases like these, which signify what data is most important to the business:

If this is broken, we lose tons of revenue...
This is a constant point of failure...
We don't know what's going on here...
If only I could see the trend, it would make my work easier...
This is what my boss wants to see...

Splunk dashboard users may come from many areas of the business. You want to talk to all the different users, no matter where they are on the organizational chart. When you make friends with the architects, developers, business analysts, and management, you will end up building dashboards that benefit the organization, not just individuals. Once you have an initial dashboard version, ask for users' thoughts as you observe them using it in their work, and ask what can be improved upon, added, or changed. We hope that at this point you realize the importance of dashboards and are ready to get started creating some, as we will do in the following sections.

Dynamic form-based dashboard

In this section, we will create a dynamic form-based dashboard in our Destinations app that allows users to change input values and rerun the dashboard, presenting updated data. Here is a screenshot of the final output of this dynamic form-based dashboard:

Let's begin by creating the dashboard itself and then generate the panels:

1. Go to the search bar in the Destinations app.
2. Run this search command:

SPL> index=main status_type="*" http_uri="*" server_ip="*" | top status_type, status_description, http_uri, server_ip

Be careful when copying commands with quotation marks. It is best to type in the entire search command to avoid problems.

3. Go to Save As | Dashboard Panel.
4. Fill in the information based on the following screenshot:
5. Click on Save.
6. Close the pop-up window that appears (indicating that the dashboard panel was created) by clicking on the X in the top-right corner of the window.

Creating a Status Distribution panel

We will arrange the dashboard after all the panel searches have been generated. Let's go ahead and create the second panel:

1. In the search window, type in the following search command:

SPL> index=main status_type="*" http_uri=* server_ip=* | top status_type

2. Save this as a dashboard panel in the newly created dashboard. In the Dashboard option, click on the Existing button and look for the new dashboard, as seen here. Don't forget to fill in the Panel Title as Status Distribution.
3. Click on Save when you are done, and again close the pop-up window signaling the addition of the panel to the dashboard.

Creating the Status Types Over Time panel

Now, we'll move on to create the third panel:

1. Type in the following search command and be sure to run it so that it is the active search:

SPL> index=main status_type="*" http_uri=* server_ip=* | timechart count by http_status_code

2. Save this as a dashboard panel as well, typing Status Types Over Time in the Panel Title field.
3. Click on Save and close the pop-up window signaling the addition of the panel to the dashboard.

Creating the Hits vs Response Time panel

Now, on to the final panel.
Run the following search command:

SPL> index=main status_type="*" http_uri=* server_ip=* | timechart count, avg(http_response_time) as response_time

Save this dashboard panel as Hits vs Response Time.

Arrange the dashboard

We'll move on to look at the dashboard we've created and make a few changes:

1. Click on the View Dashboard button. If you missed the View Dashboard button, you can find your dashboard by clicking on Dashboards in the main navigation bar.
2. Let's edit the panel arrangement. Click on the Edit button.
3. Move the Status Distribution panel to the upper-right row.
4. Move the Hits vs Response Time panel to the lower-right row.
5. Click on Save to save your layout changes.

Look at the following screenshot. The dashboard framework you've created should now look much like this. The dashboard probably looks a little plainer than you expected it to, but don't worry; we will improve the dashboard visuals one panel at a time.

Panel options in dashboards

In this section, we will learn how to alter the look of our panels and create visualizations. Go to the edit dashboard mode by clicking on the Edit button. Each dashboard panel has three setting options to work with: edit search, select visualization, and visualization format options. They are represented by three drop-down icons:

The Edit Search window allows you to modify the search string, change the time modifier for the search, add auto-refresh and progress bar options, as well as convert the panel into a report.
The Select Visualization dropdown allows you to change the type of visualization to use for the panel, as shown in the following screenshot.
Finally, the Visualization Options dropdown gives you the ability to fine-tune your visualization. These options change depending on the visualization you select. For a normal statistics table, this is how it will look:

Pie chart – Status Distribution

Go ahead and change the Status Distribution visualization panel to a pie chart. You do this by selecting the Select Visualization icon and choosing the Pie icon. Once done, the panel will look like the following screenshot:

Stacked area chart – Status Types Over Time

We will change the view of the Status Types Over Time panel to an area chart. However, by default, area charts are not stacked. We will update this by adjusting the visualization options:

1. Change the Status Types Over Time panel to an Area Chart using the same Select Visualization button as in the prior pie chart exercise.
2. Make the area chart stacked using the Format Visualization icon. In the Stack Mode section, click on Stacked. For Null Values, select Zero. Use the chart that follows for guidance.
3. Click on Apply. The panel will change right away.
4. Remove the _time label, as it is already implied. You can do this in the X-Axis section by setting the Title to None.
5. Close the Format Visualization window by clicking on the X in the upper-right corner.

Here is the new stacked area chart panel:

Column with overlay combination chart – Hits vs Response Time

When representing two or more kinds of data with different ranges, using a combination chart (in this case combining a column and a line) can tell a bigger story than one metric and scale alone.
We'll use the Hits vs Response Time panel to explore the combination charting options:

1. In the Hits vs Response Time panel, change the chart panel visualization to Column.
2. In the Visualization Options window, click on Chart Overlay.
3. In the Overlay selection box, select response_time.
4. Turn on View as Axis.
5. Click on X-Axis from the list of options on the left of the window and change the Title to None.
6. Click on Legend from the list of options on the left.
7. Change the Legend Position to Bottom.
8. Click on the X in the upper-right-hand corner to close the Visualization Options window.

The new panel will now look similar to the following screenshot. From this and the prior screenshot, you can see there was clearly an outage in the overnight hours:

Click on Done to save all the changes you made and exit the Edit mode. The dashboard has now come to life. This is how it should look now:

To summarize, we saw how to create different types of dashboards. To learn more about the core Splunk functionality that transforms machine data into powerful insights, check out the book Splunk 7 Essentials, Third Edition.

Splunk leverages AI in its monitoring tools
Splunk Industrial Asset Intelligence (Splunk IAI) targets Industrial IoT marketplace
Create a data model in Splunk to enable interactive reports and dashboards


Introducing Algorithm Design Paradigms

Packt
18 Nov 2016
10 min read
In this article by David Julian and Benjamin Baka, authors of the book Python Data Structures and Algorithms, we will discern three broad approaches to algorithm design:

Divide and conquer
Greedy algorithms
Dynamic programming

As the name suggests, the divide and conquer paradigm involves breaking a problem into smaller subproblems and then combining the results in some way to obtain a global solution. This is a very common and natural problem-solving technique and is, arguably, the most used approach to algorithm design.

Greedy algorithms often involve optimization and combinatorial problems; the classic example is the traveling salesperson problem, where a greedy approach always chooses the closest destination first. This shortest-path strategy involves finding the best solution to a local problem in the hope that this will lead to a global solution.

The dynamic programming approach is useful when our subproblems overlap. This is different from divide and conquer: rather than breaking our problem into independent subproblems, with dynamic programming, intermediate results are cached and can be used in subsequent operations. Like divide and conquer, it uses recursion. However, dynamic programming allows us to compare results at different stages. This can give it a performance advantage over divide and conquer for some problems, because it is often quicker to retrieve a previously calculated result from memory than to recalculate it.

Recursion and backtracking

Recursion is particularly useful for divide and conquer problems; however, it can be difficult to understand exactly what is happening, since each recursive call is itself spinning off other recursive calls. At the core of a recursive function are two types of cases: base cases, which tell the recursion when to terminate, and recursive cases, which call the function they are in. A simple problem that naturally lends itself to a recursive solution is calculating factorials. The recursive factorial algorithm defines two cases: the base case, when n is zero, and the recursive case, when n is greater than zero. A typical implementation is shown in the following code:

def factorial(n):
    # test for a base case
    if n == 0:
        return 1
    # make a calculation and a recursive call
    f = n * factorial(n - 1)
    print(f)
    return f

factorial(4)

This code prints out the digits 1, 2, 6, 24. To calculate 4!, we require four recursive calls plus the initial parent call. On each recursion, a copy of the method's variables is stored in memory and removed once the method returns: each call is pushed onto the stack until the base case returns 1, and the results are multiplied together as the stack unwinds.

It may not necessarily be clear whether recursion or iteration is the better solution to a particular problem; after all, they both repeat a series of operations, and both are very well suited to divide and conquer approaches to algorithm design. An iteration churns away until the problem is done. Recursion breaks the problem down into smaller chunks and then combines the results. Iteration is often easier for programmers, because control stays local to a loop, whereas recursion can more closely represent mathematical concepts such as factorials. Recursive calls are stored in memory, whereas iterations are not. This creates a tradeoff between processor cycles and memory usage, so choosing which one to use may depend on whether the task is processor- or memory-intensive. The following table outlines the key differences between recursion and iteration.
Recursion                                                         | Iteration
Terminates when a base case is reached                            | Terminates when a defined condition is met
Each recursive call requires space in memory                      | Each iteration is not stored in memory
An infinite recursion results in a stack overflow error          | An infinite iteration will run while the hardware is powered
Some problems are naturally better suited to recursive solutions | Iterative solutions may not always be obvious

Backtracking

Backtracking is a form of recursion that is particularly useful for problems such as traversing tree structures, where we are presented with a number of options at each node, from which we must choose one. Subsequently, we are presented with a different set of options, and depending on the series of choices made, either a goal state or a dead end is reached. If it is the latter, we must backtrack to a previous node and traverse a different branch. Backtracking is a divide and conquer method for exhaustive search. Importantly, backtracking prunes branches that cannot give a result.

An example of backtracking is given by the following. Here, we have used a recursive approach to generate all the possible strings of a given length n built from the characters of a given string s:

def bitStr(n, s):
    if n == 1:
        return s
    return [digit + bits for digit in bitStr(1, s) for bits in bitStr(n - 1, s)]

print(bitStr(3, 'abc'))

This generates the following output:

['aaa', 'aab', 'aac', 'aba', 'abb', 'abc', 'aca', 'acb', 'acc', 'baa', 'bab', 'bac', 'bba', 'bbb', 'bbc', 'bca', 'bcb', 'bcc', 'caa', 'cab', 'cac', 'cba', 'cbb', 'cbc', 'cca', 'ccb', 'ccc']

Note the double list comprehension and the two recursive calls within it. This recursively concatenates each element of the initial sequence, returned when n = 1, with each element of the string generated in the previous recursive call. In this sense, it is backtracking to uncover previously ungenerated combinations. The final list that is returned contains all n-letter combinations of the initial string.

Divide and conquer – long multiplication

For recursion to be more than just a clever trick, we need to understand how to compare it to other approaches, such as iteration, and to understand when its use will lead to a faster algorithm. An iterative algorithm that we are all familiar with is the procedure we learned in primary school math classes, used to multiply two large numbers: long multiplication. If you remember, long multiplication involves iterative multiplying and carry operations followed by shifting and addition operations. Our aim here is to examine ways to measure how efficient this procedure is, and to attempt to answer the question: is this the most efficient procedure we can use for multiplying two large numbers together?

Multiplying two 4-digit numbers together requires 16 multiplication operations, and we can generalize to say that an n-digit number requires approximately n² multiplication operations. This method of analyzing algorithms, in terms of the number of computational primitives such as multiplication and addition, is important because it gives us a way to understand the relationship between the time it takes to complete a certain computation and the size of the input to that computation. In particular, we want to know what happens when the input, the number of digits n, is very large. Can we do better?

A recursive approach

It turns out that in the case of long multiplication the answer is yes; there are in fact several algorithms for multiplying large numbers that require fewer operations. One of the most well-known alternatives to long multiplication is the Karatsuba algorithm, published in 1962.
This takes a fundamentally different approach: rather than iteratively multiplying single-digit numbers, it recursively carries out multiplication operations on progressively smaller inputs. Recursive programs call themselves on smaller subsets of the input. The first step in building a recursive algorithm is to decompose a large number into several smaller numbers. The most natural way to do this is to simply split the number into halves, the first half comprising the most significant digits and the second half comprising the least significant digits. For example, our four-digit number, 2345, becomes a pair of two-digit numbers, 23 and 45. We can write a more general decomposition of any two n-digit numbers x and y using the following, where m is any positive integer less than n:

x = 10^m * a + b
y = 10^m * c + d

So, we can now rewrite our multiplication problem as follows:

xy = (10^m * a + b)(10^m * c + d)

When we expand and gather like terms, we get the following:

xy = 10^(2m) * ac + 10^m * (ad + bc) + bd

More conveniently, we can write it like this (equation 1):

xy = 10^(2m) * z2 + 10^m * z1 + z0

Here, z2 = ac, z1 = ad + bc, and z0 = bd.

It should be pointed out that this suggests a recursive approach to multiplying two numbers, since the procedure itself involves multiplication. Specifically, the products ac, ad, bc, and bd all involve numbers smaller than the input number, so it is conceivable that we could apply the same operation as a partial solution to the overall problem. This algorithm, so far, consists of four recursive multiplication steps, and it is not immediately clear that it will be faster than the classic long multiplication approach.

What we have discussed so far in regard to the recursive approach to multiplication was well known to mathematicians since the late 19th century. The Karatsuba algorithm improves on it by making the following observation: we really only need to know three quantities, z2 = ac, z1 = ad + bc, and z0 = bd, to solve equation 1. We need the values of a, b, c, and d only insofar as they contribute to the sums and products involved in calculating z2, z1, and z0. This suggests the possibility that we can reduce the number of recursive steps, and it turns out that this is indeed the case. Since the products ac and bd are already in their simplest form, it seems unlikely that we can eliminate those calculations. We can, however, make the following observation: when we subtract the quantities ac and bd, which we have calculated in the previous recursive step, from the product (a + b)(c + d), we get the quantity we need, namely ad + bc:

(a + b)(c + d) - ac - bd = ad + bc

This shows that we can indeed compute the sum of ad and bc without separately computing each of the individual quantities. In summary, we can improve on equation 1 by reducing four recursive steps to three:

1. Recursively calculate ac.
2. Recursively calculate bd.
3. Recursively calculate (a + b)(c + d) and subtract ac and bd.
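As a quick sanity check of the three-multiplication identity, here is the 2345 example worked through in a few lines of Python; the second operand, 6789, is just an illustrative choice:

# Check the three-multiplication trick on 2345 * 6789, splitting at m = 2
a, b = 23, 45                        # 2345 = 10**2 * 23 + 45
c, d = 67, 89                        # 6789 = 10**2 * 67 + 89

z2 = a * c                           # first recursive product: ac
z0 = b * d                           # second recursive product: bd
z1 = (a + b) * (c + d) - z2 - z0     # third product recovers ad + bc

result = 10**4 * z2 + 10**2 * z1 + z0
assert result == 2345 * 6789         # both equal 15920205

Only three two-digit multiplications were needed instead of four, and the recursion applies the same trick to each of those products in turn.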
The following code shows a Python implementation of the Karatsuba algorithm:

from math import ceil, log10

def karatsuba(x, y):
    # The base case for recursion
    if x < 10 or y < 10:
        return x * y
    # sets n, the number of digits in the highest input number
    n = max(int(log10(x) + 1), int(log10(y) + 1))
    # rounds up n/2
    n_2 = int(ceil(n / 2.0))
    # adds 1 if n is uneven
    n = n if n % 2 == 0 else n + 1
    # splits the input numbers
    a, b = divmod(x, 10**n_2)
    c, d = divmod(y, 10**n_2)
    # applies the three recursive steps
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba((a + b), (c + d)) - ac - bd
    # performs the multiplication
    return (((10**n) * ac) + bd + ((10**n_2) * (ad_bc)))

To satisfy ourselves that this does indeed work, we can run the following test function:

import random

def test():
    for i in range(1000):
        x = random.randint(1, 10**5)
        y = random.randint(1, 10**5)
        expected = x * y
        result = karatsuba(x, y)
        if result != expected:
            return "failed"
    return "ok"

Summary

In this article, we looked at a way to recursively multiply large numbers, and also at a recursive approach for merge sort. We saw how to use backtracking for exhaustive search and generating strings.

Resources for Article:

Further resources on this subject:
Python Data Structures [article]
How is Python code organized [article]
Algorithm Analysis [article]


Visualizing univariate distribution in Seaborn

Sugandha Lahoti
16 Nov 2017
7 min read
This article is an excerpt from the book Matplotlib 2.x By Example by Allen Chi Shing Yu, Claire Yik Lok Chung, and Aldrin Kay Yuen Yim.

Seaborn, by Michael Waskom, is a statistical visualization library built on top of Matplotlib. It comes with handy functions for visualizing categorical variables, univariate distributions, and bivariate distributions. In this article, we will visualize univariate distributions in Seaborn.

Visualizing univariate distribution

Seaborn makes the task of visualizing the distribution of a dataset much easier. In this example, we are going to use the annual population summary published by the Department of Economic and Social Affairs, United Nations, in 2015. Projected population figures towards 2100 are also included in the dataset. Let's see how the US population distributes across age groups in 2017 by plotting a bar plot:

import seaborn as sns
import matplotlib.pyplot as plt

# Extract USA population data in 2017
current_population = population_df[(population_df.Location == 'United States of America') &
                                   (population_df.Time == 2017) &
                                   (population_df.Sex != 'Both')]

# Population bar chart
sns.barplot(x="AgeGrp", y="Value", hue="Sex", data=current_population)

# Use Matplotlib functions to label axes and rotate tick labels
ax = plt.gca()
ax.set(xlabel="Age Group", ylabel="Population (thousands)")
ax.set_xticklabels(ax.xaxis.get_majorticklabels(), rotation=45)
plt.title("Population Barchart (USA)")

# Show the figure
plt.show()

Bar chart in Seaborn

The seaborn.barplot() function shows a series of data points as rectangular bars. If multiple points per group are available, confidence intervals are shown on top of the bars to indicate the uncertainty of the point estimates. Like most other Seaborn functions, various input data formats are supported, such as Python lists, NumPy arrays, pandas Series, and pandas DataFrames.

A more traditional way to show the population structure is through the use of a population pyramid. So what is a population pyramid? As its name suggests, it is a pyramid-shaped plot that shows the age distribution of a population. Population pyramids can be roughly classified into three classes, namely constrictive, stationary, and expansive, for populations undergoing negative, stable, and rapid growth respectively. For instance, constrictive populations have a lower proportion of young people, so the pyramid base appears to be constricted. Stable populations have a more or less similar number of young and middle-aged groups. Expansive populations, on the other hand, have a large proportion of youngsters, thus resulting in pyramids with enlarged bases.
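The examples in this excerpt assume a pandas DataFrame named population_df holding the UN dataset, with Location, Time, Sex, AgeGrp, and Value columns. Here is a minimal sketch of how it might be loaded; the file name is an assumption, not part of the original excerpt:

import pandas as pd

# Hypothetical file name: the UN population summary ships as a CSV
# with Location, Time, Sex, AgeGrp, and Value columns
population_df = pd.read_csv("WPP2015_population_by_age_and_sex.csv")

With that in place, we can move on to the pyramid itself.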
We can build a population pyramid by plotting two bar charts on two subplots with a shared y axis:

import seaborn as sns
import matplotlib.pyplot as plt

# Extract USA population data in 2017
current_population = population_df[(population_df.Location == 'United States of America') &
                                   (population_df.Time == 2017) &
                                   (population_df.Sex != 'Both')]

# Change the age group to descending order
current_population = current_population.iloc[::-1]

# Create two subplots with shared y-axis
fig, axes = plt.subplots(ncols=2, sharey=True)

# Bar chart for male
sns.barplot(x="Value", y="AgeGrp", color="darkblue", ax=axes[0],
            data=current_population[(current_population.Sex == 'Male')])
# Bar chart for female
sns.barplot(x="Value", y="AgeGrp", color="darkred", ax=axes[1],
            data=current_population[(current_population.Sex == 'Female')])

# Use Matplotlib function to invert the first chart
axes[0].invert_xaxis()
# Use Matplotlib function to show tick labels in the middle
axes[0].yaxis.tick_right()

# Use Matplotlib functions to label the axes and titles
axes[0].set_title("Male")
axes[1].set_title("Female")
axes[0].set(xlabel="Population (thousands)", ylabel="Age Group")
axes[1].set(xlabel="Population (thousands)", ylabel="")
fig.suptitle("Population Pyramid (USA)")

# Show the figure
plt.show()

Since Seaborn is built on top of the solid foundations of Matplotlib, we can customize the plot easily using built-in functions of Matplotlib. In the preceding example, we used matplotlib.axes.Axes.invert_xaxis() to flip the male population plot horizontally, followed by changing the location of the tick labels to the right-hand side using matplotlib.axis.YAxis.tick_right(). We further customized the titles and axis labels for the plot using a combination of matplotlib.axes.Axes.set_title(), matplotlib.axes.Axes.set(), and matplotlib.figure.Figure.suptitle().

Let's try to plot the population pyramids for Cambodia and Japan as well, by changing the line population_df.Location == 'United States of America' to population_df.Location == 'Cambodia' or population_df.Location == 'Japan'. Can you classify the pyramids into one of the three population pyramid classes?

To see how Seaborn simplifies the code for relatively complex plots, let's see how a similar plot can be achieved using vanilla Matplotlib.
First, like the previous Seaborn-based example, we create two subplots with a shared y axis:

fig, axes = plt.subplots(ncols=2, sharey=True)

Next, we plot horizontal bar charts using matplotlib.pyplot.barh(), set the location and labels of the ticks, and adjust the subplot spacing:

# Get a list of tick positions according to the data bins
y_pos = range(len(current_population.AgeGrp.unique()))

# Horizontal bar chart for male
axes[0].barh(y_pos, current_population[(current_population.Sex == 'Male')].Value, color="darkblue")
# Horizontal bar chart for female
axes[1].barh(y_pos, current_population[(current_population.Sex == 'Female')].Value, color="darkred")

# Show a tick for each data point, and label it with the age group
axes[0].set_yticks(y_pos)
axes[0].set_yticklabels(current_population.AgeGrp.unique())

# Increase spacing between subplots to avoid clipping of ytick labels
plt.subplots_adjust(wspace=0.3)

Finally, we use the same code to further customize the look and feel of the figure:

# Invert the first chart
axes[0].invert_xaxis()
# Show tick labels in the middle
axes[0].yaxis.tick_right()

# Label the axes and titles
axes[0].set_title("Male")
axes[1].set_title("Female")
axes[0].set(xlabel="Population (thousands)", ylabel="Age Group")
axes[1].set(xlabel="Population (thousands)", ylabel="")
fig.suptitle("Population Pyramid (USA)")

# Show the figure
plt.show()

When compared to the Seaborn-based code, the pure Matplotlib implementation requires extra lines to define the tick positions, tick labels, and subplot spacing. For some other Seaborn plot types that include extra statistical calculations, such as linear regression or Pearson correlation, the code reduction is even more dramatic. Therefore, Seaborn is a "batteries-included" statistical visualization package that allows users to write less verbose code.

Histogram and distribution fitting in Seaborn

In the population example, the raw data was already binned into different age groups. What if the data is not binned (as with the BigMac Index data)? It turns out that seaborn.distplot can help us process the data into bins and show us a histogram as a result. Let's look at this example:

import seaborn as sns
import matplotlib.pyplot as plt

# Get the BigMac index in 2017
current_bigmac = bigmac_df[(bigmac_df.Date == "2017-01-31")]

# Plot the histogram
ax = sns.distplot(current_bigmac.dollar_price)
plt.show()

The seaborn.distplot function expects either a pandas Series, a single-dimensional numpy.array, or a Python list as input. It then determines the size of the bins according to the Freedman-Diaconis rule, and finally fits a kernel density estimate (KDE) over the histogram. KDE is a non-parametric method used to estimate the distribution of a variable. We can also supply a parametric distribution, such as a beta, gamma, or normal distribution, to the fit argument. In this example, we are going to fit the normal distribution from the scipy.stats package over the Big Mac Index dataset:

from scipy import stats

ax = sns.distplot(current_bigmac.dollar_price, kde=False, fit=stats.norm)
plt.show()

You have now equipped yourself with the knowledge to visualize univariate data in Seaborn as bar charts, histograms, and distribution fittings. To have more fun visualizing data with Seaborn and Matplotlib, check out the book this snippet appears from, Matplotlib 2.x By Example.


Why Intel is betting on BFLOAT16 to be a game changer for deep learning training? Hint: Range trumps Precision.

Vincy Davis
22 Jul 2019
4 min read
A group of researchers from Intel Labs and Facebook have published a paper titled "A Study of BFLOAT16 for Deep Learning Training". The paper presents a comprehensive study indicating the success of the Brain Floating Point (BFLOAT16) half-precision format in deep learning training across image classification, speech recognition, language modeling, generative networks, and industrial recommendation systems. BFLOAT16 has a 7-bit mantissa and an 8-bit exponent; the exponent is the same size as FP32's, so it covers the same range with less precision. BFLOAT16 was originally developed by Google and implemented in its third-generation Tensor Processing Unit (TPU).

https://twitter.com/JeffDean/status/1134524217762951168

Many state-of-the-art training platforms use IEEE-754 half precision or automatic mixed precision as their preferred numeric format for deep learning training. However, these formats fall short in representing error gradients during back propagation, and thus are not able to deliver the required performance gains. BFLOAT16 exhibits a dynamic range that can represent error gradients during back propagation, which enables easier migration of deep learning workloads to BFLOAT16 hardware.

In the paper's comparison table, BFLOAT16 values are represented as trimmed full-precision floating-point values, with 8 bits of mantissa and a dynamic range comparable to FP32. By adopting the BFLOAT16 numeric format, core compute primitives such as the Fused Multiply Add (FMA) can be built using 8-bit multipliers, which leads to a significant reduction in area and power while preserving the full dynamic range of FP32.

How are deep neural networks (DNNs) trained with BFLOAT16?

The paper describes the mixed-precision data flow used to train deep neural networks with the BFLOAT16 numeric format. The BFLOAT16 tensors are taken as input to the core compute kernels, represented as General Matrix Multiply (GEMM) operations, whose output is written to FP32 tensors. The researchers developed a library called Quantlib to implement the emulation in multiple deep learning frameworks. One of the functions of Quantlib is to modify the elements of an input FP32 tensor to echo the behavior of BFLOAT16; it is also used to convert a copy of the FP32 weights to BFLOAT16 for the forward pass. The non-GEMM computations include batch normalization and activation functions. The bias tensors are always maintained in FP32, and an FP32 copy of the weights is kept for the update step to maintain model accuracy.

How does BFLOAT16 perform compared to FP32?

Convolutional neural networks (CNNs) are primarily used for computer vision applications such as image classification, object detection, and semantic segmentation. AlexNet and ResNet-50 are used as the two representative models for the BFLOAT16 evaluation. For AlexNet, the BFLOAT16 emulation tracks the actual FP32 run very closely and achieves 57.2% top-1 and 80.1% top-5 accuracy. For ResNet-50, the BFLOAT16 emulation follows the FP32 baseline almost exactly and achieves the same top-1 and top-5 accuracy.

Similarly, the researchers successfully demonstrated that BFLOAT16 is able to represent tensor values across many application domains, including recurrent neural networks, generative adversarial networks (GANs), and industrial-scale recommendation systems.
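Since BFLOAT16 is effectively the top 16 bits of an FP32 value, the rounding such emulation performs can be sketched in a few lines of NumPy. This is a hedged illustration of round-to-nearest-even truncation, not the paper's actual Quantlib code:

import numpy as np

def fp32_to_bfloat16(x):
    # View the FP32 bit pattern as unsigned 32-bit integers
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Round to nearest even on the 16 mantissa bits being dropped
    # (NaN handling is omitted for brevity)
    rounded = bits + np.uint32(0x7FFF) + ((bits >> 16) & np.uint32(1))
    # Zero out the dropped bits and reinterpret as FP32
    return (rounded & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265], dtype=np.float32)
print(fp32_to_bfloat16(x))   # [3.140625] -- only ~3 significant decimal digits survive

The key point is that the exponent bits are untouched, so the FP32 range is preserved while the mantissa precision drops.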
The researchers thus established that the dynamic range of BFLOAT16 is the same as that of FP32, and that its conversion to/from FP32 is straightforward. Maintaining the same range as FP32 is important because it means no hyperparameter tuning is required for convergence when moving from FP32. (A hyperparameter is a configuration value set before training begins; hyperparameter tuning is the process of choosing an optimal set of them for a learning algorithm.) The researchers of this paper expect to see industry-wide adoption of BFLOAT16 across emerging domains.

Recent reports suggest that Intel is planning to graft Google's BFLOAT16 onto its processors, as well as onto its initial Nervana Neural Network Processor for training, the NNP-T 1000. Pradeep Dubey, who directs the Parallel Computing Lab at Intel and is also one of the researchers on this paper, believes that for deep learning the range of the processor is more important than the precision, which is the inverse of the rationale behind IEEE's floating-point formats.

Users are finding it interesting that a half-precision format like BFLOAT16 is suitable for deep learning applications.

https://twitter.com/kevlindev/status/1152984689268781056
https://twitter.com/IAmMattGreen/status/1152769690621448192

For more details, head over to the "A Study of BFLOAT16 for Deep Learning Training" paper.

Intel's new brain inspired neuromorphic AI chip contains 8 million neurons, processes data 1K times faster
Google plans to remove XSS Auditor used for detecting XSS vulnerabilities from its Chrome web browser
IntelliJ IDEA 2019.2 Beta 2 released with new Services tool window and profiling tools

Implement Named Entity Recognition (NER) using OpenNLP and Java

Pravin Dhandre
22 Jan 2018
5 min read
This article is an excerpt from the book Java for Data Science, written by Richard M. Reese and Jennifer L. Reese. The book provides an in-depth understanding of important tools and proven techniques used across data science projects in a Java environment.

In this article, we are going to show a Java implementation of an Information Extraction (IE) task that identifies what a document is about. Through this task, you will see how to enhance search retrieval and boost the ranking of your document in search results. To begin with, let's understand what Named Entity Recognition (NER) is all about: it refers to classifying elements of a document or a text, such as finding people, locations, and things. Given a text segment, we may want to identify all the names of people present. However, this is not always easy, because a name such as Rob may also be used as a verb.

In this section, we will demonstrate how to use OpenNLP's TokenNameFinderModel class to find names and locations in text. While there are other entities we may want to find, this example will demonstrate the basics of the technique. We begin with names. Most names occur within a single line. We do not want to use multiple lines, because an entity such as a state might inadvertently be identified incorrectly. Consider the following sentences:

Jim headed north. Dakota headed south.

If we ignored the period, then the state of North Dakota might be identified as a location, when in fact it is not present.

Using OpenNLP to perform NER

We start our example with a try-catch block to handle exceptions. OpenNLP uses models that have been trained on different sets of data. In this example, the en-token.bin and en-ner-person.bin files contain the models for the tokenization of English text and for English name elements, respectively. These files can be downloaded from http://opennlp.sourceforge.net/models-1.5/. The IO streams used here are standard Java:

try (InputStream tokenStream = new FileInputStream(new File("en-token.bin"));
     InputStream personModelStream = new FileInputStream(new File("en-ner-person.bin"));) {
    ...
} catch (Exception ex) {
    // Handle exceptions
}

An instance of the TokenizerModel class is initialized using the token stream. This instance is then used to create the actual TokenizerME tokenizer, which we will use to tokenize our sentence:

TokenizerModel tm = new TokenizerModel(tokenStream);
TokenizerME tokenizer = new TokenizerME(tm);

The TokenNameFinderModel class is used to hold a model for name entities. It is initialized using the person model stream. Since we are looking for names, an instance of the NameFinderME class is created using this model:

TokenNameFinderModel tnfm = new TokenNameFinderModel(personModelStream);
NameFinderME nf = new NameFinderME(tnfm);

To demonstrate the process, we will use the following sentence, converting it to a series of tokens using the tokenizer's tokenize method:

String sentence = "Mrs. Wilson went to Mary's house for dinner.";
String[] tokens = tokenizer.tokenize(sentence);

The Span class holds information regarding the positions of entities. The find method returns the position information, as shown here:

Span[] spans = nf.find(tokens);

This array holds information about the person entities found in the sentence. We then display this information as shown here:

for (int i = 0; i < spans.length; i++) {
    out.println(spans[i] + " - " + tokens[spans[i].getStart()]);
}

The output for this sequence is as follows.
Notice that it identifies the last name of Mrs. Wilson, but not the "Mrs.":

[1..2) person - Wilson
[4..5) person - Mary

Once these entities have been extracted, we can use them for specialized analysis.

Identifying location entities

We can also find other types of entities, such as dates and locations. In the following example, we find locations in a sentence. It is very similar to the previous person example, except that an en-ner-location.bin file is used for the model:

try (InputStream tokenStream = new FileInputStream("en-token.bin");
     InputStream locationModelStream = new FileInputStream(
         new File("en-ner-location.bin"))) {
    TokenizerModel tm = new TokenizerModel(tokenStream);
    TokenizerME tokenizer = new TokenizerME(tm);
    TokenNameFinderModel tnfm = new TokenNameFinderModel(locationModelStream);
    NameFinderME nf = new NameFinderME(tnfm);
    sentence = "Enid is located north of Oklahoma City.";
    String[] tokens = tokenizer.tokenize(sentence);
    Span[] spans = nf.find(tokens);
    for (int i = 0; i < spans.length; i++) {
        out.println(spans[i] + " - " + tokens[spans[i].getStart()]);
    }
} catch (Exception ex) {
    // Handle exceptions
}

With the sentence defined previously, the model was only able to find the second city, as shown here. This is likely due to the confusion that arises with the name Enid, which is both the name of a city and a person's name:

[5..7) location - Oklahoma

Suppose we use the following sentence:

sentence = "Pond Creek is located north of Oklahoma City.";

Then we get this output:

[1..2) location - Creek
[6..8) location - Oklahoma

Unfortunately, it has missed the town of Pond Creek. NER is a useful tool for many applications, but like many techniques, it is not always foolproof. The accuracy of the NER approach presented, and of many of the other NLP examples, will vary depending on factors such as the accuracy of the model, the language being used, and the type of entity.

With this, we successfully learnt one of the core tasks of natural language processing using Java and Apache OpenNLP. To know what else you can do with Java in the exciting domain of Data Science, check out the book Java for Data Science.

Mine Popular Trends on GitHub using Python - Part 1

Amey Varangaonkar
26 Dec 2017
11 min read
[box type="note" align="" class="" width=""]This interesting article is an excerpt from the book Python Social Media Analytics, written by Siddhartha Chatterjee and Michal Krystyanczuk. The book contains useful techniques to gain valuable insights from different social media channels using popular Python packages.[/box] In this article, we explore how to leverage the power of Python in order to gather and process data from GitHub and make it analysis-ready. Those who love to code, love GitHub. GitHub has taken the widely used version controlling approach to coding to the highest possible level by implementing social network features to the world of programming. No wonder GitHub is also thought of as Social Coding. We thought a book on Social Network analysis would not be complete without a use case on data from GitHub. GitHub allows you to create code repositories and provides multiple collaborative features, bug tracking, feature requests, task managements, and wikis. It has about 20 million users and 57 million code repositories (source: Wikipedia). These kind of statistics easily demonstrate that this is the most representative platform of programmers. It's also a platform for several open source projects that have contributed greatly to the world of software development. Programming technology is evolving at such a fast pace, especially due to the open source movement, and we have to be able to keep a track of emerging technologies. Assuming that the latest programming tools and technologies are being used with GitHub, analyzing GitHub could help us detect the most popular technologies. The popularity of repositories on GitHub is assessed through the number of commits it receives from its community. We will use the GitHub API in this chapter to gather data around repositories with the most number of commits and then discover the most popular technology within them. For all we know, the results that we get may reveal the next great innovations. Scope and process GitHub API allows us to get information about public code repositories submitted by users. It covers lots of open-source, educational and personal projects. Our focus is to find the trending technologies and programming languages of last few months, and compare with repositories from past years. We will collect all the meta information about the repositories such as: Name: The name of the repository Description: A description of the repository Watchers: People following the repository and getting notified about its activity Forks: Users cloning the repository to their own accounts Open Issues: Issues submitted about the repository We will use this data, a combination of qualitative and quantitative information, to identify the most recent trends and weak signals. The process can be represented by the steps shown in the following figure: Getting the data Before using the API, we need to set the authorization. The API gives you access to all publicly available data, but some endpoints need user permission. You can create a new token with some specific scope access using the application settings. The scope depends on your application's needs, such as accessing user email, updating user profile, and so on. Password authorization is only needed in some cases, like access by user authorized applications. In that case, you need to provide your username or email, and your password. All API access is over HTTPS, and accessed from the https://api.github.com/ domain. All data is sent and received as JSON. 
Rate Limits

The GitHub Search API is designed to help find specific items (repositories, users, and so on). The rate limit policy allows up to 1,000 results for each search. For requests using basic authentication, OAuth, or client ID and secret, you can make up to 30 requests per minute. For unauthenticated requests, the rate limit allows you to make up to 10 requests per minute.

Connection to GitHub

GitHub offers a search endpoint which returns all the repositories matching a query. As we go along, in different steps of the analysis, we will change the value of the variable q (query). In the first part, we will retrieve all the repositories created since January 1, 2017 and then we will compare the results with previous years.

Firstly, we initialize an empty list results which stores all data about repositories. Secondly, we build GET requests with the parameters required by the API. We can only get 100 results per request, so we have to use a pagination technique to build a complete dataset:

import requests

results = []
q = "created:>2017-01-01"

def search_repo_paging(q):
    url = 'https://api.github.com/search/repositories'
    params = {'q': q, 'sort': 'forks', 'order': 'desc', 'per_page': 100}
    while True:
        res = requests.get(url, params=params)
        result = res.json()
        results.extend(result['items'])
        params = {}
        try:
            url = res.links['next']['url']
        except KeyError:
            break

In the first request we have to pass all the parameters to the GET method in our request. Then, we make a new request for every next page, which can be found in res.links['next']['url']. res.links contains a full link to the resources, including all the other parameters. That is why we empty the params dictionary. The operation is repeated until there is no next page key in the res.links dictionary.

For other datasets we modify the search query in such a way that we retrieve repositories from previous years. For example, to get the data from 2015 we define the following query:

q = "created:2015-01-01..2015-12-31"

In order to find proper repositories, the API provides a wide range of query parameters. It is possible to search for repositories with high precision using the system of qualifiers. Starting with the main search parameter q, we have the following options:

sort: Set to forks as we are interested in finding the repositories having the largest number of forks (you can also sort by number of stars or update time)
order: Set to descending order
per_page: Set to the maximum amount of returned repositories

Naturally, the search parameter q can contain multiple combinations of qualifiers.

Data pull

The amount of data we collect through the GitHub API is such that it fits in memory. We can deal with it directly in a pandas dataframe. If more data is required, we would recommend storing it in a database, such as MongoDB. We use JSON tools to convert the results into a clean JSON and to create a dataframe:

from pandas.io.json import json_normalize
import json
import pandas as pd
import bson.json_util as json_util

sanitized = json.loads(json_util.dumps(results))
normalized = json_normalize(sanitized)
df = pd.DataFrame(normalized)

The dataframe df contains columns related to all the results returned by the GitHub API.
We can list them by typing the following:

df.columns

Index(['archive_url', 'assignees_url', 'blobs_url', 'branches_url',
'clone_url', 'collaborators_url', 'comments_url', 'commits_url',
'compare_url', 'contents_url', 'contributors_url', 'default_branch',
'deployments_url', 'description', 'downloads_url', 'events_url',
'fork', 'forks', 'forks_count', 'forks_url', 'full_name',
'git_commits_url', 'git_refs_url', 'git_tags_url', 'git_url',
'has_downloads', 'has_issues', 'has_pages', 'has_projects', 'has_wiki',
'homepage', 'hooks_url', 'html_url', 'id', 'issue_comment_url',
'issue_events_url', 'issues_url', 'keys_url', 'labels_url', 'language',
'languages_url', 'merges_url', 'milestones_url', 'mirror_url', 'name',
'notifications_url', 'open_issues', 'open_issues_count',
'owner.avatar_url', 'owner.events_url', 'owner.followers_url',
'owner.following_url', 'owner.gists_url', 'owner.gravatar_id',
'owner.html_url', 'owner.id', 'owner.login', 'owner.organizations_url',
'owner.received_events_url', 'owner.repos_url', 'owner.site_admin',
'owner.starred_url', 'owner.subscriptions_url', 'owner.type',
'owner.url', 'private', 'pulls_url', 'pushed_at', 'releases_url',
'score', 'size', 'ssh_url', 'stargazers_count', 'stargazers_url',
'statuses_url', 'subscribers_url', 'subscription_url', 'svn_url',
'tags_url', 'teams_url', 'trees_url', 'updated_at', 'url', 'watchers',
'watchers_count', 'year'], dtype='object')

Then, we select a subset of variables which will be used for further analysis. Our choice is based on the meaning of each of them. We skip all the technical variables related to URLs, owner information, or IDs. The remaining columns contain information which is very likely to help us identify new technology trends:

description: A user description of a repository
watchers_count: The number of watchers
size: The size of the repository in kilobytes
forks_count: The number of forks
open_issues_count: The number of open issues
language: The programming language the repository is written in

We have selected watchers_count as the criterion to measure the popularity of repositories. This number indicates how many people are interested in the project. However, we may also use forks_count, which gives us slightly different information about the popularity. The latter represents the number of people who actually worked with the code, so it is related to a different group.

Data processing

In the previous step we structured the raw data, which is now ready for further analysis. Our objective is to analyze two types of data:

Textual data in description
Numerical data in other variables

Each of them requires a different pre-processing technique. Let's take a look at each type in detail.

Textual data

For the first kind, we have to create a new variable which contains a cleaned string. We will do it in three steps which have already been presented in previous chapters:

Selecting English descriptions
Tokenization
Stopwords removal

As we work only on English data, we should remove all the descriptions which are written in other languages. The main reason to do so is that each language requires a different processing and analysis flow. If we left descriptions in Russian or Chinese, we would have very noisy data which we would not be able to interpret. As a consequence, we can say that we are analyzing trends in the English-speaking world.

Firstly, we remove all the empty strings in the description column.
df = df.dropna(subset=['description'])

In order to remove non-English descriptions, we have to first detect what language is used in each text. For this purpose we use a library called langdetect, which is based on the Google language detection project (https://github.com/shuyo/language-detection):

from langdetect import detect

df['lang'] = df.apply(lambda x: detect(x['description']), axis=1)

We create a new column which contains all the predictions. We see different languages, such as en (English), zh-cn (Chinese), vi (Vietnamese), or ca (Catalan):

df['lang']
0 en
1 en
2 en
3 en
4 en
5 zh-cn

In our dataset, en represents 78.7% of all the repositories. We will now select only those repositories with a description in English:

df = df[df['lang'] == 'en']

In the next step, we will create a new clean column with pre-processed textual data. We execute the following code to perform tokenization and remove stopwords:

import string
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords

def clean(text='', stopwords=[]):
    # tokenize
    tokens = word_tokenize(text.strip())
    # lowercase
    clean = [i.lower() for i in tokens]
    # remove stopwords
    clean = [i for i in clean if i not in stopwords]
    # remove punctuation
    punctuations = list(string.punctuation)
    clean = [i.strip(''.join(punctuations)) for i in clean if i not in punctuations]
    return " ".join(clean)

df['clean'] = df['description'].apply(str)  # make sure description is a string
df['clean'] = df['clean'].apply(
    lambda x: clean(text=x, stopwords=stopwords.words('english')))

Finally, we obtain a clean column which contains cleaned English descriptions, ready for analysis:

df['clean'].head(5)
0 roadmap becoming web developer 2017
1 base repository imad v2 course application ple…
2 decrypted content eqgrp-auction-file.tar.xz
3 shadow brokers lost translation leak
4 learn design large-scale systems prep system d...

Numerical data

For numerical data, we will check statistically both what the distribution of values is and whether there are any missing values:

df[['watchers_count', 'size', 'forks_count', 'open_issues']].describe()

We see that there are no missing values in all four variables: watchers_count, size, forks_count, and open_issues. The watchers_count varies from 0 to 20,792, while the minimum number of forks is 33 and goes up to 2,589. The first quartile of repositories has no open issues, while the top 25% have more than 12. It is worth noticing that, in our dataset, there is a repository which has 458 open issues.

Once we are done with the pre-processing of the data, our next step would be to analyze it, in order to get actionable insights from it. If you found this article to be useful, stay tuned for Part 2, where we perform analysis on the processed GitHub data and determine the top trending technologies. Alternatively, you can check out the book Python Social Media Analytics, to learn how to get valuable insights from various social media sites such as Facebook, Twitter and more.

What is an Artificial Neural Network?

Packt
04 Jan 2017
11 min read
In this article by Prateek Joshi, author of the book Artificial Intelligence with Python, we are going to learn about artificial neural networks. We will start with an introduction to artificial neural networks and the installation of the relevant library. We will discuss perceptrons and how to build a classifier based on them. We will learn about single-layer neural networks and multilayer neural networks.

(For more resources related to this topic, see here.)

Introduction to artificial neural networks

One of the fundamental premises of Artificial Intelligence is to build machines that can perform tasks that require human intelligence. The human brain is amazing at learning new things. Why not use the model of the human brain to build a machine? An artificial neural network is a model designed to simulate the learning process of the human brain. Artificial neural networks are designed such that they can identify the underlying patterns in data and learn from them. They can be used for various tasks such as classification, regression, segmentation, and so on. We need to convert any given data into numerical form before feeding it into the neural network. For example, we deal with many different types of data, including visual, textual, time-series, and so on. We need to figure out how to represent problems in a way that can be understood by artificial neural networks.

Building a neural network

The human learning process is hierarchical. We have various stages in our brain's neural network and each stage corresponds to a different granularity. Some stages learn simple things and some stages learn more complex things. Let's consider an example of visually recognizing an object. When we look at a box, the first stage identifies simple things like corners and edges. The next stage identifies the generic shape and the stage after that identifies what kind of object it is. This process differs for different tasks, but you get the idea! By building this hierarchy, our human brain quickly separates the concepts and identifies the given object.

To simulate the learning process of the human brain, an artificial neural network is built using layers of neurons. These artificial neurons are inspired by biological neurons. Each layer in an artificial neural network is a set of independent neurons. Each neuron in a layer is connected to neurons in the adjacent layer.

Training a neural network

If we are dealing with N-dimensional input data, then the input layer will consist of N neurons. If we have M distinct classes in our training data, then the output layer will consist of M neurons. The layers between the input and output layers are called hidden layers. A simple neural network will consist of a couple of layers and a deep neural network will consist of many layers.

Consider the case where we want to use a neural network to classify the given data. The first step is to collect the appropriate training data and label it. Each neuron acts as a simple function and the neural network trains itself until the error goes below a certain value. The error is basically the difference between the predicted output and the actual output. Based on how big the error is, the neural network adjusts itself and retrains until it gets closer to the solution.

You can learn more about neural networks here: http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html. We will be using a library called NeuroLab. You can find more about it here: https://pythonhosted.org/neurolab.
You can install it by running the following command on your Terminal:

$ pip3 install neurolab

Once you have installed it, you can proceed to the next section.

Building a perceptron-based classifier

A perceptron is the building block of an artificial neural network. It is a single neuron that takes inputs, performs computation on them, and then produces an output. It uses a simple linear function to make the decision. Let's say we are dealing with an N-dimensional input datapoint. A perceptron computes the weighted summation of those N numbers and then adds a constant to produce the output. The constant is called the bias of the neuron. It is remarkable to note that these simple perceptrons are used to design very complex deep neural networks.

Let's see how to build a perceptron-based classifier using NeuroLab. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

Load the input data from the text file data_perceptron.txt provided to you. Each line contains space-separated numbers, where the first two numbers are the features and the last number is the label:

# Load input data
text = np.loadtxt('data_perceptron.txt')

Separate the text into datapoints and labels:

# Separate datapoints and labels
data = text[:, :2]
labels = text[:, 2].reshape((text.shape[0], 1))

Plot the datapoints:

# Plot input data
plt.figure()
plt.scatter(data[:,0], data[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Define the maximum and minimum values that each dimension can take:

# Define minimum and maximum values for each dimension
dim1_min, dim1_max, dim2_min, dim2_max = 0, 1, 0, 1

Since the data is separated into two classes, we just need one bit to represent the output. So the output layer will contain a single neuron:

# Number of neurons in the output layer
num_output = labels.shape[1]

We have a dataset where the datapoints are 2-dimensional. Let's define a perceptron with 2 input neurons, where we assign one neuron for each dimension:

# Define a perceptron with 2 input neurons (because we
# have 2 dimensions in the input data)
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
perceptron = nl.net.newp([dim1, dim2], num_output)

Train the perceptron with the training data:

# Train the perceptron using the data
error_progress = perceptron.train(data, labels, epochs=100, show=20, lr=0.03)

Plot the training progress using the error metric:

# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()

The full code is given in the file perceptron_classifier.py. If you run the code, you will get two output figures. The first figure indicates the input datapoints and the second figure represents the training progress using the error metric. As we can observe from the training-error figure, the error goes down to 0 at the end of the fourth epoch.

Constructing a single-layer neural network

A perceptron is a good start, but it cannot do much. The next step is to have a set of neurons act as a unit to see what we can achieve. Let's create a single-layer neural network that consists of independent neurons acting on input data to produce the output. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

Load the input data from the file data_simple_nn.txt provided to you. Each line in this file contains 4 numbers.
The first two numbers form the datapoint and the last two numbers are the labels. Why do we need to assign two numbers for labels? Because we have 4 distinct classes in our dataset, so we need two bits to represent them:

# Load input data
text = np.loadtxt('data_simple_nn.txt')

Separate the data into datapoints and labels:

# Separate it into datapoints and labels
data = text[:, 0:2]
labels = text[:, 2:]

Plot the input data:

# Plot input data
plt.figure()
plt.scatter(data[:,0], data[:,1])
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Extract the minimum and maximum values for each dimension (we don't need to hardcode them like we did in the previous section):

# Minimum and maximum values for each dimension
dim1_min, dim1_max = data[:,0].min(), data[:,0].max()
dim2_min, dim2_max = data[:,1].min(), data[:,1].max()

Define the number of neurons in the output layer:

# Define the number of neurons in the output layer
num_output = labels.shape[1]

Define a single-layer neural network using the above parameters:

# Define a single-layer neural network
dim1 = [dim1_min, dim1_max]
dim2 = [dim2_min, dim2_max]
nn = nl.net.newp([dim1, dim2], num_output)

Train the neural network using the training data:

# Train the neural network
error_progress = nn.train(data, labels, epochs=100, show=20, lr=0.03)

Plot the training progress:

# Plot the training progress
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Training error')
plt.title('Training error progress')
plt.grid()
plt.show()

Define some sample test datapoints and run the network on those points:

# Run the classifier on test datapoints
print('\nTest results:')
data_test = [[0.4, 4.3], [4.4, 0.6], [4.7, 8.1]]
for item in data_test:
    print(item, '-->', nn.sim([item])[0])

The full code is given in the file simple_neural_network.py. If you run the code, you will get two figures: the first represents the input datapoints and the second shows the training progress. The test results will be printed on your Terminal. If you locate those test datapoints on a 2D graph, you can visually verify that the predicted outputs are correct.

Constructing a multilayer neural network

In order to enable higher accuracy, we need to give more freedom to the neural network. This means that a neural network needs more than one layer to extract the underlying patterns in the training data. Let's create a multilayer neural network to achieve that. Create a new Python file and import the following packages:

import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl

In the previous two sections, we saw how to use a neural network as a classifier. In this section, we will see how to use a multilayer neural network as a regressor. Generate some sample datapoints based on the equation y = 3x^2 + 5 and then normalize the points:

# Generate some training data
min_val = -15
max_val = 15
num_points = 130
x = np.linspace(min_val, max_val, num_points)
y = 3 * np.square(x) + 5
y /= np.linalg.norm(y)

Reshape the above variables to create a training dataset:

# Create data and labels
data = x.reshape(num_points, 1)
labels = y.reshape(num_points, 1)

Plot the input data:

# Plot input data
plt.figure()
plt.scatter(data, labels)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Input data')

Define a multilayer neural network with 2 hidden layers. You are free to design a neural network any way you want. For this case, let's have 10 neurons in the first layer and 6 neurons in the second layer.
Our task is to predict the value, so the output layer will contain a single neuron:

# Define a multilayer neural network with 2 hidden layers;
# First hidden layer consists of 10 neurons
# Second hidden layer consists of 6 neurons
# Output layer consists of 1 neuron
nn = nl.net.newff([[min_val, max_val]], [10, 6, 1])

Set the training algorithm to gradient descent:

# Set the training algorithm to gradient descent
nn.trainf = nl.train.train_gd

Train the neural network using the training data that was generated:

# Train the neural network
error_progress = nn.train(data, labels, epochs=2000, show=100, goal=0.01)

Run the neural network on the training datapoints:

# Run the neural network on training datapoints
output = nn.sim(data)
y_pred = output.reshape(num_points)

Plot the training progress:

# Plot training error
plt.figure()
plt.plot(error_progress)
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.title('Training error progress')

Plot the predicted output:

# Plot the output
x_dense = np.linspace(min_val, max_val, num_points * 2)
y_dense_pred = nn.sim(x_dense.reshape(x_dense.size, 1)).reshape(x_dense.size)
plt.figure()
plt.plot(x_dense, y_dense_pred, '-', x, y, '.', x, y_pred, 'p')
plt.title('Actual vs predicted')
plt.show()

The full code is given in the file multilayer_neural_network.py. If you run the code, you will get three figures: the first shows the input data, the second shows the training progress, and the third shows the predicted output overlaid on top of the input data. The predicted output seems to follow the general trend. If you continue to train the network and reduce the error, you will see that the predicted output will match the input curve even more accurately. The training progress will also be printed on your Terminal.

Summary

In this article, we learnt more about artificial neural networks. We discussed how to build and train neural networks. We also talked about perceptrons and built a classifier based on them. We also learnt about single-layer neural networks as well as multilayer neural networks.

Resources for Article:

Further resources on this subject:

Training and Visualizing a neural network with R [article]
Implementing Artificial Neural Networks with TensorFlow [article]
How to do Machine Learning with Python [article]

How TFLearn makes building TensorFlow models easier

Savia Lobo
04 Jun 2018
7 min read
Today, we will introduce you to TFLearn, and will create layers and models which are directly beneficial in any model implementation with TensorFlow. TFLearn is a modular library in Python that is built on top of core TensorFlow.

[box type="note" align="" class="" width=""]This article is an excerpt taken from the book Mastering TensorFlow 1.x written by Armando Fandango. In this book, you will learn how to build TensorFlow models to work with multilayer perceptrons using Keras, TFLearn, and R.[/box]

TIP: TFLearn is different from the TensorFlow Learn package, which is also known as TF Learn (with one space in between TF and Learn). TFLearn is available at the following link, and the source code is available on GitHub.

TFLearn can be installed in Python 3 with the following command:

pip3 install tflearn

Note: To install TFLearn in other environments or from source, please refer to the following link: http://tflearn.org/installation/

The simple workflow in TFLearn is as follows:

1. Create an input layer first.
2. Pass the input object to create further layers.
3. Add the output layer.
4. Create the net using an estimator layer such as regression.
5. Create a model from the net created in the previous step.
6. Train the model with the model.fit() method.
7. Use the trained model to predict or evaluate.

Creating the TFLearn Layers

Let us learn how to create the layers of the neural network models in TFLearn:

1. Create an input layer first:

input_layer = tflearn.input_data(shape=[None, num_inputs])

2. Pass the input object to create further layers:

layer1 = tflearn.fully_connected(input_layer, 10, activation='relu')
layer2 = tflearn.fully_connected(layer1, 10, activation='relu')

3. Add the output layer:

output = tflearn.fully_connected(layer2, n_classes, activation='softmax')

4. Create the final net from the estimator layer such as regression:

net = tflearn.regression(output,
                         optimizer='adam',
                         metric=tflearn.metrics.Accuracy(),
                         loss='categorical_crossentropy')

TFLearn provides several classes for layers that are described in the following sub-sections.

TFLearn core layers

TFLearn offers the following layers in the tflearn.layers.core module:

input_data: This layer is used to specify the input layer for the neural network.
fully_connected: This layer is used to specify a layer where all the neurons are connected to all the neurons in the previous layer.
dropout: This layer is used to specify the dropout regularization. The input elements are scaled by 1/keep_prob while keeping the expected sum unchanged.
custom_layer: This layer is used to specify a custom function to be applied to the input. This class wraps our custom function and presents the function as a layer.
reshape: This layer reshapes the input into the output of the specified shape.
flatten: This layer converts the input tensor to a 2D tensor.
activation: This layer applies the specified activation function to the input tensor.
single_unit: This layer applies the linear function to the inputs.
highway: This layer implements the fully connected highway function.
one_hot_encoding: This layer converts the numeric labels to their binary vector one-hot encoded representations.
time_distributed: This layer applies the specified function to each time step of the input tensor.
multi_target_data: This layer creates and concatenates multiple placeholders, specifically used when the layers use targets from multiple sources.
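To see how a few of these core layers fit together, here is a minimal sketch (the layer sizes and input width are illustrative assumptions, not values from the book):

import tflearn

# A minimal sketch chaining core layers from the list above.
# The 64/32/10 sizes are made-up, illustrative values.
net = tflearn.input_data(shape=[None, 64])
net = tflearn.fully_connected(net, 32, activation='relu')
net = tflearn.dropout(net, 0.8)  # keep 80% of the activations
net = tflearn.fully_connected(net, 10, activation='softmax')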
TFLearn convolutional layers

TFLearn offers the following layers in the tflearn.layers.conv module:

conv_1d: This layer applies 1D convolutions to the input data
conv_2d: This layer applies 2D convolutions to the input data
conv_3d: This layer applies 3D convolutions to the input data
conv_2d_transpose: This layer applies the transpose of conv_2d to the input data
conv_3d_transpose: This layer applies the transpose of conv_3d to the input data
atrous_conv_2d: This layer computes a 2-D atrous convolution
grouped_conv_2d: This layer computes a depth-wise 2-D convolution
max_pool_1d: This layer computes 1-D max pooling
max_pool_2d: This layer computes 2-D max pooling
avg_pool_1d: This layer computes 1-D average pooling
avg_pool_2d: This layer computes 2-D average pooling
upsample_2d: This layer applies the row and column wise 2-D repeat operation
upscore_layer: This layer implements the upscore as specified in http://arxiv.org/abs/1411.4038
global_max_pool: This layer implements the global max pooling operation
global_avg_pool: This layer implements the global average pooling operation
residual_block: This layer implements the residual block to create deep residual networks
residual_bottleneck: This layer implements the residual bottleneck block for deep residual networks
resnext_block: This layer implements the ResNeXt block

TFLearn recurrent layers

TFLearn offers the following layers in the tflearn.layers.recurrent module:

simple_rnn: This layer implements the simple recurrent neural network model
bidirectional_rnn: This layer implements the bi-directional RNN model
lstm: This layer implements the LSTM model
gru: This layer implements the GRU model

TFLearn normalization layers

TFLearn offers the following layers in the tflearn.layers.normalization module:

batch_normalization: This layer normalizes the output of activations of previous layers for each batch
local_response_normalization: This layer implements the LR normalization
l2_normalization: This layer applies the L2 normalization to the input tensors

TFLearn embedding layers

TFLearn offers only one layer in the tflearn.layers.embedding_ops module:

embedding: This layer implements the embedding function for a sequence of integer IDs or floats

TFLearn merge layers

TFLearn offers the following layers in the tflearn.layers.merge_ops module:

merge_outputs: This layer merges the list of tensors into a single tensor, generally used to merge output tensors of the same shape
merge: This layer merges the list of tensors into a single tensor; you can specify the axis along which the merge needs to be done

TFLearn estimator layers

TFLearn offers only one layer in the tflearn.layers.estimator module:

regression: This layer implements linear or logistic regression

While creating the regression layer, you can specify the optimizer and the loss and metric functions.

TFLearn offers the following optimizer functions as classes in the tflearn.optimizers module:

SGD
RMSProp
Adam
Momentum
AdaGrad
Ftrl
AdaDelta
ProximalAdaGrad
Nesterov

Note: You can create custom optimizers by extending the tflearn.optimizers.Optimizer base class.

TFLearn offers the following metric functions as classes or ops in the tflearn.metrics module:

Accuracy or accuracy_op
Top_k or top_k_op
R2 or r2_op
WeightedR2 or weighted_r2_op
binary_accuracy_op

Note: You can create custom metrics by extending the tflearn.metrics.Metric base class.
TFLearn provides the following loss functions, known as objectives, in the tflearn.objectives module:

softmax_categorical_crossentropy
categorical_crossentropy
binary_crossentropy
weighted_crossentropy
mean_square
hinge_loss
roc_auc_score
weak_cross_entropy_2d

While specifying the input, hidden, and output layers, you can specify the activation functions to be applied to the output. TFLearn provides the following activation functions in the tflearn.activations module:

linear
tanh
sigmoid
softmax
softplus
softsign
relu
relu6
leaky_relu
prelu
elu
crelu
selu

Creating the TFLearn Model

Create the model from the net created in the previous step (step 4 in the Creating the TFLearn Layers section):

model = tflearn.DNN(net)

Types of TFLearn models

TFLearn offers two different classes of models:

DNN (Deep Neural Network) model: This class allows you to create a multilayer perceptron from the network that you have created from the layers
SequenceGenerator model: This class allows you to create a deep neural network that can generate sequences

Training the TFLearn Model

After creating the model, train it with the model.fit() method:

model.fit(X_train, Y_train,
          n_epoch=n_epochs,
          batch_size=batch_size,
          show_metric=True,
          run_id='dense_model')

Using the TFLearn Model

Use the trained model to predict or evaluate:

score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

The complete code for the TFLearn MNIST classification example is provided in the notebook ch-02_TF_High_Level_Libraries. The output from the TFLearn MNIST example is as follows:

Training Step: 5499  | total loss: 0.42119 | time: 1.817s
| Adam | epoch: 010 | loss: 0.42119 - acc: 0.8860 -- iter: 54900/55000
Training Step: 5500  | total loss: 0.40881 | time: 1.820s
| Adam | epoch: 010 | loss: 0.40881 - acc: 0.8854 -- iter: 55000/55000
--
Test accuracy: 0.9029

Note: You can get more information about TFLearn from the following link: http://tflearn.org/.

To summarize, we got to know about TFLearn and the different TFLearn layers and models. If you found this post useful, do check out the book Mastering TensorFlow 1.x, to explore advanced features of TensorFlow 1.x, and gain insight into TensorFlow Core, Keras, TF Estimators, TFLearn, TF Slim, Pretty Tensor, and Sonnet.

TensorFlow.js 0.11.1 releases!
How to Build TensorFlow Models for Mobile and Embedded devices
Distributed TensorFlow: Working with multiple GPUs and servers

“The challenge in Deep Learning is to sustain the current pace of innovation”, explains Ivan Vasilev, machine learning engineer

Sugandha Lahoti
13 Dec 2019
8 min read
If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders; the usage, adoption, and experimentation of deep learning have increased exponentially. Especially in the areas of computer vision, speech, and natural language processing and understanding, deep learning has made unprecedented progress. GANs, variational autoencoders, and deep reinforcement learning are also creating impressive AI results.

To know more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python. In this book, he teaches advanced deep learning topics like attention mechanisms, meta-learning, graph neural networks, memory-augmented neural networks, and more using the Python ecosystem. In this interview, he shares his experiences of working on this book, compares TensorFlow and PyTorch, and talks about computer vision, NLP, and GANs.

On why he chose Computer Vision and NLP as two major focus areas of his book

Computer vision and natural language processing are two popular areas where a number of developments are ongoing. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP," he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

"The other reason for focusing on computer vision," he says, "is because of the natural (or human-driven if you wish) progress of deep learning. One of the first modern breakthroughs was in 2012, when a solution based on a convolutional network won the ImageNet competition of that year with a large margin compared to any previous algorithms. Thanks in part to this impressive result, the interest in the field was renewed and brought many other advances, including solving complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism."

On the ongoing battle between TensorFlow and PyTorch

There are two popular machine learning frameworks that are currently at par: TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, and this is why he has included them both in the book. He explains, "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution, and TensorFlow 2.0 now has much better support for eager execution, to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API.
On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency)."

Using Machine Learning in the stock trading process can make markets more efficient

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. "One reason," Ivan states, "is that the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices."

He adds, "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume."

"However," he counters, "using ML in finance could have benefits, besides the obvious (getting rich by trading stocks). The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

GANs can be used for nefarious purposes, but that doesn't warrant discarding them

Ivan has also given special emphasis to generative adversarial networks in his book. Although extremely useful, in recent times GANs have been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and wondered if it is even possible to prevent and counter their misuse or abuse.

Ivan acknowledges that GANs may have unintended outcomes, but that shouldn't be the sole reason to discard them. He says, "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with a similar purpose) because of this. If only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way we can raise the awareness of machine learning and spark an honest debate about its role in our society."

Machine learning can have both intentional and unintentional harmful effects

Awareness and ethics go in parallel. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms.
Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or the unconscious bias of researchers. Although the bias might not be intentional, we still have a responsibility to put in a conscious effort to eliminate it."

Challenges in the machine learning ecosystem

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

"So, in a broader sense, the challenge in front of ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key intelligence areas where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images), compared to structured data (like graphs)."

"Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to give an estimation of such one-time events. But even at the current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives."

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python. In this book you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch. You will also apply deep neural networks to state-of-the-art domains like computer vision problems, NLP, GANs, and more.

Author Bio

Ivan Vasilev started working on the first open source Java deep learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017 he has focused on financial machine learning. He is working on a Python-based platform which provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on LinkedIn and GitHub.

Kaggle's Rachel Tatman on what to do when applying deep learning is overkill
Brad Miro talks TensorFlow 2.0 features and how Google is using it internally
François Chollet, creator of Keras on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning and more

PostgreSQL 12 Beta 1 released

Fatema Patrawala
24 May 2019
6 min read
The PostgreSQL Global Development Group announced yesterday the first beta release of PostgreSQL 12, which is now available for download. This release contains previews of all the features that will be available in the final release of PostgreSQL 12, though some details of the release could still change.

PostgreSQL 12 feature highlights

Indexing Performance, Functionality, and Management

PostgreSQL 12 improves the overall performance of the standard B-tree indexes, with improvements to the space management of these indexes as well. These improvements provide a reduction in index size for B-tree indexes that are frequently modified, in addition to a performance gain.

Additionally, PostgreSQL 12 adds the ability to rebuild indexes concurrently, which lets you perform a REINDEX operation without blocking any writes to the index. This feature should help with lengthy index rebuilds that could otherwise cause downtime when managing a PostgreSQL database in a production environment.

PostgreSQL 12 extends the abilities of several of the specialized indexing mechanisms. The ability to create covering indexes, i.e. the INCLUDE clause that was introduced in PostgreSQL 11, has now been added to GiST indexes. SP-GiST indexes now support the ability to perform K-nearest neighbor (K-NN) queries for data types that support the distance (<->) operation.

The amount of write-ahead log (WAL) overhead generated when creating a GiST, GIN, or SP-GiST index is also significantly reduced in PostgreSQL 12, which provides several benefits to the disk utilization of a PostgreSQL cluster and features such as continuous archiving and streaming replication.

Inlined WITH queries (Common table expressions)

Common table expressions (or WITH queries) can now be automatically inlined in a query if they:

a) are not recursive
b) do not have any side-effects
c) are only referenced once in a later part of a query

This removes an "optimization fence" that has existed since the introduction of the WITH clause in PostgreSQL 8.4.

Partitioning

PostgreSQL 12 improves the performance of processing tables with thousands of partitions for operations that only need to use a small number of partitions. This release also improves the performance of both INSERT and COPY into a partitioned table. ATTACH PARTITION can now be performed without blocking concurrent queries on the partitioned table. Additionally, the ability to use foreign keys to reference partitioned tables is now permitted in PostgreSQL 12.

JSON path queries per SQL/JSON specification

PostgreSQL 12 now allows execution of JSON path queries per the SQL/JSON specification in the SQL:2016 standard. Similar to XPath expressions for XML, JSON path expressions let you evaluate a variety of arithmetic expressions and functions in addition to comparing values within JSON documents. A subset of these expressions can be accelerated with GIN indexes, allowing the execution of highly performant lookups across sets of JSON data.

Collations

PostgreSQL 12 now supports case-insensitive and accent-insensitive comparisons for ICU-provided collations, also known as "nondeterministic collations". When used, these collations can provide convenience for comparisons and sorts, but can also incur a performance penalty, as the collation may need to make additional checks on a string.

Most-common Value Extended Statistics

CREATE STATISTICS, introduced in PostgreSQL 10 to help collect more complex statistics over multiple columns to improve query planning, now supports most-common value (MCV) statistics.
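As a hedged illustration of the syntax (the table and column names here are invented for the example, not taken from the release notes), you could collect MCV statistics over two correlated columns like this:

-- Hypothetical table and columns, for illustration only.
CREATE STATISTICS order_location_stats (mcv) ON city, state FROM orders;

-- Populate the new statistics from a fresh sample of the table.
ANALYZE orders;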
These statistics lead to improved query plans for distributions that are non-uniform.

Generated Columns

PostgreSQL 12 allows the creation of generated columns that compute their values with an expression using the contents of other columns. This feature provides stored generated columns, which are computed on inserts and updates and are saved on disk. Virtual generated columns, which are computed only when a column is read as part of a query, are not implemented yet.

Pluggable Table Storage Interface

PostgreSQL 12 introduces the pluggable table storage interface, which allows for the creation and use of different methods for table storage. New access methods can be added to a PostgreSQL cluster using the CREATE ACCESS METHOD command and subsequently added to tables with the new USING clause on CREATE TABLE. A table storage interface can be defined by creating a new table access method. In PostgreSQL 12, the storage interface that is used by default is the heap access method, which is currently the only built-in method.

Page Checksums

The pg_verify_checksums command has been renamed to pg_checksums and now supports the ability to enable and disable page checksums across a PostgreSQL cluster that is offline. Previously, page checksums could only be enabled during the initialization of a cluster with initdb.

Authentication & Connection Security

GSSAPI now supports client-side and server-side encryption and can be specified in the pg_hba.conf file using the hostgssenc and hostnogssenc record types. PostgreSQL 12 also allows for discovery of LDAP servers based on DNS SRV records if PostgreSQL was compiled with OpenLDAP.

A few noted behavior changes in PostgreSQL 12

There are several changes introduced in PostgreSQL 12 that can affect the behavior as well as the management of your ongoing operations. A few of these are noted below; for other changes, visit the "Migrating to Version 12" section of the release notes.

The recovery.conf configuration file is now merged into the main postgresql.conf file. PostgreSQL will not start if it detects that recovery.conf is present. To put PostgreSQL into a non-primary mode, you can use the recovery.signal and standby.signal files. You can read more about archive recovery here: https://www.postgresql.org/docs/devel/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY

Just-in-Time (JIT) compilation is now enabled by default.

OIDs can no longer be added to user-created tables using the WITH OIDS clause. Operations on tables that have columns created using WITH OIDS (i.e. columns named "OID") will need to be adjusted. Running a SELECT * command on a system table will now also output the OID for the rows in the system table, instead of the old behavior, which required the OID column to be specified explicitly.

Testing for Bugs & Compatibility

The stability of each PostgreSQL release greatly depends on the community testing the upcoming version with its workloads and testing tools, in order to find bugs and regressions before the general availability of PostgreSQL 12. As this is a beta, minor changes to database behaviors, feature details, and APIs are still possible. The PostgreSQL team encourages the community to test the new features of PostgreSQL 12 in their database systems to help eliminate any bugs or other issues that may exist. A list of open issues is publicly available in the PostgreSQL wiki, and you can report bugs using the form on the PostgreSQL website.

Beta Schedule

This is the first beta release of version 12.
The PostgreSQL Project will release additional betas as required for testing, followed by one or more release candidates, until the final release in late 2019. For further information, please see the Beta Testing page. Many other new features and improvements have been added to PostgreSQL 12; please see the Release Notes for a complete list of new and changed features.

PostgreSQL 12 progress update
Building a scalable PostgreSQL solution
PostgreSQL security: a quick look at authentication best practices [Tutorial]

Implementing K-Means Clustering in Python

Aaron Lazar
09 Nov 2017
9 min read
This article is an adaptation of content from the book Data Science Algorithms in a Week, by David Natingga. I've modified it a bit and turned it into a sequence from a thriller, starring Agents Hobbs and O'Connor from the FBI. The idea is to practically show you how to implement k-means clustering in your friendly neighborhood language, Python.

Agent Hobbs: Agent… Agent O'Connor… O'Connor!
Agent O'Connor: Blimey! Uh.. Ohh.. Sorry, sir!
Hobbs: 'Tat's abou' the fifth time oive' caught you sleeping on duty, young man!
O'Connor: Umm. Apologies, sir. I just arrived here, and didn't have much to…
Hobbs: Cut the bull, agent! There's an important case oime workin' on and oi' need information on this righ' awai'! Here's the list of missing persons kidnapped so far by the suspects. The suspects now taunt us with open clues abou' their next target! Based on the information, we've narrowed their target list down to Miss Gibbons and Mr. Hudson.

Hobbs throws a file across O'Connor's desk and says as he storms out the door: You 'ave an hour to find out who needs the special security, so better get working.

O'Connor: Yes, sir! Bloody hell, that was close!

Here's the information O'Connor has: he needs to find the probability that the 11th person, with a height of 172 cm, a weight of 60 kg, and long hair, is a man. O'Connor gets to work. To simplify matters, he removes the column Hair length as well as the column Gender, since he would like to cluster the people in the table based on their height and weight. To find out whether the 11th person in the table is more likely to be a man or a woman, he uses clustering.

Analysis

O'Connor may apply scaling to the initial data, but to simplify matters, he uses the unscaled data in the algorithm. He clusters the data into two clusters, since there are two possibilities for gender: male or female. Then he aims to classify a person with height 172 cm and weight 60 kg as more likely a man if and only if there are more men in that cluster. The clustering algorithm is a very efficient technique, and classifying this way is very fast, especially if there are a large number of features to classify. So he goes on to apply the k-means clustering algorithm to the data he has.

First, he picks the initial centroids. He lets the first centroid be, for example, a person with height 180 cm and weight 75 kg, denoted as the vector (180,75). Then the point that is furthest away from (180,75) is (155,46), so that will be the second centroid.

The points that are closer to the first centroid (180,75) by Euclidean distance are (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60), so these points will be in the first cluster. The points that are closer to the second centroid (155,46) are (155,46), (164,53), (162,52), (166,55), so these points will be in the second cluster. He displays the current situation of these two clusters in the image below.

(Figure: Clustering of people by their height and weight)

He then recomputes the centroids of the clusters. The blue cluster with the features (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60) will have the centroid ((180+174+184+168+178+170+172)/7, (75+71+83+63+70+59+60)/7) ≈ (175.14, 68.71). The red cluster with the features (155,46), (164,53), (162,52), (166,55) will have the centroid ((155+164+162+166)/4, (46+53+52+55)/4) = (161.75, 51.5).

Reclassifying the points using the new centroids, the classes of the points do not change.
Concretely, the blue cluster will have the points (180,75), (174,71), (184,83), (168,63), (178,70), (170,59), (172,60), and the red cluster will have the points (155,46), (164,53), (162,52), (166,55). Therefore, the clustering algorithm terminates with the clusters displayed in the following image:

Clustering of people by their height and weight

Now he classifies the instance (172,60) as male or female. The instance (172,60) is in the blue cluster, so it is similar to the other points in the blue cluster. Are the remaining points in the blue cluster more likely males or females? 5 out of the 6 other points are males, and only 1 is a female. Since the majority of the points in the blue cluster are males, and the person (172,60) is in the blue cluster as well, he classifies the person with height 172 cm and weight 60 kg as a male.

Implementing k-means clustering in Python

O’Connor implements the k-means clustering algorithm in Python. It takes as input a CSV file with one data item per line. A data item is converted to a point. The algorithm classifies these points into the specified number of clusters. In the end, the clusters are visualized on a graph using the matplotlib library:

# source_code/5/k-means_clustering.py
import math
import sys
import matplotlib.pyplot as plt
import matplotlib
sys.path.append('../common')
import common  # noqa

matplotlib.style.use('ggplot')

# Returns k initial centroids for the given points.
def choose_init_centroids(points, k):
    centroids = []
    centroids.append(points[0])
    while len(centroids) < k:
        # Find the point with the greatest possible distance
        # to the closest already chosen centroid.
        candidate = points[0]
        candidate_dist = min_dist(points[0], centroids)
        for point in points:
            dist = min_dist(point, centroids)
            if dist > candidate_dist:
                candidate = point
                candidate_dist = dist
        centroids.append(candidate)
    return centroids

# Returns the distance of a point from the closest point in points.
def min_dist(point, points):
    min_dist = euclidean_dist(point, points[0])
    for point2 in points:
        dist = euclidean_dist(point, point2)
        if dist < min_dist:
            min_dist = dist
    return min_dist

# Returns the Euclidean distance between two 2-dimensional points.
# (Tuple unpacking in a function signature is Python 2 syntax, as are
# the print statements used later; the listing targets Python 2.)
def euclidean_dist((x1, y1), (x2, y2)):
    return math.sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2))

# A point group is a tuple that contains in the first coordinate a 2-D
# point and in the second coordinate the group the point is classified to.
def choose_centroids(point_groups, k):
    centroid_xs = [0] * k
    centroid_ys = [0] * k
    group_counts = [0] * k
    for ((x, y), group) in point_groups:
        centroid_xs[group] += x
        centroid_ys[group] += y
        group_counts[group] += 1
    centroids = []
    for group in range(0, k):
        centroids.append((
            float(centroid_xs[group]) / group_counts[group],
            float(centroid_ys[group]) / group_counts[group]))
    return centroids

# Returns the number of the centroid which is closest to the point.
# This number of the centroid is the number of the group
# the point belongs to.
def closest_group(point, centroids):
    selected_group = 0
    selected_dist = euclidean_dist(point, centroids[0])
    for i in range(1, len(centroids)):
        dist = euclidean_dist(point, centroids[i])
        if dist < selected_dist:
            selected_group = i
            selected_dist = dist
    return selected_group

# Reassigns the groups to the points according to which centroid
# a point is closest to.
def assign_groups(point_groups, centroids):
    new_point_groups = []
    for (point, group) in point_groups:
        new_point_groups.append(
            (point, closest_group(point, centroids)))
    return new_point_groups

# Returns a list of point groups given a list of points.
def points_to_point_groups(points):
    point_groups = []
    for point in points:
        point_groups.append((point, 0))
    return point_groups

# Clusters points into k groups, adding every stage
# of the algorithm to the history, which is returned.
def cluster_with_history(points, k):
    history = []
    centroids = choose_init_centroids(points, k)
    point_groups = points_to_point_groups(points)
    while True:
        point_groups = assign_groups(point_groups, centroids)
        history.append((point_groups, centroids))
        new_centroids = choose_centroids(point_groups, k)
        done = True
        for i in range(0, len(centroids)):
            if centroids[i] != new_centroids[i]:
                done = False
                break
        if done:
            return history
        centroids = new_centroids

# Program start
csv_file = sys.argv[1]
k = int(sys.argv[2])
everything = False
# The third argument sys.argv[3] represents the number of the step of the
# algorithm, starting from 0, to be shown, or "last" for displaying the
# last step and the number of the steps.
if sys.argv[3] == "last":
    everything = True
else:
    step = int(sys.argv[3])

data = common.csv_file_to_list(csv_file)
points = data_to_points(data)  # Represent every data item by a point.
history = cluster_with_history(points, k)
if everything:
    print "The total number of steps:", len(history)
    print "The history of the algorithm:"
    (point_groups, centroids) = history[len(history) - 1]
    # Print all the history.
    print_cluster_history(history)
    # But display the situation graphically at the last step only.
    draw(point_groups, centroids)
else:
    (point_groups, centroids) = history[step]
    print "Data for the step number", step, ":"
    print point_groups, centroids
    draw(point_groups, centroids)

Input data from gender classification

He saves the data from the classification task into the following CSV file:

# source_code/5/persons_by_height_and_weight.csv
180,75
174,71
184,83
168,63
178,70
170,59
164,53
155,46
162,52
166,55
172,60

Program output for the classification data

O’Connor runs the program implementing the k-means clustering algorithm on the data from the classification. The numerical argument 2 means that he would like to cluster the data into 2 clusters:

$ python k-means_clustering.py persons_by_height_and_weight.csv 2 last
The total number of steps: 2
The history of the algorithm:
Step number 0:
point_groups = [((180.0, 75.0), 0), ((174.0, 71.0), 0), ((184.0, 83.0), 0), ((168.0, 63.0), 0), ((178.0, 70.0), 0), ((170.0, 59.0), 0), ((164.0, 53.0), 1), ((155.0, 46.0), 1), ((162.0, 52.0), 1), ((166.0, 55.0), 1), ((172.0, 60.0), 0)]
centroids = [(180.0, 75.0), (155.0, 46.0)]
Step number 1:
point_groups = [((180.0, 75.0), 0), ((174.0, 71.0), 0), ((184.0, 83.0), 0), ((168.0, 63.0), 0), ((178.0, 70.0), 0), ((170.0, 59.0), 0), ((164.0, 53.0), 1), ((155.0, 46.0), 1), ((162.0, 52.0), 1), ((166.0, 55.0), 1), ((172.0, 60.0), 0)]
centroids = [(175.14285714285714, 68.71428571428571), (161.75, 51.5)]

The program also outputs a graph, visible in the second image. The parameter last means that O’Connor would like the program to run the clustering until the last step. If he wants to display only the first step (step 0), he can change last to 0 and run:

$ python k-means_clustering.py persons_by_height_and_weight.csv 2 0

Upon the execution of the program, O’Connor gets the graph of the clusters and their centroids at the initial step, as in the first image. He heaves a sigh of relief.
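One note on the listing before the story resumes: it calls three helpers, data_to_points, print_cluster_history, and draw, whose definitions live in the book’s accompanying source code rather than in the excerpt above. A minimal sketch of what they might look like, offered as an assumption so the excerpt runs standalone (it reuses the plt import from the listing), follows:

# Hypothetical reconstructions of the helpers the listing assumes;
# the book's accompanying source may define them differently.
def data_to_points(data):
    # Each CSV row becomes a 2-dimensional point of floats.
    return [(float(row[0]), float(row[1])) for row in data]

def print_cluster_history(history):
    for step, (point_groups, centroids) in enumerate(history):
        print "Step number", step, ":"
        print "point_groups =", point_groups
        print "centroids =", centroids

def draw(point_groups, centroids):
    # Scatter-plot each cluster in its own color and mark the centroids.
    colors = ['blue', 'red', 'green', 'orange', 'purple']
    for (x, y), group in point_groups:
        plt.scatter(x, y, color=colors[group % len(colors)])
    for i, (x, y) in enumerate(centroids):
        plt.scatter(x, y, color=colors[i % len(colors)], marker='x', s=100)
    plt.xlabel('Height (cm)')
    plt.ylabel('Weight (kg)')
    plt.show()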
Hobbs returns just then: Oye there O’Connor, not snoozing again now, O’are ya?

O’Connor: Not at all, sir. I think we need to provide Mr. Hudson with special protection, because it looks like he’s the next target.

Hobbs raises an eyebrow as he adjusts the gun in its holster: Emm, O’are ya sure, agent?

O’Connor replies with a smile: 83.33% confident, sir!

Hobbs: Wha’ are we waiting for then, eh? Let’s go get ’em!

If you liked reading this mystery, go ahead and buy the book it was inspired by: Data Science Algorithms in a Week, by David Natingga.
Four IBM facial recognition patents in 2018 we found intriguing

Natasha Mathur
11 Aug 2018
10 min read
The media has gone into a frenzy over Google’s latest facial recognition patent, which shows an algorithm that can track you across social media and gather your personal details. We thought we’d dive further into what other patents have been applied for in facial recognition technology in 2018. What we discovered was an eye opener (pun intended). Google is only the third-largest applicant, with IBM and Samsung leading the patent race in facial recognition. As of August 10, 2018, 1,292 patents on facial recognition have been granted in 2018. Of those, IBM received 53. Here is the summary comparison of the leading companies in facial recognition patents in 2018.

Read Also: Top four Amazon patents in 2018 that use machine learning, AR, and robotics

IBM has always been at the forefront of innovation. Go back more than half a century, to when IBM introduced its first general-purpose computers for business. It built complex software programs that helped launch the Apollo missions and put the first man on the moon. Its chess-playing computer, Deep Blue, beat Garry Kasparov in a traditional chess match back in 1997, the first time a computer beat a reigning world champion. Its researchers are known for winning Nobel Prizes. Coming back to 2018, IBM unveiled the world’s fastest supercomputer with AI capabilities, and it beat Wall Street expectations with $20 billion in quarterly revenue, with a market capitalization of $132.14 billion as of August 9, 2018. Its patents are a major part of why it continues to be valued so highly.

IBM continues to come up with cutting-edge innovations, and to protect these proprietary inventions, it applies for patent grants. The United States is the largest consumer market in the world, so patenting the technologies a company comes up with is a standard way to attain competitive advantage. As per the United States Patent and Trademark Office (USPTO), a patent is an exclusive right to an invention and “the right to exclude others from making, using, offering for sale, or selling the invention in the United States or ‘importing’ the invention into the United States”.

As always, IBM has applied for patents across a wide spectrum of technologies this year, from artificial intelligence, cloud, blockchain, and cybersecurity to quantum computing. Today we focus on IBM’s patents in the facial recognition field in 2018.

Four IBM facial recognition innovations patented in 2018

Facial recognition is a technology that identifies or verifies a person from a digital image or a video frame, and IBM seems quite invested in it.

Controlling privacy in a face recognition application

Date of patent: January 2, 2018
Filed: December 15, 2015

Features: IBM has patented a face recognition application titled “Controlling privacy in a face recognition application”. Face recognition technologies can be used on mobile phones and wearable devices, which may hamper user privacy. This happens when a “sensor” mobile user identifies a “target” mobile user without his or her consent. Current mobile device manufacturers don’t provide privacy mechanisms for addressing this issue, which is the major reason IBM has patented this technology.

Editor’s Note: This looks like an answer to the concerns raised over Google’s recent social media profiling facial recognition patent.

How it works

Controlling privacy in a face recognition application

It consists of a privacy control system, which is implemented using a cloud computing node.
The system uses a camera to find out information about people, using a face recognition service deployed in the cloud. As per the patent application, “the face recognition service may have access to a face database, privacy database, and a profile database”.

The face database consists of facial signatures of one or more users. The privacy database holds the privacy preferences of target users; these preferences are provided by the target users themselves. The profile database contains information about the target user, such as name, age, gender, and location.

The system works by receiving an input which includes a face recognition query and a digital image of a face. The privacy control system detects a facial signature in the digital image, identifies the target user associated with that signature, and extracts the target user’s profile. It then checks the user’s privacy preferences. If no privacy preferences are set, it transmits the profile to the sensor user. But if privacy preferences exist, a censored profile is generated, omitting the private elements of the profile. There are no announcements, as of now, regarding when this technology will hit the market.

Evaluating an impact of a user’s content utilized in a social network

Date of patent: January 30, 2018
Filed: April 11, 2015

Features: IBM has patented an application titled “Evaluating an impact of a user’s content utilized in a social network”. With so much data floating around on social network websites, it is quite common for the content of a document (e.g., an e-mail message, a post, a word processing document, a presentation) to be reused without the knowledge of the original author.

Evaluating an impact of a user’s content utilized in a social network

Because of this, the original author of the content may not receive any credit, which leaves users less motivated to post original content on a social network. This is why IBM has decided to patent this application.

As per the patent application, the method/system/product “comprises detecting content in a document posted on a social network environment being reused by a second user. The method further comprises identifying an author of the content. The method additionally comprises incrementing a first counter keeping track of a number of times the content has been adopted in derivative works”. A processor generates an “impact score” representing the author’s ability to influence other users to adopt the content, based on the number of times the content has been adopted in derivative works. Also, “the method comprises providing social credit to the author of the content using the impact score”.

Editor’s Note: This is particularly interesting to us as IBM, unlike other tech giants, doesn’t own a popular social network or media product. (Google has Google+, Microsoft has LinkedIn, Facebook and Twitter are social networks, and even Amazon’s founder owns a media entity in the form of the Washington Post.) There is no information about whether or when this system will be used on social network sites.
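To make the counting mechanism concrete, here is a toy sketch of the logic the patent describes: a per-author counter incremented on each detected reuse, feeding a simple impact score. All names and the scoring formula here are illustrative assumptions, not IBM’s actual method:

from collections import defaultdict

class ImpactTracker(object):
    """Toy model of the patent's counters: track how often each
    author's content is adopted in derivative works."""
    def __init__(self):
        self.adoptions = defaultdict(int)   # author -> adoption count
        self.posts = defaultdict(int)       # author -> original posts

    def record_post(self, author):
        self.posts[author] += 1

    def record_reuse(self, original_author):
        # A second user reused the author's content: bump the counter.
        self.adoptions[original_author] += 1

    def impact_score(self, author):
        # Illustrative formula: average adoptions per original post.
        if self.posts[author] == 0:
            return 0.0
        return self.adoptions[author] / float(self.posts[author])

tracker = ImpactTracker()
tracker.record_post("alice")
tracker.record_reuse("alice")   # someone reused alice's post
tracker.record_reuse("alice")
print("alice's impact score: %.1f" % tracker.impact_score("alice"))  # 2.0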
Spoof detection for facial recognition

Date of patent: February 20, 2018
Filed: December 10, 2015

Features: IBM patented an application named “Spoof detection for facial recognition”. It provides a method to determine whether an image presented to a facial recognition system is authentic. As per the patent, “A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source.”

Editor’s Note: This seems to have a direct impact on the work around tackling deepfakes, which incidentally is something DARPA is very keen on. Could IBM be vying for a long-term contract with the government?

How it works

The patent describes a system that detects “if a face in a facial recognition authentication system is a three-dimensional structure based on multiple selected images from the input video”.

Spoof detection for facial recognition

Four or more two-dimensional feature points are located via an image processing device connected to the camera; these feature points do not all lie on the same two-dimensional plane. The patent reads that “one or more additional images of the user’s face can be received with the camera; and, the at least four two-dimensional feature points can be located on each additional image with the image processor. The image processor can identify displacements between the two-dimensional feature points on the additional image and the two-dimensional feature points on the first image for each additional image”. A processor connected to the image processing device then determines whether the displacements conform to a three-dimensional surface model, and decides whether to authenticate the user depending on whether they do.

Facial feature location using symmetry line

Date of patent: June 5, 2018
Filed: July 20, 2015

Features: IBM patented an application titled “Facial feature location using symmetry line”. As per the patent, “In many image processing applications, identifying facial features of the subject may be desired. Currently, location of facial features require a search in four dimensions using local templates that match the target features. Such a search tends to be complex and prone to errors because it has to locate both (x, y) coordinates, scale parameter and rotation parameter”.

Facial feature location using symmetry line

The application describes a computer-implemented method that obtains an image of the subject’s face, automatically detects a symmetry line of the face in the image (where the symmetry line intersects at least the mouth region of the face), and then automatically locates a facial feature of the face using the symmetry line. There is also a computerized apparatus with a processor that performs these steps.

Editor’s note: At least this patent makes direct sense to us. IBM is focusing heavily on bringing AI to healthcare. A patent like this can find a lot of use not just in diagnostics and patient care, but also in cutting-edge areas like robotics-enabled surgeries.
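To picture the symmetry-line idea concretely, the toy sketch below scans candidate columns of an image and scores how well each column’s left side mirrors its right; the column with the least asymmetry is taken as the symmetry line. This is an illustrative reconstruction under our own assumptions, not the patent’s actual algorithm (which also has to cope with scale and rotation):

import numpy as np

def symmetry_line(image):
    # Find the vertical symmetry axis by testing candidate columns
    # and scoring the mean squared mismatch between each column's
    # left side and its mirrored right side.
    h, w = image.shape
    best_col, best_score = w // 2, np.inf
    for col in range(w // 4, 3 * w // 4):   # search the middle band
        half = min(col, w - col - 1)
        if half < 1:
            continue
        left = image[:, col - half:col]
        right = image[:, col + 1:col + 1 + half][:, ::-1]  # mirrored
        score = np.mean((left - right) ** 2)
        if score < best_score:
            best_col, best_score = col, score
    return best_col

# Synthetic "face": a pattern symmetric about column 20.
img = np.zeros((40, 41))
img[10:30, 15] = img[10:30, 25] = 1.0   # "eyes", mirrored about col 20
img[25, 18:23] = 1.0                    # "mouth" centered on the axis
print("detected symmetry column: %d" % symmetry_line(img))  # expect 20

Once the axis is known, a feature like the mouth only needs to be searched for along that line, which is what collapses the patent’s four-dimensional search into something far cheaper.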
IBM is continually working on new technologies to provide the world with groundbreaking innovations, and its big investments in facial recognition technology speak volumes about how well it understands the field’s possibilities. With progress in facial recognition technology come privacy fears, but IBM’s privacy-control patent has that covered, as it lets users set privacy preferences; this could set a benchmark, since not many existing applications do this. The social-credit evaluation patent could help give a voice back to users who post original content on social media platforms. The spoof detection application will help maintain authenticity by detecting forged images. Lastly, facial feature location using a symmetry line can be a great addition to image processing applications. There is no guarantee from IBM that these patents will ever make it into practical applications, but they say a lot about how IBM thinks about the technology.

Four interesting Amazon patents in 2018 that use machine learning, AR, and robotics

Facebook patents its news feed filter tool to provide more relevant news to its users

Google’s new facial recognition patent uses your social network to identify you!

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4, at the Aria in Las Vegas. This four-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in machine learning, automation, robotics, and space to share new ideas across these rapidly advancing domains.

re:MARS featured a lot of announcements, revealing a range of robots, each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, and autonomous human-like acrobats by Walt Disney Imagineering. Amazon also revealed Alexa’s new dialog modeling for natural, cross-skill conversations. Let us have a brief look at each of the announcements.

Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using robotics

The visit by Robert Downey Jr., popularly known as “Iron Man”, was one of the exciting moments of re:MARS, where he announced a new project called The Footprint Coalition to clean up the planet using advanced technologies. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said.

According to Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports.

A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement comes at a time when the “company itself is under fire for its policies around the environment and climate change”.

Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering demonstrated their work on creating autonomous acrobats.

https://twitter.com/jillianiles/status/1136082571081555968
https://twitter.com/thesullivan/status/1136080570549563393

Amazon will soon deliver orders using drones

On Wednesday, Amazon unveiled a revolutionary new drone that will test-deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft”, with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground.

Gur Kimchi, vice president of Amazon Prime Air, said in an interview with Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he declined to provide details on where the delivery tests will be conducted. The drones have received a year’s approval from the FAA to be tested in limited ways that still won’t allow deliveries.

According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”.
Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which in April became the first drone operator to win FAA approval to operate as a small airline. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March.

Amazon’s drone is about six feet across, with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that protects people from the propellers and also serves as a high-efficiency wing, so it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways, the helicopter blades becoming more like airplane propellers.

Kimchi said Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) of a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds, and more than 80% of packages sold by the retail behemoth are within that weight limit.

According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines, which have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blog post.

Boston Dynamics’ first commercial robot, Spot

Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot. Boston Dynamics CEO Marc Raibert told The Verge that Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work. He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly,” Raibert said.

Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. The robots are simple to control: using a D-pad, users can steer the robot just like an RC car or a mechanical toy. A quick tap on the video feed streamed live from the robot’s front-facing camera selects a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments like construction sites, identifying hazards and tracking work progress. The robot arm gives it greater flexibility, helping it open doors and manipulate objects.

https://twitter.com/jjvincent/status/1136096290016595968

The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds”, Raibert said. Here’s a demo video of the Spot robot at the re:MARS event:

https://youtu.be/xy_XrAxS3ro

Alexa gets new dialog modeling for improved natural, cross-skill conversations

Amazon unveiled new features in Alexa that will help the conversational agent answer more complex questions and carry out more complex tasks.
Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.”

This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector, a fixed-length string of numbers, that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says. “This is a big leap for conversational AI.”

At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep-learning-based approach for skill developers to create more natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation.

With Alexa Conversations, developers provide:

1. application programming interfaces, or APIs, that provide access to their skills’ functionality;
2. a list of entities that the APIs can take as inputs, such as restaurant names or movie times;
3. a handful of sample dialogs annotated to identify entities and actions and mapped to API calls.

Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blog post.

Amazon Robotics unveiled two new robots at its fulfillment centers

Brad Porter, vice president of robotics at Amazon, announced two new robots: one code-named Pegasus and the other Xanthus. Pegasus, built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore,” Porter said.

Porter said Pegasus robots have already driven a total of 2 million miles and have reduced the number of wrongly sorted packages by 50 percent.

Xanthus, Porter said, represents the latest incarnation of Amazon’s drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers, and it unveiled the Xanthus Sort Bot and the Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says.
To know more about these new robots, watch the video below:

https://youtu.be/4MH7LSLK8Dk

StyleSnap: AI-powered shopping

Amazon announced StyleSnap, its latest move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories: all they need to do is upload a photo or screenshot of what they are looking for whenever they can’t describe it in words.

https://twitter.com/amazonnews/status/1136340356964999168

Amazon said, “You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after.”

To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. StyleSnap then recommends similar outfits on Amazon to purchase, with users able to filter by brand, price, and reviews. Amazon’s AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon’s official blog post.

Amazon Go trains cashierless store algorithms using synthetic data

At re:MARS, Amazon shared more details about Amazon Go, the company’s brand for its cashierless stores, saying that Amazon Go uses synthetic data to intentionally introduce errors to its computer vision system. Challenges that had to be addressed before opening queue-free stores included building vision systems that account for sunlight streaming into a store, working with little time for latency delays, and coping with small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III.

Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.”

To know more about this news in detail, check out this video:

https://youtu.be/jthXoS51hHA

The Amazon re:MARS event is still ongoing and will bring many more updates. To catch live updates from Vegas, visit Amazon’s blog.

World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase

Amazon introduces S3 batch operations to process millions of S3 objects

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available