
How-To Tutorials - Data Analysis

16 Articles

Mastering Performance Tuning with DAX Studio and VertiPaq Analyzer

Thomas LeBlanc, Bhavik Merchant
03 Dec 2024
15 min read
This article is an excerpt from the book "Microsoft Power BI Performance Best Practices - Second Edition", by Thomas LeBlanc and Bhavik Merchant. Overcome common challenges in data management, visualization, and security with this updated edition of Microsoft Power BI Performance Best Practices, and ramp up your Power BI solutions, achieve faster insights, and drive better business outcomes.

Introduction

Optimizing performance and storage in Power BI and Analysis Services can be a complex task. However, tools like DAX Studio and VertiPaq Analyzer simplify this process by providing insightful metrics and performance-tuning capabilities. This article explores how to leverage these tools to analyze semantic models, identify performance bottlenecks, and optimize DAX queries. We'll discuss key features such as viewing model metrics, capturing and analyzing query traces, and testing optimizations using DAX Studio's query editor.

Tuning with DAX Studio and VertiPaq Analyzer

DAX Studio, as the name implies, is a tool centered on DAX queries. It provides a simple yet intuitive interface with powerful features to browse and query Analysis Services semantic models. We will cover querying later in this section. For now, let's look deeper into semantic models.

The Analysis Services engine has supported dynamic management views (DMVs) for over a decade. These views refer to SQL-like queries that can be executed on Analysis Services to return information about semantic model objects and operations.

VertiPaq Analyzer is a utility that uses publicly documented DMVs to display essential information about which structures exist inside the semantic model and how much space they occupy. It started life as a standalone utility, published as a Power Pivot for Excel workbook, and still exists in that form today. In this chapter, we will refer to its more recent incarnation as a built-in feature of DAX Studio 3.0.11.

It is interesting to note that VertiPaq is the original name given to the compressed column store engine within Analysis Services (Verti referring to columns and Paq referring to compression).

Analyzing model size with VertiPaq Analyzer

VertiPaq Analyzer is built into DAX Studio as the View Metrics feature, found in the Advanced tab of the toolbar. You simply click the icon to have DAX Studio run the DMVs for you and display statistics in tabular form. This is shown in the following figure:

Figure 6.8 – Using View Metrics to generate VertiPaq Analyzer stats

You can switch to the Summary tab of the VertiPaq Analyzer pane to get an idea of the overall total size of the model along with other summary statistics, as shown in the following figure:

Figure 6.9 – Summary tab of VertiPaq Analyzer

The Total Size metric provided in the previous figure will often be larger than the size of the semantic model on disk (as a .pbix file or Analysis Services .abf backup). This is because there are additional structures required when the model is loaded into memory, which is particularly true of Import mode semantic models.

In Chapter 2, Exploring Power BI Architecture and Configuration, we learned about Power BI's compressed column storage engine. The DMV statistics provided by VertiPaq Analyzer let us see just how compressible columns are and how much space they are taking up. It also allows us to observe other objects, such as relationships.

The Columns tab is a great way to see whether you have any columns that are very large relative to others or the entire dataset. The following figure shows the columns view for the same model we saw in Figure 6.9. You can see that, out of 238 columns, a single column called SalesOrderNumber takes up a staggering 22.40% of the whole model size! It's interesting to see that its Cardinality (or uniqueness) value is about twelve times lower than the next largest column (SalesKey):

Figure 6.10 – Two columns monopolizing the semantic model

In Figure 6.10, we can also see that Data Type is String for Online Sale-SalesOrderNumber, which was a column suggested by Tabular Editor to have a large dictionary footprint. These statistics would lead you to deduce that this column contains long, unique text values that do not compress well because of the large cardinality. Indeed, in this case, the column contains a sales order number that is unique to each order and is not well suited to grouping or slicing analytical data in a Power BI report.

This analysis may lead you to re-evaluate the need for this level of reporting in the analysis of sales data. You'd need to ask yourself whether the extra storage space and time taken to build compressed columns and potentially other structures is worth it for your business case. In cases of highly detailed data such as this, where you do not need detail-level sales order data, consider limiting the analysis to customer-related data such as demographics or date attributes such as year and month.
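To see why such a column hurts compression, it can help to picture dictionary encoding outside of Power BI. The short sketch below is only a rough analogy using pandas categoricals (the column names and sizes are invented for illustration, and VertiPaq's actual encoding is more sophisticated), but it shows how the dictionary for a unique string column grows to roughly the size of the raw data, while a low-cardinality column stays small:

```python
import numpy as np
import pandas as pd

n = 1_000_000

# Low-cardinality column: a handful of repeated colour names
colours = pd.Series(np.random.choice(["Red", "Blue", "Green", "Black"], size=n))

# High-cardinality column: a unique order number per row, similar to SalesOrderNumber
order_numbers = pd.Series([f"SO{i:07d}" for i in range(n)])

# Categorical (dictionary) encoding stores each distinct value once and keeps
# small integer codes per row, so dictionary size tracks cardinality
print(colours.astype("category").memory_usage(deep=True))        # tiny dictionary, small codes
print(order_numbers.astype("category").memory_usage(deep=True))  # dictionary roughly as large as the data
```

The same intuition applies to the VertiPaq statistics: a column whose dictionary is almost as large as the column itself is a strong candidate for removal or for reducing its level of detail.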
Now, let's learn about how DAX Studio can help us with performance analysis and improvement.

Performance tuning the data model and DAX

The first-party option for capturing Analysis Services traces is SQL Server Profiler. When starting a trace, you must identify exactly which events to capture, which requires some knowledge of the trace events and what they contain. Even with this knowledge, working with the trace data in Profiler can be tough since the tool was designed primarily to work with SQL Server application traces. The good news is that DAX Studio can start an Analysis Services server trace and then parse and format all the data to show you relevant results in a well-presented way within its user interface. It allows us to both tune and measure queries in a single place and provides features for Analysis Services that make it a good alternative to SQL Server Profiler for tuning semantic models.

Capturing and replaying queries

The All Queries command in the Traces section of the DAX Studio toolbar will start a trace against the semantic model you have connected to. Figure 6.11 shows the result when a trace is successfully started:

Figure 6.11 – Query trace successfully started in DAX Studio

Once your trace has started, you can interact with the semantic model outside DAX Studio, and it will capture queries for you. How you interact with the semantic model depends on where it is. For a semantic model running on your computer in Power BI Desktop, you would simply interact with the report. This would generate queries that DAX Studio will see. The All Queries tab at the bottom of the tool is where the captured queries are listed in time order with durations in milliseconds. The following figure shows two queries captured when opening the Unique by Account No page from the Slow vs Fast Measures.pbix sample file:

Figure 6.12 – Queries captured by DAX Studio

The preceding queries come from a screen that has the same table results in a visual, but two different DAX measures that calculate the aggregation. These measures make one table come back in less than a second while the other returns in about 17 seconds.
The following figure shows the page in the report:

Figure 6.13 – Tables with the same results but from using different measures

The following screenshot shows the results of the Performance Analyzer for the tables shown previously. Observe how one query took over 17 seconds, whereas the other took under 1 second:

Figure 6.14 – Vastly different query durations for the same visual result

In Figure 6.12, the second query was double-clicked to bring the DAX text to the editor. You can modify this query in DAX Studio to test performance changes. We see here that the DAX expression for the UniqueRedProducts_Slow measure was not efficient. We'll learn a technique to optimize queries soon, but first, we need to learn about capturing query performance traces.

Obtaining query timings

To get detailed query performance information, you can use the Server Timings command shown in Figure 6.11. After starting the trace, you can run queries and then use the Server Timings tab to see how the engine executed the query, as shown in the following figure:

Figure 6.15 – Server Timings showing detailed query performance statistics

Figure 6.15 gives very useful information. FE and SE refer to the formula engine and storage engine. The storage engine is fast and multi-threaded, and its job is fetching data. It can apply basic logic such as filtering data to retrieve only what is needed. The formula engine is single-threaded, and it generates a query plan, which is the physical steps required to compute the result. It also performs calculations on the data such as joins, complex filters, aggregations, and lookups. We want to avoid queries that spend most of the time in the formula engine, or that execute many queries in the storage engine. The bottom-left section of Figure 6.15 shows that we executed almost 4,900 SE queries. The list of queries to the right shows many queries returning only one result, which is suspicious.

For comparison, we look at the timings for the fastest version of the query and we see the following:

Figure 6.16 – Server Timings for a fast version of the query

In Figure 6.16, we can see that only three storage engine queries were run this time, and the result was obtained much faster (milliseconds compared to seconds).

The faster DAX measure was as follows:

```
UniqueRedProducts_Fast =
CALCULATE(
    DISTINCTCOUNT('SalesOrderDetail'[ProductID]),
    'Product'[Color] = "Red"
)
```

The slower DAX measure was as follows:

```
UniqueRedProducts_Slow =
CALCULATE(
    DISTINCTCOUNT('SalesOrderDetail'[ProductID]),
    FILTER('SalesOrderDetail', RELATED('Product'[Color]) = "Red")
)
```

Tip

The Analysis Services engine does use data caches to speed up queries. These caches contain uncompressed query results that can be reused later to save time fetching and decompressing data. You should use the Clear Cache button in DAX Studio to force these caches to be cleared and get a proper worst-case performance measure. This is visible in the menu bar in Figure 6.11.

We will build on these concepts when we look at DAX and model optimizations in later chapters. Now, let's look at how we can experiment with DAX and query changes in DAX Studio.

Modifying and tuning queries

Earlier in this section, we saw how we could capture a query generated by a Power BI visual and then display its text.
A nice trick we can use here is to use query-scoped measures to override the measure definition and see how performance differs. The following figure shows how we can search for a measure, right-click, and then pull its definition into the query editor of DAX Studio:

Figure 6.17 – The Define Measure option and result in the Query pane

We can now modify the measure in the query editor, and the engine will use the local definition instead of the one defined in the model! This technique gives you a fast way to prototype DAX enhancements without having to edit them in Power BI and refresh visuals over many iterations.

Remember that this technique does not apply any changes to the dataset you are connected to. You can optimize expressions in DAX Studio, then transfer the definition to Power BI Desktop/Visual Studio when ready. The following figure shows how we changed the definition of UniqueRedProducts_Slow in a query-scoped measure to get a huge performance boost:

Figure 6.18 – Modified measure giving better results

The technique described here can be adapted to model changes too. For example, if you wanted to determine the impact of changing a relationship type, you could run the same queries in DAX Studio before and after the change to draw a comparison.

Here are some additional tips for working with DAX Studio:

- Isolate measures: When performance tuning a query generated by a report visual, comment out complex measures and then establish a baseline performance score. Then, add each measure back to the query individually and check the speed. This will help identify the slowest measures in the query and visual context.
- Work with Desktop Performance Analyzer traces: DAX Studio has a facility to import the trace files generated by Desktop Performance Analyzer. You can import trace files using the Load Perf Data button located next to All Queries, highlighted in Figure 6.12. This trace can be captured by one person and then shared with a DAX/modeling expert who can use DAX Studio to analyze and replay their behavior. The following figure shows how DAX Studio formats the data to make it easy to see which visual component is taking the most time. It was generated by viewing each of the three report pages in the Slow vs Fast Measures.pbix sample file:

Figure 6.19 – Performance Analyzer trace shows the slowest visual in the report

- Export/import model metrics: DAX Studio has a facility to export or import the VertiPaq model metadata using .vpax files. These files do not contain any of your data. They contain table names, column names, and measure definitions. If you are not concerned with sharing these definitions, you can provide .vpax files to others if you need assistance with model optimization.

Conclusion

DAX Studio and VertiPaq Analyzer are indispensable tools for anyone working with Power BI or Analysis Services models. From detailed model size analysis to advanced performance tuning, these tools empower users to identify inefficiencies and implement optimizations effectively. By using their robust features, such as the ability to view metrics, trace query performance, and prototype query changes, professionals can ensure their models are both efficient and scalable.
Mastery of these tools lays a solid foundation for building high-performing, resource-efficient analytical solutions.

Author Bio

Thomas LeBlanc is a seasoned Business Intelligence Architect at Data on the Geaux, where he applies his extensive skillset in dimensional modeling, data visualization, and analytical modeling to deliver robust solutions. With a Bachelor of Science in Management Information Systems from Louisiana State University, Thomas has amassed over 30 years of experience in Information Technology, transitioning from roles as a software developer and database administrator to his current expertise in business intelligence and data warehouse architecture and management.

Throughout his career, Thomas has spearheaded numerous impactful projects, including consulting for various companies on Power BI implementation, serving as lead database administrator for a major home health care company, and overseeing the implementation of Power BI and Analysis Services for a large bank. He has also contributed his insights as an author to the Power BI MVP book.

Thomas is recognized as a Microsoft Data Platform MVP and is actively engaged in the tech community through his social media presence, notably as TheSmilinDBA on Twitter and ThePowerBIDude on Bluesky and Mastodon. With a passion for solving real-world business challenges with technology, Thomas continues to drive innovation in the field of business intelligence.

Bhavik Merchant has nearly 18 years of deep experience in Business Intelligence. He is currently the Director of Product Analytics at Salesforce. Prior to that, he was at Microsoft, first as a Cloud Solution Architect and then as a Product Manager in the Power BI Engineering team. At Power BI, he led the customer-facing insights program, being responsible for the strategy and technical framework to deliver system-wide usage and performance insights to customers. Before Microsoft, Bhavik spent years managing high-caliber consulting teams delivering enterprise-scale BI projects. He has provided extensive technical and theoretical BI training over the years, including expert Power BI performance training he developed for top Microsoft Partners globally.


Creating and Using Kibana Dashboards

Huage Chen
02 Jul 2024
12 min read
This article is an excerpt from the book Elastic Stack 8.x Cookbook, by Huage Chen and Yazid Akadiri. Unlock the full potential of Elastic Stack for search, analytics, security, and observability and manage substantial data workloads in both on-premise and cloud environments.

Introduction

In this guide, we will integrate all previously created visualizations into a comprehensive dashboard consisting of multiple panels. Additionally, we will explore how to enhance user interaction using control-based drilldowns.

Getting ready

Make sure to complete the following recipes from this chapter:

- Creating visualizations with Kibana Lens
- Creating visualizations from runtime fields
- Creating Kibana maps

At the end of this recipe, you will have dashboards composed of the various visualizations and elements built in the aforementioned recipes.

How to do it...

Building dashboards is very straightforward in Kibana, especially if you've already created some visualizations. Follow these steps:

1. Go to Kibana | Analytics | Dashboard and click on Create dashboard. This will bring you to a blank canvas, where you can start adding some visualizations.

2. We will start by adding a nice image! You can be creative, but we provided a sample picture:

   A. Click on Add panel | Image.
   B. Select the Use link tab and set Link to image with the following URL: https://upload.wikimedia.org/wikipedia/commons/6/60/Ville_de_RENNES_Noir.svg. Then, click on Save:

Figure 6.54 – Adding an image for a logo

The logo will be added to the panel. Including a picture is a great way to add some personalization and branding to your dashboards. Let's add some proper visualizations from the ones we've built in the last three recipes.

3. Click on Add from library and select the [Rennes Traffic] Number of locations visualization. Make sure to align it to the right of the image panel.

4. Let's add another visualization; this time, we'll pick [Rennes Traffic] Average speed gauge. At this stage, your dashboard should look like the one shown in Figure 6.55:

Figure 6.55 – Rennes traffic dashboard – first step

You can easily rearrange the position of the different panels by clicking on the title section and moving the panel with your mouse anywhere you want on the canvas. To adjust the size and fit of the panel, position your mouse on the small arrow at the bottom right of the panel. Let's keep adding more panels to our dashboard.

5. Click on Add from library and add the following visualizations in the respective order:

   I. [Rennes Traffic] Traffic status waffle
   II. [Rennes Traffic] Speed by road hierarchy
   III. [Rennes Traffic] Average speed & Traffic Status
   IV. [Rennes Traffic] Traffic status by hour

6. Finally, let's add a Map visualization for a real-time view of the traffic; select the one named [Rennes Traffic] Traffic fluidity. By now, your dashboard should look like the one shown in Figure 6.56:

Figure 6.56 – Rennes traffic dashboard – more visualizations

You can start playing around with the dashboard to see the built-in interactivity of the panels. For example, clicking on a specific road hierarchy will automatically apply the filter to the entire dashboard.

You can also have dedicated panels to filter and display only the data you are interested in with Controls. Let's add some to our dashboard.

7. On the dashboard toolbar, click on Controls:

Figure 6.57 – Adding controls to the dashboard

8. From the drop-down list, select Add control; the Create control flyout will appear on the right of the screen.

9. Select the traffic_status field and click on Save and close.
10. Back on the dashboard, you now have a new panel on top of the visualizations named traffic_status. By clicking on it, you will see a drop-down list where you can select the values associated with the status of the traffic you want to filter, as shown in Figure 6.58. Select congested as an example:

Figure 6.58 – Using controls in the dashboard

11. You can see on your dashboard that all the panels have been updated according to the value selected in the traffic_status control.

Imagine you want to filter your traffic data to analyze it within a specific time range, such as early in the morning or late in the afternoon, to better understand traffic patterns. This is where the time slider control proves to be incredibly useful.

12. Go to the Controls menu again in the dashboard toolbar and select Add time slider control. You'll see a new panel to the right of traffic_status:

Figure 6.59 – Time slider control

By clicking the play icon, you will see your dashboard animate and your data change over the defined time range. You can advance the time range forward as well as backward, which is especially useful when working with time series data. Your dashboard should now look as shown in Figure 6.60, with our two controls:

Figure 6.60 – Rennes traffic dashboard with controls

13. Save the dashboard by clicking the Save button in the upper-right corner. Name it [Rennes Traffic] Overview.

To enhance our dashboard further, consider this: users frequently manage multiple dashboards, and the ability to navigate seamlessly from one to another is crucial, especially when aiming to refine analysis or focus on more detailed panels related to a specific dataset. Dashboard drilldowns are invaluable in this scenario as they allow you to transition between dashboards while maintaining the overall context. Let's explore how to implement and use this feature effectively!

For this exercise, we have already built a drilldown dashboard. Download and save the NDJSON file of the exported dashboard from the following location: https://github.com/PacktPublishing/Elastic-Stack-8.x-Cookbook/blob/main/Chapter6/kibana-objects/rennesdata-drilldown-dashboard.ndjson. Then, follow these steps:

1. To import the dashboard, go to Stack Management | Saved Objects.

2. Click on Import and select the NDJSON file you have previously downloaded from the GitHub repository. Upon completing the import process, you will notice a warning in the flyout about data view conflicts. The reason is straightforward: our saved objects rely on an existing data view. To resolve the conflict, simply click on the drop-down list under the New data view column and select metrics-rennes_traffic-raw, as shown in Figure 6.61, then click on Confirm all changes to finalize the import procedure:

Figure 6.61 – Importing saved objects and selecting the right data view

3. Once all the objects have been imported, you will get a recap as shown in the following screenshot:

Figure 6.62 – Saved objects successfully imported from the file
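As an aside, the same import can be scripted against Kibana's saved objects import API instead of clicking through the UI. The following is a minimal sketch; the Kibana URL, credentials, and file name are assumptions for illustration, and you may still need to resolve data view conflicts as described above:

```python
import requests

KIBANA_URL = "http://localhost:5601"                      # assumption: local Kibana instance
NDJSON_FILE = "rennesdata-drilldown-dashboard.ndjson"     # the file downloaded above

# POST the exported objects to the saved objects import endpoint.
# The kbn-xsrf header is required for Kibana API calls.
with open(NDJSON_FILE, "rb") as f:
    response = requests.post(
        f"{KIBANA_URL}/api/saved_objects/_import",
        params={"overwrite": "true"},
        headers={"kbn-xsrf": "true"},
        files={"file": (NDJSON_FILE, f, "application/ndjson")},
        auth=("elastic", "changeme"),                     # assumption: basic auth credentials
    )

response.raise_for_status()
print(response.json())   # summary of imported objects, similar to the recap in Figure 6.62
```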
Return to the [Rennes Traffic] Overview dashboard. Then, open the menu for the [Rennes Traffic] Speed by road hierarchy panel and select Create drilldown:

Figure 6.63 – Creating drilldown from the panel

4. Navigate to the drilldowns page and select the Go to Dashboard option. Here, you will need to name your drilldown; consider View Details for Road Hierarchy as a suggestion. Then, from the Choose destination dashboard drop-down menu, select the [Rennes Traffic] Detailed traffic drilldown dashboard, which you have recently imported. This process sets up a targeted navigation path within your dashboard environment, allowing for a seamless transition between your overview and detailed analysis dashboards:

Figure 6.64 – Configuring dashboard drilldown

5. Click on Create drilldown and save the dashboard. To test the drilldown, click on one of the five charts in the [Rennes Traffic] Speed by road hierarchy panel. You will be redirected to the detailed dashboard, filtered on the value you have selected.

Figure 6.65 – Dashboard view after drilldown

Et voilà! You have just built your first dashboard with a nice touch of interactivity thanks to controls and drilldowns.

How it works...

In Kibana, a dashboard is a collection of visualizations and saved searches that you can arrange and customize to display the data that is most important to you. You can create multiple dashboards for different use cases, and each dashboard can have its own set of visualizations and searches.

Dashboards are a powerful tool for data analysis because they allow you to see multiple visualizations side by side and quickly identify patterns and trends in your data. You can also use dashboards to monitor key metrics in real time, which is especially useful for operational use cases. Kibana provides a wide range of visualization types that you can use to create custom dashboards, including bar charts, line charts, pie charts, tables, and more.

The following table outlines a framework for choosing the right visualization:

| Use case | Recommended type of visualization |
| --- | --- |
| Comparison and correlation | Many items: Horizontal bar. Few items: Vertical bar |
| Comparison over time | Few periods and categories: Stacked bar. Few time periods but many categories: Line graph |
| Distribution of values | Few points: Vertical bar histogram. Many points: Line histogram |
| Composition of a whole | Simple compositions with few items: Waffle or Treemap. Multiple grouping dimensions for a few bottom-level items: Mosaic. Multiple grouping dimensions for many bottom-level items: Treemap |
| Eye-catching summary | One value: Metric. Many values: Table with color styling |
| Visualizing goals or targets | Vertical bar or Line with reference lines. Metric |

Table 6.2 – Choosing the right visualization

In addition to visualizations, Kibana dashboards also support saved searches, which allow you to quickly filter your data based on specific criteria. You can save searches that you use frequently and add them to your dashboard for easy access.

Overall, Kibana dashboards are a powerful tool for data analysis and monitoring. They allow you to quickly identify patterns and trends in your data, monitor key metrics in real time, and customize your view of the data to suit your needs.

There's more...

In our recipe, we have used dashboard drilldowns, but you can also create URL and Discover drilldowns. With the former, you can link to data outside of Kibana, and with the latter, you can open Discover from a Lens panel while keeping all the contextual information.

Dashboards are great when used in Kibana, but you can also share them with teams and colleagues outside of Kibana.
You have many options that are easily accessible from the Share menu in the toolbar when it comes to sharing dashboards: you can interactively embed dashboards as an iFrame, export them as reports in various formats (PNG, CSV, PDF, etc.), and share them as direct links for easy access.

When building dashboards, design thinking is a good practice. Start by asking yourself the following questions:

- What is the outcome or the goal of the dashboard? Is it about understanding high-level behaviors, visually correlating specific metrics at the same time, or finding the root cause of an issue?
- Who is using this dashboard to do their job? If you are building it for a team or someone else, step into their shoes to visualize their perspective when they will need that data.

See also

Looking for more design tips to elevate your dashboards? Check out this blog: https://www.elastic.co/blog/designing-intuitive-kibana-dashboards-as-a-non-designer

If you're interested in delving deeper into creating dashboards more efficiently, be sure to check out this technical blog: https://www.elastic.co/blog/building-kibana-dashboards-more-efficiently

For developers interested in debugging their Kibana dashboards, the following article will be very useful: https://www.elastic.co/blog/debugging-kibana-dashboards

Conclusion

In this guide, we've explored the process of integrating various visualizations into a comprehensive Kibana dashboard, enhancing user interaction through control-based drilldowns. By following the steps outlined, you should now have a functional and interactive dashboard that can provide valuable insights into your data.

We began by preparing the necessary visualizations and then moved on to assembling the dashboard by adding images for personalization and aligning various traffic visualizations. We also incorporated control panels for dynamic filtering, allowing for more precise data analysis. The final touch was adding drilldowns to enable seamless navigation between detailed and overview dashboards.

Kibana dashboards offer powerful tools for data analysis and real-time monitoring. By displaying multiple visualizations side by side, you can quickly identify patterns and trends, making dashboards invaluable for operational and analytical use cases.

Remember, the key to a successful dashboard is thoughtful design: consider the goals, the audience, and the specific data insights needed. Utilize the wide range of visualization types that Kibana offers and don't hesitate to leverage the sharing options to collaborate with your team effectively.

For further reading and advanced tips on designing intuitive dashboards, building them efficiently, or debugging, check out the additional resources provided. Happy dashboarding!

Author Bio

Huage Chen is a member of Elastic's customer engineering team and has been with Elastic for over five years, helping users throughout Europe to innovate and implement cloud-based solutions for search, data analysis, observability, and security. Before joining Elastic, he worked for 10 years in web content management, web portals, and digital experience platforms.

Yazid Akadiri has been a solutions architect at Elastic for over four years, helping organizations and users solve their data and most critical business issues by harnessing the power of the Elastic Stack. At Elastic, he works with a broad range of customers, with a particular focus on Elastic observability and security solutions.
He previously worked in web services-oriented architecture, focusing on API management and helping organizations build modern applications.


Hands-On Exploratory Data Analysis with DuckDB

Ned Letcher
28 Jun 2024
7 min read
This article is an excerpt from the book Getting Started with DuckDB, by Simon Aubury and Ned Letcher. Discover how DuckDB can help you load, transform, and analyze data efficiently through real-world examples and SQL recipes.

Introduction

DuckDB is a versatile and highly optimized database management system designed for efficient data analysis workflows. Its capabilities allow practitioners to scale their data analysis efforts beyond traditional tools, making it an excellent choice for local machine data processing. In this excerpt, we will explore how to use DuckDB for hands-on exploratory data analysis, leveraging Python, Jupyter Notebooks, and Plotly for interactive data visualizations.

Technical Requirements

To follow along with the examples in this guide, you will need the following setup:

- Python environment
- Jupyter Notebook
- DuckDB installed
- JupySQL library
- Plotly library

You can find the necessary code examples in the chapter_11 folder in the book's GitHub repository at [PacktPublishing](https://github.com/PacktPublishing/Getting-Started-with-DuckDB/tree/main/chapter_11).

Obtaining the Dataset

We will be using a pedestrian counting system dataset from the city of Melbourne, containing hourly pedestrian counts from sensors located in and around the Melbourne Central Business District (CBD). This dataset provides a comprehensive view of pedestrian traffic patterns over several years.

To download the dataset, visit the dataset's home page, [Melbourne Pedestrian Counting System](https://data.melbourne.vic.gov.au/explore/dataset/pedestrian-counting-system-monthly-counts-per-hour), and locate the ZIP file containing the 2009 to 2022 archive.

Setting Up the Environment

Before diving into the code, ensure your Python environment is set up with the necessary dependencies. You will need to:

1. Set up a Python virtual environment:

```
python -m venv duckdb_env
source duckdb_env/bin/activate
```

2. Install the required libraries:

```
pip install jupyter duckdb plotly jupysql pandas
```

3. Start Jupyter Notebook:

```
jupyter notebook
```

Loading and Cleaning Data

First, we will load our dataset from a CSV file and perform some data cleaning steps before writing it to a DuckDB database.

Loading CSV Data into DuckDB

```python
import duckdb
import pandas as pd

# Load the dataset into a pandas DataFrame
data_url = "path_to_downloaded_zip_file/2022/2022.csv"
pedestrian_counts = pd.read_csv(data_url)

# Display the first few rows of the dataframe
print(pedestrian_counts.head())

# Create a DuckDB connection and write the DataFrame to a DuckDB table
con = duckdb.connect(database=':memory:')
con.execute("CREATE TABLE pedestrian_counts AS SELECT * FROM pedestrian_counts")
```

Data Cleaning Steps

Perform necessary data cleaning operations such as handling missing values, correcting data types, and filtering irrelevant records.

```python
# Convert the 'Date_Time' column to datetime format
pedestrian_counts['Date_Time'] = pd.to_datetime(pedestrian_counts['Date_Time'])

# Handle missing values by filling them with 0
pedestrian_counts = pedestrian_counts.fillna(0)

# Write the cleaned data to DuckDB
con.execute("DROP TABLE pedestrian_counts")
con.execute("CREATE TABLE pedestrian_counts AS SELECT * FROM pedestrian_counts")

# Verify the cleaned data
result = con.execute("SELECT * FROM pedestrian_counts LIMIT 5").fetchdf()
print(result)
```
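As a side note, DuckDB can also ingest the CSV file directly with its read_csv_auto function, skipping the intermediate pandas DataFrame. A minimal sketch, assuming the same file path as above:

```python
import duckdb

con = duckdb.connect(database=':memory:')

# Let DuckDB infer column names and types straight from the CSV header
con.execute("""
    CREATE TABLE pedestrian_counts AS
    SELECT * FROM read_csv_auto('path_to_downloaded_zip_file/2022/2022.csv')
""")

# Quick sanity check on the ingested data
print(con.execute("SELECT COUNT(*) FROM pedestrian_counts").fetchall())
```

Loading this way lets DuckDB infer column types itself; the pandas route above remains handy when you want to reshape or clean the data in pandas before writing it to a table.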
Using JupySQL for SQL Queries

JupySQL is a powerful library that allows you to run SQL queries directly in Jupyter Notebooks. This makes it easy to interact with your DuckDB database without switching contexts.

Example JupySQL Query

```
%load_ext sql
%sql duckdb:///:memory:
```

```sql
%%sql
-- Query to view the first few rows of the dataset
SELECT * FROM pedestrian_counts LIMIT 5;
```

Visualizing Data with Plotly

Plotly is a versatile data visualization library that integrates well with Jupyter Notebooks. We will use it to create interactive visualizations of our dataset.

Total Pedestrian Counts Over Time

```python
import plotly.express as px

# Aggregate pedestrian counts by year
yearly_counts = con.execute("""
    SELECT strftime(Date_Time, '%Y') AS Year, SUM(Counts) AS Total_Counts
    FROM pedestrian_counts
    GROUP BY Year
    ORDER BY Year
""").fetchdf()

# Create a bar chart
fig = px.bar(yearly_counts, x='Year', y='Total_Counts',
             title='Total Pedestrian Counts by Year')
fig.show()
```

Monthly Traffic Counts

```python
# Aggregate pedestrian counts by month for the years 2019 and 2020
monthly_counts = con.execute("""
    SELECT strftime(Date_Time, '%Y-%m') AS Month, SUM(Counts) AS Monthly_Counts
    FROM pedestrian_counts
    WHERE strftime(Date_Time, '%Y') IN ('2019', '2020')
    GROUP BY Month
    ORDER BY Month
""").fetchdf()

# Create a line chart to compare the two years
fig = px.line(monthly_counts, x='Month', y='Monthly_Counts',
              title='Monthly Pedestrian Counts for 2019 and 2020')
fig.show()
```

Hourly Traffic Patterns

```python
# Aggregate pedestrian counts by hour of the day
hourly_counts = con.execute("""
    SELECT strftime(Date_Time, '%H') AS Hour, AVG(Counts) AS Average_Counts
    FROM pedestrian_counts
    GROUP BY Hour
    ORDER BY Hour
""").fetchdf()

# Create a line chart for hourly patterns
fig = px.line(hourly_counts, x='Hour', y='Average_Counts',
              title='Average Hourly Pedestrian Counts')
fig.show()
```

Exploratory Data Analysis

With our dataset loaded and visualized, we can perform a more detailed exploratory data analysis.

Comparing Traffic on Weekdays vs. Weekends

```python
# Aggregate pedestrian counts by day of the week.
# Day names are derived in SQL with DuckDB's dayname(), and the days are ordered
# with isodow() (Monday = 1); MySQL's FIELD() function is not available in DuckDB.
daily_counts = con.execute("""
    SELECT dayname(Date_Time) AS Day_of_Week, AVG(Counts) AS Average_Counts
    FROM pedestrian_counts
    GROUP BY Day_of_Week
    ORDER BY MIN(isodow(Date_Time))
""").fetchdf()

# Create a bar chart for daily patterns
fig = px.bar(daily_counts, x='Day_of_Week', y='Average_Counts',
             title='Average Pedestrian Counts by Day of the Week')
fig.show()
```

Peak Hours of Pedestrian Traffic

```python
# Identify peak hours by finding the hours with the highest average counts
peak_hours = con.execute("""
    SELECT strftime(Date_Time, '%H') AS Hour, AVG(Counts) AS Average_Counts
    FROM pedestrian_counts
    GROUP BY Hour
    ORDER BY Average_Counts DESC
    LIMIT 5
""").fetchdf()

# Create a bar chart for peak hours
fig = px.bar(peak_hours, x='Hour', y='Average_Counts',
             title='Peak Hours of Pedestrian Traffic')
fig.show()
```
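All of the queries above run against an in-memory database, so the table disappears when the notebook kernel stops. If you want the cleaned data to survive between sessions, or to hand results to another tool, a file-backed DuckDB database and a Parquet export are straightforward. A minimal sketch, with assumed file names:

```python
import duckdb

# Use a file-backed database so tables survive across sessions
con = duckdb.connect(database="pedestrians.duckdb")

# (Re)create the table from the downloaded CSV, as before
con.execute("""
    CREATE OR REPLACE TABLE pedestrian_counts AS
    SELECT * FROM read_csv_auto('path_to_downloaded_zip_file/2022/2022.csv')
""")

# Export the full table to Parquet for use in other tools
con.execute("COPY pedestrian_counts TO 'pedestrian_counts.parquet' (FORMAT PARQUET)")
```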
Conclusion

DuckDB, combined with JupySQL and Plotly, provides a robust framework for performing hands-on exploratory data analysis. By leveraging DuckDB's high-performance SQL capabilities and integrating with powerful visualization tools, you can efficiently uncover insights from your data. We encourage you to further explore DuckDB's features and apply these techniques to your datasets.

For a deeper dive into DuckDB's powerful data analysis capabilities and to explore more advanced topics, we highly recommend reading the book Getting Started with DuckDB by Simon Aubury and Ned Letcher.

Author Bio

Simon Aubury has been working in the IT industry since 2000 as a data engineering specialist. He has an extensive background in building large, flexible, highly available distributed data systems. Simon has delivered critical data systems for finance, transport, healthcare, insurance, and telecommunications clients in Australia, Europe, and Asia Pacific. In 2019, Simon joined ThoughtWorks as a principal data engineer and today is associate director of data platforms at Simple Machines in Sydney, Australia. Simon is active in the data community, a regular conference speaker, and the organizer of local and international meetups and data engineering conferences.

Ned Letcher has worked as a data science and software engineering consultant since completing his PhD in computational linguistics in 2018 and currently works at Thoughtworks. He has designed and developed data-powered products and services across a range of industries and helped organizations and teams improve the effectiveness of their data processes and workflows. Ned has also worked as a Python trainer, supporting both tertiary students and data professionals across various organizations. He is active in the data community, speaking at and helping organize meetups and conferences, as well as contributing to a range of open source projects.


Writing unit tests for R functions with ChatGPT

Dan MacLean
23 Apr 2024
5 min read
This article is an excerpt from the book R Bioinformatics Cookbook - Second Edition, by Dan MacLean. Discover over 80 recipes for modeling and handling real-life biological data using modern libraries from the R ecosystem.

Introduction

ChatGPT is beneficial for generating R unit tests because of its knowledge of programming patterns and best practices. It can understand code snippets, including functions and their expected behaviors, and generate relevant unit tests. By leveraging its training on diverse programming contexts, ChatGPT can assist in creating comprehensive test cases that cover different scenarios and edge cases. It helps ensure code correctness, identify potential bugs, and improve code reliability. ChatGPT's ability to generate unit tests saves developers time and effort, promotes software quality, and contributes to building robust and well-tested R code. In this recipe, we'll learn how to get ChatGPT to do the tedious work of generating test cases for code that we are working on.

Getting ready

We just need a ChatGPT account: https://chat.openai.com/.

How to do it…

To write unit tests for an R function with ChatGPT, do the following:

1. Paste the following request into ChatGPT: "write unit tests for the following R code"

```r
factorise_cols <- function(df, col_list){
  not_all_na <- function(x) any(!is.na(x))

  for (i in col_list){
    df <- df %>%
      dplyr::mutate(!!(i) := as.factor(!!(i)))
  }
  df %>% dplyr::select_if(not_all_na)
}
```

2. Read and check the output:

```r
library(dplyr)
library(testthat)

# Create a test data frame
test_df <- data.frame(
  col1 = c("A", "B", "C", NA),
  col2 = c(1, 2, NA, 4),
  col3 = c("X", "Y", "Z", NA)
)

# Define the expected output for the test data frame
expected_df <- data.frame(
  col1 = factor(c("A", "B", "C", NA)),
  col3 = factor(c("X", "Y", "Z", NA))
)

# Start defining the unit tests
test_that("factorise_cols correctly factorises selected columns", {
  # Call the factorise_cols function
  result_df <- factorise_cols(test_df, c("col1", "col3"))

  # Check if the selected columns are factorised
  expect_equal(result_df$col1, expected_df$col1)
  expect_equal(result_df$col3, expected_df$col3)

  # Check if other columns are not modified
  expect_equal(result_df$col2, test_df$col2)
})

test_that("factorise_cols returns a data frame with non-NA columns", {
  # Call the factorise_cols function
  result_df <- factorise_cols(test_df, c("col1", "col3"))

  # Check if the returned data frame only contains non-NA columns
  expect_true(all(!is.na(result_df$col1)))
  expect_true(all(!is.na(result_df$col3)))
  expect_true(all(is.na(result_df$col2)))
})
```

With that, we have an output that can test at least some facets of our code, and a good start towards a more dependable and workable code base.

How it works…

The recipe here takes advantage of ChatGPT's representations of the structure of the R programming language in order to write code that will effectively test some example code. In step 1, we simply define the function we wish to test and ask for tests.

In step 2, we see the output that ChatGPT generated in this instance. It has given us a pretty good set of unit tests. As with everything to do with ChatGPT, there isn't a guarantee that they are correct, but we can read and verify them very easily – certainly in much less time than it would take to write them.
One thing to note is that, in this case at least, ChatGPT hasn't generated tests for the case with only NA in a column, which we may decide we need. It is true that this isn't clear in the initial code, so generating the test has given us a new thought on the safe running of this function.

Conclusion

In conclusion, leveraging ChatGPT for unit testing R functions offers a transformative approach. Its adept understanding of programming nuances simplifies the arduous task of generating comprehensive tests, fostering code reliability and quality assurance. By effortlessly crafting diverse test cases, ChatGPT significantly reduces developers' workload, ensuring code correctness, identifying potential bugs, and fortifying the codebase against edge cases. While it doesn't guarantee absolute correctness, its output provides a solid foundation for enhancing code robustness. Embracing ChatGPT's capabilities not only saves time and effort but also contributes profoundly to building more dependable and well-tested R code, elevating the development process to new levels of efficiency and reliability.

Author Bio

Professor Dan MacLean has a Ph.D. in molecular biology from the University of Cambridge and gained postdoctoral experience in genomics and bioinformatics at Stanford University in California. Dan is now Head of Bioinformatics at the world-leading Sainsbury Laboratory in Norwich, UK where he works on bioinformatics, genomics, and machine learning. He teaches undergraduates, post-graduates, and post-doctoral students in data science and computational biology. His research group has developed numerous new methods and software in R, Python, and other languages with over 100,000 downloads combined.


5 reasons why you should use an open-source data analytics stack in 2020

Amey Varangaonkar
28 Jan 2020
7 min read
Today, almost every company is trying to be data-driven in some sense or the other. Businesses across all the major verticals, such as healthcare, telecommunications, banking, insurance, retail, and education, make use of data to better understand their customers, optimize their business processes and, ultimately, maximize their profits. This is a guest post sponsored by our friends at RudderStack.

When it comes to using data for analytics, companies face two major challenges:

- Data tracking: Tracking the required data from a multitude of sources in order to get insights out of it. As an example, tracking customer activity data such as logins, signups, purchases, and even clicks such as bookmarks from platforms such as mobile apps and websites becomes an issue for many eCommerce businesses.
- Building a link between the data and business intelligence: Once data is acquired, transforming it and making it compatible with a BI tool can often prove to be a substantial challenge.

A well-designed data analytics stack is essential in combating these challenges. It will ensure you're well-placed to use the data at your disposal in more intelligent ways, and it will help you drive more value.

What does a data analytics stack do?

A data analytics stack is a combination of tools which, when put together, allows you to bring together all of your data in one platform and use it to get actionable insights that help in better decision-making. A data analytics stack is built upon three fundamental steps:

1. Data Integration: This step involves collecting and blending data from multiple sources and transforming them into a compatible format for storage. The sources could be as varied as a database (e.g. MySQL), an organization's log files, or event data such as clicks, logins, and bookmarks from mobile apps or websites. A data analytics stack allows you to use all of such data together and use it to perform meaningful analytics.
2. Data Warehousing: This next step involves storing the data for the purpose of analytics. As the complexity of data grows, it makes sense to consolidate all the data in a single data warehouse. Some of the popular modern data warehouses include Amazon's Redshift, Google BigQuery, and platforms such as Snowflake and MarkLogic.
3. Data Analytics: In this final step, we use a visualization tool to load the data from the warehouse and use it to extract meaningful insights and patterns from the data, in the form of charts, graphs, and reports.
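To make the three steps concrete, here is a deliberately tiny, tool-agnostic sketch in Python: the event list, the SQLite database, and the final query stand in for a real integration tool, warehouse, and BI layer respectively, and all names are invented for illustration.

```python
import sqlite3

# 1. Data integration: collect raw events from different sources into one shape
raw_events = [
    {"source": "web",    "event": "signup",   "user_id": 1},
    {"source": "mobile", "event": "purchase", "user_id": 1},
    {"source": "web",    "event": "purchase", "user_id": 2},
]

# 2. Data warehousing: land the unified events in a queryable store
#    (SQLite stands in for a real warehouse such as Redshift or BigQuery)
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE events (source TEXT, event TEXT, user_id INTEGER)")
warehouse.executemany(
    "INSERT INTO events VALUES (:source, :event, :user_id)",
    raw_events,
)

# 3. Data analytics: aggregate the stored data to answer a business question
for row in warehouse.execute(
    "SELECT event, COUNT(*) FROM events GROUP BY event ORDER BY COUNT(*) DESC"
):
    print(row)   # e.g. ('purchase', 2), ('signup', 1)
```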
Choosing a data analytics stack - proprietary or open-source?

When it comes to choosing a data analytics stack, businesses are often left with two choices - buy it or build it. On one hand, there are proprietary tools such as Google Analytics, Amplitude, and Mixpanel, where the vendors alone are responsible for their configuration and management to suit your needs. With the best-in-class features and services that come along with these tools, your primary focus can just be project management, rather than technology management.

While proprietary tools have their advantages, there are also some major cons to them that revolve mainly around cost, data sharing, privacy concerns, and more. As a result, businesses today are increasingly exploring the open-source alternatives to build their data analytics stack.

The advantages of open source analytics tools

Let's now look at the 5 main advantages that open-source tools have over these proprietary tools.

Open source analytics tools are cost effective

Proprietary analytics products can cost hundreds of thousands of dollars beyond their free tier. For small to medium-sized businesses, the return on investment does not often justify these costs. Open-source tools are free to use, and even their enterprise versions are reasonably priced compared to their proprietary counterparts. So, with lower up-front costs, reasonable expenses for training, maintenance and support, and no cost for licensing, open-source analytics tools are much more affordable. More importantly, they're better value for money.

Open source analytics tools provide flexibility

Proprietary SaaS analytics products will invariably set restrictions on the ways in which they can be used. This is especially the case with the trial or the lite versions of the tools, which are free. For example, full SQL is not supported by some tools. This makes it hard to combine and query external data alongside internal data. You'll also often find that warehouse dumps provide no support either. And when they do, they'll probably cost more and still have limited functionality. Data dumps from Google Analytics, for instance, can only be loaded into Google BigQuery. Also, these dumps are time-delayed. That means the loading process can be very slow.

With open-source software, you get complete flexibility: from the way you use your tools, how you combine them to build your stack, and even how you use your data. If your requirements change - which, let's face it, they probably will - you can make the necessary changes without paying extra for customized solutions.

Avoid vendor lock-in

Vendor lock-in, also known as proprietary lock-in, is essentially a state where a customer becomes completely dependent on the vendor for their products and services. The customer is unable to switch to another vendor without paying a significant switching cost. Some organizations spend a considerable amount of money on proprietary tools and services that they heavily rely on. If these tools aren't updated and properly maintained, the organization using them is putting itself at a real competitive disadvantage.

This is almost never the case with open-source tools. Constant innovation and change is the norm. Even if the individual or the organization handling the tool moves on, the community can take over the project and maintain it. With open-source, you can rest assured that your tools will always be up-to-date without heavy reliance on anyone.

Improved data security and privacy

Privacy has become a talking point in many data-related discussions of late. This is thanks, in part, to data protection laws such as the GDPR and CCPA coming into force. High-profile data leaks have also kept the issue high on the agenda. An open-source analytics stack running inside your cloud or on-prem environment gives you complete control of your data. This lets you decide which data is to be used when, and how. It lets you dictate how third parties can access and use your data, if at all.

Open-source is the present

It's hard to counter the fact that open-source is now mainstream. Companies like Microsoft, Apple, and IBM are now not only actively participating in the open-source community, they're also contributing to it. Open-source puts you on the front foot when it comes to innovation. With it, you'll be able to leverage the power of a vibrant developer community to develop better products in more efficient ways.
How RudderStack helps you build an ideal open-source data analytics stack

RudderStack is a completely open-source, enterprise-ready platform to simplify data management in the most secure and reliable way. It works as a perfect data integration platform by routing your event data from data sources such as websites, mobile apps and servers, to multiple destinations of your choice - thus helping you save time and effort.

RudderStack integrates effortlessly with a multitude of destinations such as Google Analytics, Amplitude, MixPanel, Salesforce, HubSpot, Facebook Ads, and more, as well as popular data warehouses such as Amazon Redshift or S3. If performing efficient clickstream analytics is your goal, RudderStack offers you the perfect data pipeline to collect and route your data securely.

Learn more about RudderStack by visiting the RudderStack website, or check out its GitHub page to find out how it works.


Key skills for data professionals to learn in 2020

Richard Gall
20 Dec 2019
6 min read
It's easy to fall into the trap of thinking about your next job, or even the job after that. It's far more useful, however, to think more about the skills you want and need to learn now. This will focus your mind and ensure that you don't waste time learning things that simply aren't helpful. It also means you can make use of the things you're learning almost immediately. This will make you more productive and effective - and who knows, maybe it will make the pathway to your future that little bit clearer. So, to help you focus, here are some of the things you should focus on learning as a data professional.

Reinforcement learning

Reinforcement learning is one of the most exciting and cutting-edge areas of machine learning. Although the area itself is relatively broad, the concept is fundamentally about getting systems to 'learn' through a process of reward. Because reinforcement learning focuses on making the best possible decision at a given moment, it naturally finds many applications where decision making is important. This includes things like robotics, digital ad-bidding, configuring software systems, and even something as prosaic as traffic light control.

Of course, the list of potential applications for reinforcement learning could be endless. To a certain extent, the real challenge with it is finding new use cases that are relevant to you. But to do that, you need to learn and master it - so make 2020 the year you do just that. Get to grips with reinforcement learning with Reinforcement Learning Algorithms with Python.

Learn neural networks

Neural networks are closely related to reinforcement learning - they're essentially another element within machine learning. However, neural networks are even more closely aligned with what we think of as typical artificial intelligence. Indeed, even the name itself hints at the fact that these systems are supposed to in some way mimic the human brain.

Like reinforcement learning, there are a number of different applications for neural networks. These include image and language processing, as well as forecasting. The complexity of relationships that can be modeled inside neural network systems is useful for handling data with many different variables and intricacies that would otherwise be difficult to capture. If you want to find out how artificial intelligence really works under the hood, make sure you learn neural networks in 2020. Learn how to build real-world neural network projects with Neural Network Projects with Python.

Meta-learning

Meta-learning is another area of machine learning. It's designed to help engineers and analysts use the right machine learning algorithms for specific problems - it's particularly important in automatic machine learning, where removing human agency from the analytical process can lead to the wrong systems being used on data. Meta-learning does this by being applied to metadata about machine learning projects. This metadata will include information about the data, such as algorithm features, performance measures, and patterns identified previously. Once meta-learning algorithms have 'learned' from this data, they should, in theory, be well optimized to run on other sets of data.

It has been said that meta-learning is important in the move towards generalized artificial intelligence, or AGI (intelligence that is more akin to human intelligence).
This is because getting machines to learn about learning allows systems to move between different problems - something that is incredibly difficult with even the most sophisticated neural networks. Whether it will actually get us any closer to AGI is certainly open to debate, but if you want to be a part of the cutting edge of AI development, getting stuck into meta-learning is a good place to begin in 2020. Find out how meta-learning works in Hands-on Meta Learning with Python.

Learn a new programming language

Python is now the undisputed language of data. But that's far from the end of the story - R still remains relevant in the field, and there are even reasons to use other languages for machine learning. It might not be immediately obvious - especially if you're content to use R or Python for analytics and algorithmic projects - but because machine learning is shifting into many different fields, from mobile development to cybersecurity, learning how other programming languages can be used to build machine learning algorithms could be incredibly valuable. From the perspective of your skill set, it gives you a level of flexibility that will not only help you to solve a wider range of problems, but also stand out from the crowd when it comes to the job market.

The most obvious non-obvious languages to learn for machine learning practitioners and other data professionals are Java and Julia. But even new and emerging languages are finding their way into machine learning - Go and Swift, for example, could be interesting routes to explore, particularly if you're thinking about machine learning in production software and systems. Find out how to use Go for machine learning with Go Machine Learning Projects.

Learn new frameworks

For data professionals there are probably few things more important than learning new frameworks. While it's useful to become a polyglot, it's nevertheless true that learning new frameworks and ecosystem tools is going to have a more immediate impact on your work. PyTorch and TensorFlow should almost certainly be on your list for 2020. But we've mentioned them a lot recently, so it's probably worth highlighting other frameworks worth your focus: Pandas, for data wrangling and manipulation; Apache Kafka, for stream processing; scikit-learn, for machine learning; and Matplotlib, for data visualization.

The list could be much, much longer; however, the best way to approach learning a new framework is to start with your immediate problems. What's causing issues? What would you like to be able to do but can't? What would you like to be able to do faster? Explore TensorFlow eBooks and videos on the Packt store.

Learn how to develop and communicate a strategy

It's easy to just roll your eyes when someone talks about how important 'soft skills' are for data professionals. Except it's true - being able to strategize, communicate, and influence are what mark you out as a great data pro rather than a merely competent one. The phrase 'soft skills' is often what puts people off - ironically, despite the name, they're often even more difficult to master than technical skills. This is because, of course, soft skills involve working with humans in all their complexity. However, while learning these sorts of skills can be tough, it doesn't mean it's impossible. To a certain extent it largely just requires a level of self-awareness and reflexivity, as well as a sensitivity to wider business and organizational problems.
A good way of doing this is to step back and think of how problems are defined, and how they relate to other parts of the business. Find out how to deliver impactful data science projects with Managing Data Science. If you can master these skills, you’ll undoubtedly be in a great place to push your career forward as the year continues.

15 things every BI professional should know about Tableau

Fatema Patrawala
17 Dec 2019
8 min read
“The art and practice of visualizing data is becoming ever more important in bridging the human-computer gap to mediate analytical insight in a meaningful way.” ―Edd Dumbill Tableau is a powerful data visualization and discovery tool. It is an important part of a data analyst's or data scientist’s skill set, with many organizations specifying it as a key skill in job adverts. In this article, we’ll take a look at a few things in Tableau you need to know to successfully make a mark in your business intelligence career. While the architecture of traditional BI tools has hardware limitations, Tableau does not have such dependencies: it can function independently and requires minimal hardware support. Traditional tools are based on a complex set of technologies, whereas Tableau is based on Associative Search technology, making it intuitive, fast, and dynamic. Tableau supports in-memory, multi-thread, and multi-core computing and more advanced capabilities, while traditional BI tools do not offer such functionalities. Various Tableau products Tableau Desktop is a self-service business analytics and data visualization suite that anyone can use. With Tableau Desktop, you can extract massive amounts of data offline from your data warehouse for up-to-date data analysis. Tableau Online / Tableau Server is an online hosting platform designed for enterprise users. It lets users working in Tableau publish and share dashboards across organizations and teams. Tableau Reader is a free desktop application that enables you to open and view visualizations that are built in Tableau Desktop. Tableau Public is free Tableau software which you can use to make visualizations, but you will need to save your workbook or worksheets to the Tableau Public server for anyone else to view them. Different data types in Tableau All fields in a data source have a data type. The data type reflects the kind of information stored in that field, for example integers (410), dates (1/23/2015), and strings (“Wisconsin”). The data type of a field is identified in the Data pane by an icon. The data type icons in Tableau cover: text (string) values; date values; date & time values; numerical values; Boolean values (relational only), for example True/False; geographic values (used with maps); and cluster groups (source: Tableau website). Measures and Dimensions in Tableau Measures contain numeric, quantitative values that you can measure. Measures can be aggregated. When you drag a measure into the view, Tableau applies an aggregation to that measure (by default). Dimensions, on the other hand, contain qualitative values (such as names, dates, or geographical data). You can use dimensions to categorize, segment, and reveal the details in your data. Dimensions affect the level of detail in the view. Ways to connect data in Tableau You can either connect live to your data set or extract data into Tableau. Live: Connecting live to a data set leverages its computational processing and storage. New queries will go to the database and will be reflected as new or updated within the data. Extract: The Extract API allows you to programmatically extract and combine any data sources for use in Tableau. There can be multiple data source connections to different sources in the same workbook. Each connection will show up under the Data tab on the left sidebar. The benefit of a Tableau extract over a live connection is that the extract can be used anywhere without a connection, and you can build your own visualizations without connecting to the database.
You can read a complete section on how to extract data in Tableau from this book, Learning Tableau 2019 - Third Edition, written by Joshua Milligan. This book takes you from the foundations of the Tableau 2019 paradigm through to advanced topics. Joins and Blends in Tableau Joining tables and blending data sources are two different ways to link related data together in Tableau. Joins are performed to link tables of data together on a row-by-row basis. Blends are performed to link together multiple data sources at an aggregate level. Different filters in Tableau and different use cases in which these filters are more relevant than others In Tableau, filters are used to restrict the data coming from the database. Often, you will want to filter data in Tableau in order to perform an analysis on a subset of data, narrow your focus, or drill into detail. Tableau offers multiple ways to filter data. If you want to limit the scope of your analysis to a subset of data, you can filter the data at the source using one of the following techniques: Data Source Filters are applied before all other filters and are useful when you want to limit your analysis to a subset of data. Extract Filters limit the data that is stored in an extract (.tde or .hyper); data source filters are often converted into extract filters if they are present when you extract the data. Custom SQL Filters can be accomplished using a live connection with custom SQL, which has a Tableau parameter in the WHERE clause. Dual axis in Tableau Dual axis is a feature in Tableau that lets users view two measures on two scales in the same graph. Many websites, like Indeed.com and others, make use of dual axes to show the comparison between two measures and their growth rates over a specific set of years. A dual axis lets you compare multiple measures at once, with two independent axes layered on top of one another. Key components of a Tableau Dashboard Horizontal – Horizontal layout containers allow the designer to group worksheets and dashboard components left to right across your page and edit the height of all elements at once. Vertical – Vertical containers allow the user to group worksheets and dashboard components top to bottom down your page and edit the width of all elements at once. Text – All textual fields. Image Extract – A Tableau workbook is in XML format. In order to extract images, Tableau applies codes that extract an image which can be stored in the XML. Web [URL ACTION] – A URL action is a hyperlink that points to a web page, file, or other web-based resource outside of Tableau. You can use URL actions to link to more information about your data that may be hosted outside of your data source. To make the link relevant to your data, you can substitute field values of a selection into the URL as parameters. If you want to learn how to design dashboards in Tableau, the book Learning Tableau 2019 will give you a step-by-step process for designing dashboards. Why automate reports in Tableau Once you have automated reporting, you’ll have time to spend on innovative projects. What can be done manually could be performed by automation, delivering the same results in a fraction of the time. Reducing such time-consuming and repetitive tasks will make you more productive and more efficient.
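To make the report automation point concrete, here is a minimal sketch using the tableauserverclient Python library to sign in to a Tableau Server site and export a published view as a PNG image, which could then be scheduled or emailed. The server URL, site name, token details, and view name are hypothetical placeholders, and this is an illustrative sketch rather than an official recipe.

```python
import tableauserverclient as TSC

# Hypothetical connection details - replace with your own server and token.
SERVER_URL = "https://tableau.example.com"
SITE_NAME = "analytics"
TOKEN_NAME = "report-bot"
TOKEN_SECRET = "REPLACE_ME"

tableau_auth = TSC.PersonalAccessTokenAuth(TOKEN_NAME, TOKEN_SECRET, site_id=SITE_NAME)
server = TSC.Server(SERVER_URL, use_server_version=True)

with server.auth.sign_in(tableau_auth):
    # views.get() returns the first page of views plus pagination info;
    # a real script would page through all results.
    all_views, pagination = server.views.get()
    target = next(v for v in all_views if v.name == "Monthly Sales Summary")

    # Ask the server to render the view as an image and save it to disk.
    server.views.populate_image(target)
    with open("monthly_sales_summary.png", "wb") as f:
        f.write(target.image)
```

Wrapped in a scheduler such as cron, a script like this turns a manual export into a repeatable job.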
What is a story in Tableau? Why would you create a story, and what are stories used for? A story is a sheet that contains a sequence of worksheets or dashboards that work together to convey information. You can create stories to show how facts are connected, provide context, demonstrate how decisions relate to outcomes, or simply make a compelling case. Each individual sheet in a story is called a story point. The primary objective of creating stories in Tableau is to communicate data to a certain audience with an intended result. How can you create stories in Tableau? There is a feature in Tableau called Stories that allows you to tell a story using interactive snapshots of dashboards and views. The snapshots become points in a story. This allows you to construct a guided narrative or even an entire presentation. Read the chapter ‘Telling a Data Story with Dashboards’ from the book Learning Tableau 2019 to create insightful dashboards in Tableau. How to embed views into web pages You can embed interactive Tableau views and dashboards into web pages, blogs, wiki pages, web applications, and intranet portals. Embedded views update as the underlying data changes, or as their workbooks are updated on Tableau Server. Embedded views follow the same licensing and permission restrictions used on Tableau Server. That is, to see a Tableau view that’s embedded in a web page, the person accessing the view must also have an account on Tableau Server. Alternatively, if your organization uses a core-based license on Tableau Server, a Guest account is available. This allows people in your organization to view and interact with Tableau views embedded in web pages without having to sign in to the server. Contact your server or site administrator to find out if the Guest user is enabled for the site you publish to. What is Tableau Prep? Can we clean messy data with Tableau? Tableau Prep extends the Tableau platform with robust options for cleaning and structuring data for analysis in Tableau. In the same way that Tableau Desktop provides a hands-on, visual experience for visualizing and analyzing data, Tableau Prep provides a hands-on, visual experience for cleaning and shaping data. If you wish to know more about Tableau Prep or how to clean messy data to create powerful data visualizations and unlock intelligent business insights, read the book Learning Tableau 2019, written by Joshua N. Milligan. ‘Tableau Day’ highlights: Augmented Analytics, Tableau Prep Builder and Conductor, and more! Alteryx vs. Tableau: Choosing the right data analytics tool for your business How to do data storytelling well with Tableau [Video]

New QGIS 3D capabilities and future plans presented by Martin Dobias, a core QGIS developer

Bhagyashree R
13 Dec 2019
8 min read
In his talk titled QGIS 3D: current state and future at FOSS4G 2019, Martin Dobias, CTO of Lutra Consulting, talked about the new features in QGIS 3D. He also shared a list of features that could be added to QGIS 3D to make 3D rendering in QGIS more powerful. Free and Open Source Software for Geospatial (FOSS4G) 2019 was a five-day event held from Aug 26-30 in Bucharest. FOSS4G is a conference where geospatial professionals, students, and professors come together to discuss free and open-source software for geospatial storage, processing, and visualization. [box type="shadow" align="" class="" width=""] Further Learning This article explores the new features in QGIS 3D native rendering support. If you are embarking on your QGIS journey, check out our book Learn QGIS - Fourth Edition by Andrew Cutts and Anita Graser. In this book, you will explore the QGIS user interface, load your data, edit it, and then create data. QGIS often surprises new users with its mapping capabilities; you will discover how easily you can style and create your first map. But that’s not all! In the final part of the book, you’ll learn about spatial analysis and powerful tools in QGIS, and conclude by looking at Python processing options. [/box] 3D visualization in QGIS QGIS 3D native rendering support was introduced in QGIS 3. Prior to that, developers had to rely on third-party tools like NVIZ from GRASS GIS, GVIZ, the Globe plugin, the Qgis2threejs plugin, and more. Though these worked, “the integration was never great with the rest of QGIS,” remarks Dobias. In 2017, the QGIS grant proposal was accepted to start the initial work on QGIS 3D. A year later, QGIS 3 was announced with an interactive, fully integrated interface for you to work in 3D. QGIS 3 has a separate interface dedicated to 3D data visualization called 3D map view, which you can access from the View context menu. After you select this option, a new window will open that you can dock to the main panel. In the new window you will see all the layers that are visible in the main map view, with digital elevation and vector data rendered in 3D. With native QGIS 3D support you can render raster, vector, and mesh layers. It also provides various methods for visualizing and styling the 3D data depending on the data or geometry type. Here are some of the features that Dobias talked about: Point-based rendering Starting with QGIS 3, you have three ways to render points: Basic symbols: You can use symbols such as spheres, cylinders, boxes, or cubes, apply a color, and apply a few transformations. 3D models loaded from a file: You can use the Open Asset Import Library (Assimp) to load the 3D models. This library allows you to import and export a wide range of 3D model file formats, including Collada, Wavefront, and more. After loading the model you can make tweaks like changing the color. However, there are currently limitations: “you can only change the color of the whole model and not the individual components,” Dobias mentioned. Billboard rendering: This feature was contributed by Ismail Sunni as a part of the Google Summer of Code (GSoC) 2019 project, QGIS 3D Improvement. The billboard support, which was released in QGIS 3.10, allows you to render points as billboards in the 3D map view. Line rendering For line rendering, you have two options: Simple lines: In this approach, you define the width of a line in pixels and it does not change when you zoom in or zoom out. This technique preserves Z coordinates.
Buffered lines: In this approach, you define the line width in map units, so the rendered width scales with the map as you zoom in or out. Buffered rendering ignores Z coordinates. Polygon rendering For polygon rendering, you have four different options: Planar 3D entity: QGIS 3 provides a method to draw polygon geometries as planar polygons. Extrusion: Extrusion is a way to create 3D symbology from 2D features by stretching them vertically. QGIS now supports extruding a planar polygon to make it look like a box. You can specify a constant height or you can write an expression that determines it. Polyhedral surfaces or PolygonZ: QGIS 3 has a provision for creating polyhedral surfaces. A polyhedron is simply a three-dimensional solid that consists of a collection of polygons, usually joined at their edges. Triangular mesh or MultiPatch: This is similar to polyhedral surfaces, the only difference being that it consists of individual triangles. 3D map tools Navigation: You can use the mouse and keyboard to navigate the map. Now, with the latest QGIS release, you can also perform navigation using on-screen controls. Dobias said, “This is good for beginners when they are not completely sure about other means of moving the map.” Identify tool: With this tool, you can interact with the map canvas and get information on features in a pop-up window. It works exactly like its 2D counterpart, the only difference being that it operates on a 3D entity. Measurement tool: This tool was also built as part of the GSoC project. It enables you to measure real distances between given points. Other 3D capabilities Print layout support QGIS already had support for saving the 3D map view as an image file, but for print layouts you needed to perform multiple steps. You had to first save 3D scene images and then embed them within print layouts. Also, the resolution of the saved images was limited to the size of the 3D window. To simplify the use of 3D scenes for printing and allow high resolution scene exports, QGIS 3 supports a new type of layout item that is capable of high resolution exports of 3D map scenes. Camera animation support With QGIS 3D support, users can now define keyframes on a timeline with camera positions and view directions for various points in time. The 3D engine will interpolate camera parameters between keyframes to create animations. The resulting animations can then be played within the 3D view or exported frame-by-frame to a series of images. Configuration of lights By default, the 3D view has a single white light placed above the centre of the 3D scene. Now, users can set up the light source position, color, and intensity, and even define multiple lights for some interesting effects. Rule-based 3D rendering Previously, it was only possible to define one 3D renderer per layer, meaning all features appeared the same. QGIS 3 features rule-based rendering for 3D to make it much easier to apply more complex styling in 3D without having to duplicate vector layers and apply filters. There are many other 3D capabilities that you can explore, including terrain shading, better camera control, and more. Where you can find data for 3D maps Dobias shared a few great 3D city model formats that are free to use, including CityGML and CityJSON. To easily load CityJSON datasets in QGIS you can use the CityJSON Loader plugin. OpenStreetMap (OSM) is another project that provides buildings data. You can also use Google Dataset Search: just type CityGML in the search box and find the data you need.
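Once you have downloaded a buildings dataset (for example, OSM building footprints or a CityJSON file converted to a format QGIS reads directly), a quick sanity check before styling it in the 3D map view is to load it from the QGIS Python console. The following is a minimal sketch; the file path and layer name are hypothetical placeholders.

```python
# Run from the QGIS Python console; qgis.core is available inside QGIS.
from qgis.core import QgsProject, QgsVectorLayer

# Hypothetical path to a downloaded buildings dataset.
path = "/data/buildings.geojson"

layer = QgsVectorLayer(path, "buildings", "ogr")
if not layer.isValid():
    print("Layer failed to load - check the path and format")
else:
    QgsProject.instance().addMapLayer(layer)
    # Inspect the geometry type and feature count before opening the 3D map view.
    print(layer.wkbType(), layer.featureCount())
```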
QGIS 3D capabilities to expect in the future Dobias further talked about the future plans for QGIS 3D. Currently, the team is working on improving support for larger 3D scenes and on making them load faster. For the far future, Dobias shared a wishlist of features that could be implemented in QGIS to make its 3D support much more powerful: enhancing 3D rendering performance; more rendering techniques, like shadows and transparency; new materials to show textured objects; more styles for vector layers, such as lines and 3D pipes; more data types, such as point clouds and 3D rasters; support for formats like 3D Tiles and Arc SceneLayer; animation of data in scenes; a profile tool; Blender export; and rendering of point clouds. You just read about some of the latest features in QGIS 3 for 3D rendering. If you are new to QGIS and want to grasp its fundamentals, check out our book Learn QGIS - Fourth Edition by Anita Graser and Andrew Cutts. In this book, you will explore various ways to load data into QGIS, understand how to style data and present it in a map, and create maps and explore ways to expand them. You will get acquainted with the new processing toolbox in QGIS 3.4, manipulate your geospatial data and gain quality insights, and work with QGIS 3.4 in 3D. Why geospatial analysis and GIS matters more than ever today Top 7 libraries for geospatial analysis Uber’s kepler.gl, an open source toolbox for GeoSpatial Analysis

Elastic marks its entry in security analytics market with Elastic SIEM and Endgame acquisition

Bhagyashree R
13 Dec 2019
6 min read
For many years, Elastic Stack has served as an open-source, simple yet powerful interface for security analysts to detect and mitigate malicious behavior. However, Elastic marked its official entry into the security analytics market with Elastic SIEM in June this year. Since its initial release, Elastic SIEM has seen a number of enhancements, including machine learning-based anomaly detection, maps integration, and more. To further expand its presence in the security field, Elastic completed the acquisition of Endgame, a security company focused on endpoint prevention, detection, and response, in early October. Following this acquisition, Elastic introduced the Elastic Endpoint Security solution in October to help organizations “automatically and flexibly respond to threats in real-time.” The company has also eliminated per-endpoint pricing. In this article, we will look at what Elastic SIEM is, how it fits into the Elastic Stack, its components, and how a security operations team leverages Elastic SIEM to defend its data and infrastructure against attacks. [box type="shadow" align="" class="" width=""] Further learning This is a quick overview of the Elastic Stack. To learn more, check out our book, Learning Elastic Stack 7.0 - Second Edition by Pranav Shukla and Sharath Kumar M N. This book will give you a fundamental understanding of what the stack is all about, and help you use it efficiently to build powerful real-time data processing. [/box] Introducing Elastic SIEM Elastic SIEM is not a standalone product but rather builds on the existing Elastic Stack capabilities used for security analytics, including search, visualizations, dashboards, alerting, machine learning features, and more. A diagram from Elastic shows how Elastic SIEM fits into the Elastic Stack. The beta version of Elastic SIEM was released in June this year with Elastic Stack 7.2. It includes a new set of data integrations for security use cases and a dedicated app in Kibana. It enables users to analyze host-related and network-related security events as part of alert investigations, threat hunting, initial investigations, and triaging of events. You can access Elastic SIEM through the Elastic Cloud or by downloading its default distribution. Elastic SIEM supports the recently introduced Elastic Common Schema (ECS), a uniform way to represent data across different sources. ECS defines a common set of fields and objects for ingesting data into Elasticsearch, enabling users to centrally analyze information like logs, flows, and contextual data from across environments. Features of Elastic SIEM Host-related security event analysis The Hosts view shows key metrics regarding host-related security events and a set of data tables that enable interaction with the Timeline Event Viewer. For further investigation, you can drag-and-drop items of interest from the Hosts view tables to Timeline. This gives you deeper insight into hosts, unique IPs, user authentications, uncommon processes, and events. You can filter the Hosts view with the search bar at the top. To help you search faster, SIEM provides a search experience that combines traditional text-based search with a visual query builder that is deeply integrated with drag-and-drop throughout the SIEM app and powered by the Elastic Common Schema. Network-related security event analysis The Network view provides analysts with key network activity metrics and event tables.
You can drag-and-drop these tables to Timeline for further investigation to get deeper insight into source and destination IPs, top DNS domains, users, Transport Layer Security (TLS) certificates, and more. Starting with Elastic Stack 7.4, you have Elastic Maps integrated right into Elastic SIEM. The interactive map is created based on live data that analysts can search, filter, and explore in real-time. The map gives analysts an overview of the network traffic. They can simply hover over source and destination points to uncover more details such as hostnames and IP addresses. They can also click a hostname to go to the SIEM Hosts view, or an IP address to open the relevant network details. This integration lets Elastic SIEM leverage the geospatial analytics and search capabilities of Elastic Maps. It also uses the new point-to-point line feature to easily visualize the connections in your data. Timeline Event Viewer The Timeline Event Viewer enables security analysts to gather and store evidence of an attack. They can pin and annotate relevant events, comment on and share their findings, and do everything within Kibana. It is a collaborative workspace for investigations or threat hunting where analysts can easily drag objects of interest from the Network and Hosts views for further investigation. Anomaly detection with machine learning integration Cyber attacks today have become so sophisticated that it is hard to maintain an effective defense with just a set of static rules. Recognizing the importance of automated analysis and detection, Elastic integrated machine learning capabilities right into the SIEM app in 7.3. This allowed security analysts to enable and run a set of machine learning anomaly detection jobs designed to detect specific cyber attack behaviors. The detected anomalies are then displayed on the Hosts and Network views in the SIEM app. However, in Elastic SIEM 7.3, there were only three built-in anomaly detection jobs. In the latest release (7.4), Elastic has added thirteen more anomaly detection jobs, some of which detect anomalous network activity, anomalous processes, anomalous path activity, anomalous PowerShell scripts, and more. This machine learning integration is extensible, allowing users to add their own jobs to the SIEM job group. These were some of the key features in Elastic SIEM. Check out the Elastic SIEM 7.4 release announcement to know more. Also, to get a better understanding of how Elastic SIEM works, see the webinar Hands-on with Elastic SIEM: Defending your organization with the Elastic Stack by Elastic. To get started with Elastic Stack you can check out our book Learning Elastic Stack 7.0 - Second Edition. This book will help you learn how to use Elasticsearch for distributed searching and analytics, Logstash for logging, and Kibana for data visualization. As you work through the book, you will discover the technique of creating custom plugins using Kibana and Beats. The book also touches upon Elastic X-Pack, a useful extension for effective security and monitoring. You’ll also find helpful tips on how to use Elastic Cloud and deploy Elastic Stack in production environments. How to push Docker images to AWS’ Elastic Container Registry (ECR) [Tutorial] Core security features of Elastic Stack are now free! Elastic Stack 6.7 releases with Elastic Maps, Elastic Update and much more!
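As a practical aside to the ECS discussion above, the same fields the SIEM app relies on can be queried directly from Elasticsearch. The following is a minimal sketch using the official elasticsearch Python client against a 7.x cluster; the cluster URL and the filebeat-* index pattern are placeholder assumptions for a typical Beats-based setup.

```python
from elasticsearch import Elasticsearch

# Placeholder cluster address - point this at your own deployment.
es = Elasticsearch("http://localhost:9200")

# Count failed authentication events per host over the last 24 hours,
# using ECS fields (event.category, event.outcome, host.name).
query = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"event.category": "authentication"}},
                {"term": {"event.outcome": "failure"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "aggs": {"by_host": {"terms": {"field": "host.name", "size": 10}}},
}

response = es.search(index="filebeat-*", body=query)
for bucket in response["aggregations"]["by_host"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```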

Why decision trees are more flexible than linear models, explains Stephen Klosterman

Guest Contributor
11 Dec 2019
7 min read
This blog post will examine a hypothetical dataset of website visits and customer conversion, to illustrate how decision trees are a more flexible mathematical model than linear models such as logistic regression. Imagine you are monitoring the webpage of one of your products. You are keeping track of how many times individual customers visit this page, the total amount of time they've spent on the page across all their visits, and whether or not they bought the product. Your goal is to be able to predict, for future visitors, how likely they are to buy the product, based on the page visit data. You are considering presenting a discount, or some other kind of offer, to customers you think are likely to buy the product but haven't yet. Get to know more about decision trees and linear models! [box type="shadow" align="" class="" width=""]If you are interested in building your knowledge to prepare data for regularized logistic regression and random forest algorithms, read our book Data Science Projects with Python written by Stephen Klosterman. This book will give you practical guidance on industry-standard data analysis and machine learning tools in Python, with the help of realistic data. You will also learn how to use pandas and Matplotlib to critically examine a dataset with summary statistics and graphs and extract the insights you seek to derive. [/box] After logging the data on many customers, you visualize them and see the following, including some jitter to help see all the data points: There are several interesting patterns visible here. We see that in general, the longer someone spends on the page, the more likely they are to purchase the item. However, this effect seems to depend on the number of visits, in a complex way. Someone who visited the page once and spent at least two minutes there (i.e. two minutes per visit) seems likely to buy, at least up until 18 or so minutes. But someone who visited 10 times as much as this seems likely to buy after only 12 minutes cumulative time (1.2 minutes per visit). Additionally, there is a phenomenon of customers who spend a relatively long time (at least 18 or 19 minutes) over a relatively small number of visits (just one or two), who don't buy. Maybe they opened the page, but then walked away from their computer, and closed the page as soon as they came back. Whatever the reason, the patterns in this data set are interesting and complicated. If you want to create a predictive model of these data, you should consider the likely success of non-linear models, such as decision trees, versus linear models, such as logistic regression (for more information see chapter 3 of my book, Data Science Projects with Python). Logistic Regression as a linear model At a high level, linear models will take the feature space (the two-dimensional space where time is on the x-axis and number of visits is on the y-axis, as in the graph above), and seek to draw a straight line somewhere that creates an accurate division of the two classes of the response variable ("Bought" or "Did not buy"). Consider how likely this is to work. Where would you draw a straight line on the graph above, so that the two regions on either side of the line would primarily contain responses of only one class? It should be apparent that this is not likely to be an entirely successful task. 
The best you could probably do would be to draw a line that isolates non-buying customers who spent relatively little time on the page, represented by the region of dots to the left of the graph, from the blue dots representing buying customers to the right. While this would basically ignore the little group of customers to the lower right, it's probably the best you could do overall for most customers, using the straight-line approach. In fact, this is essentially what a logistic regression classifier looks like when the model is calibrated to these data. The above graph shows the regions of prediction ("Unlikely to buy" and "Likely to buy") as red or blue shading in the background. Deeper colors indicate a higher likelihood for either class. The conceptual straight-line decision boundary that divides the two regions mentioned above would run right through the white portion of the background, where the probability of belonging to either class is very low. In other words, the model is "uncertain" about what prediction to make in this region. From the above graph, it can be seen that in addition to ignoring the small group of non-buying customers in the lower right, a straight line is also not a great model for isolating the non-buying customers on the left of the graph. While you can imagine that a curve might be able to define this boundary, a straight line cannot. Decision Trees as a non-linear model How can we do better? Enter non-linear models. Decision trees are a prime example of non-linear models. Decision trees work by dividing the data up into regions based on "if-then" style questions. For example, if a user spends less than three minutes over two or fewer visits, how likely are they to buy? Graphically, by asking many of these types of questions, a decision tree can divide up the feature space using little segments of vertical and horizontal lines. This approach can create a much more complex decision boundary, as shown below. It should be clear that decision trees can be used with more success to model this data set. Given this, you would have a better model for the likelihood of customer conversion and could then proceed to design offers to increase conversion (for more information see chapter 5 of my book, Data Science Projects with Python). In conclusion, this post has shown how non-linear models, such as decision trees, can more effectively describe relationships in complex data sets than linear models, such as logistic regression. It should be noted that linear models can be extended to non-linearity by various means, including feature engineering. On the other hand, non-linear models may suffer from overfitting, since they are so flexible. Nonetheless, approaches to prevent decision trees from overfitting have been formulated using ensemble models such as random forests and gradient boosted trees, which are among the most successful machine learning techniques in use today. As a final caveat, note that this blog post presents a hypothetical, synthetic data set, which can be modeled almost perfectly with decision trees. Real-world data is messier, but the same principles hold. I hope you found this conceptual discussion helpful. For a more detailed explanation of how decision trees and logistic regression work "under the hood" with real-world data, and the Python code for a similar hypothetical example to that shown here, check out my book Data Science Projects with Python.
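To experiment with this contrast yourself, here is a minimal sketch (not the author's original code) that generates a rough two-feature dataset loosely inspired by the example above and compares a logistic regression with a shallow decision tree using scikit-learn; the data-generating rule is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000

# Two features: total minutes spent on the page and number of visits.
minutes = rng.uniform(0, 25, n)
visits = rng.integers(1, 11, n)

# Invented, non-linear "bought" rule in the spirit of the article:
# enough time per visit converts (up to a point), long single visits do not.
bought = (((minutes / visits) > 1.5) & (minutes < 18)) | ((visits >= 5) & (minutes > 12))
y = bought.astype(int)

X = np.column_stack([minutes, visits])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("logistic regression", LogisticRegression()),
                    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0))]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```

On data shaped like this, the tree's axis-aligned splits can trace a non-linear boundary that a single straight line cannot.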
Author Bio Stephen Klosterman is a machine learning data scientist and the author of the book Data Science Projects with Python. He enjoys helping to frame problems in a data science context and delivering machine learning solutions that business stakeholders understand and value. His education includes a Ph.D. in biology from Harvard University, where he was an assistant teacher of the data science course. About the Book This book Data Science Projects with Python will help you understand the working and output of machine learning algorithms and gain insight into not only the predictive capabilities of the models but also their reasons for making these predictions. The book also provides detailed insight on how to build a classification model and how to conduct a financial analysis to find the optimal threshold for binary classification. This will help you with financial budgeting and operational strategy for a well-optimized usage model. At the end of this book, you will be able to confidently use various machine learning algorithms to perform detailed data analysis. Netflix open-sources Metaflow, its Python framework for building and managing data science projects What does a data science team look like? Get Ready for Open Data Science Conference 2019 in Europe and California How to learn data science: from data mining to machine learning Dr.Brandon explains Decision Trees to Jon

Why geospatial analysis and GIS matters more than ever today

Richard Gall
18 Nov 2019
7 min read
Due to the hype around big data and artificial intelligence, it can be easy to miss some of the powerful but specific ways data can be truly impactful. One of the most important areas of modern data analysis that rarely gets given its due is geospatial analysis. At a time when both the natural and human worlds are going through a period of seismic change, the ability to throw a spotlight on issues of climate and population change is as transformative as the smartest chatbot (indeed, probably much more transformative). The foundation of geospatial analysis is GIS. GIS, in case you’re new to the field, is an acronym for Geographic Information System. GIS applications and tools allow you to store, manipulate, analyze, and visualize data that corresponds to different aspects of the existing environment. Central to this is topographical information, but it could also include many other aspects, from contours and slopes to the built environment, land types, and bodies of water. In the context of climate and human geography it’s easy to see how this kind of data can help us see the bigger picture - quite literally - behind what’s happening in our region, across our countries, and indeed, across the whole world. The history of geospatial analysis is a testament to its power. In 1854, physician John Snow identified the source of a cholera outbreak in London by marking out the homes of victims on a map. The cluster of victims that Snow’s map revealed led him to an infected water supply. Read next: Neo4j introduces Aura, a new cloud service to supply a flexible, reliable and developer-friendly graph database How GIS and geospatial analysis is being used today While this example is, of course, incredibly low-tech, it highlights exactly why geospatial analysis and GIS tools can be so valuable. To bring us up to date, there are many more examples of how geospatial analysis is making a real impact on social and environmental issues. This article on Forbes, for example, details some of the ways in which GIS projects are helping to uncover information that offers some unique insights into the history of racism, and its continuing reality today. The list includes a map of historical lynchings occurring between 1877 and 1950, and a map by the Urban Institute that shows the reality of racial segregation in U.S. schools in the 21st century. https://twitter.com/urbaninstitute/status/504668921962577921 That’s just a small snapshot - there is a huge range of incredible GIS projects that are having a massive impact not only on how we understand issues, but even on policy. That's analytics enacting real, demonstrable change. Here are a few of the different areas in which GIS is being used: How GIS can be used in agriculture GIS can be used to tackle crop diseases by identifying issues across a large area of land. It’s possible to gain a deeper insight into what can drive improvements to crop yields by looking at the geographic and environmental factors that influence successful growth. How GIS can be used in retail GIS can help provide an insight into the relationship between consumer behavior and factors such as weather and congestion. It can also be used to better understand how consumers interact with products in shops. This can influence things like store design and product placement. How GIS can be used in meteorology and climate science Without GIS, it would be impossible to properly understand and visualise rainfall around the world. GIS can also be used to make predictions about the weather.
For example, identifying anomalies in patterns and trends could indicate extreme weather events. How GIS can be used in medicine and health As we saw in the example above, by identifying clusters of disease, it becomes much easier to determine the causes of certain illnesses. GIS can also help us better understand the relationship between illness and environment - like pollution and asthma. How GIS can be used for humanitarian purposes Geospatial tools can help humanitarian teams to understand patterns of violence in given areas. This can help them to better manage and distribute resources and support to where it’s needed (Map Kibera is a great example of how this can be done). GIS tools are good at helping to bridge the gap between local populations and humanitarian workers in times of crisis. For example, during the Haiti earthquake, non-profit tech company Ushahidi’s product helped to collate and coordinate reports from across the island. This made it possible to align what might have otherwise been a mess of data and information. There are many, many more examples of GIS being used for both commercial and non-profit purposes. If you want an in-depth look at a huge range of examples, it’s well worth checking out this article, which features 1000 GIS projects. Although geospatial analysis can be used across many different domains, all the examples above have a trend running through them: they all help us to understand the impact of space and geography. From social mobility and academic opportunity to soil erosion, GIS and other geospatial tools are brilliant because they help us to identify relationships that we might otherwise be unable to see. GIS and geospatial analysis project ideas This is an important point if you’re not sure where to start when it comes to a new GIS project. Forget the data (to begin with at least) and just think about what sort of questions you’d like to answer. The list is potentially endless, but here are some questions that I thought of just off the top of my head: Are certain parts of your region more prone to flooding? Why are certain parts of your town congested and not others? Do economically marginalised people have to travel further to receive healthcare? Does one part of your region receive more rainfall or snowfall than other parts? Are there more new buildings in one area than another? Getting this right is integral to any good analysis project. Ultimately it’s what makes the whole thing worthwhile. Read next: PostGIS 3.0.0 releases with raster support as a separate extension Where to find data for a GIS project Once you’ve decided on something you want to find out, the next part is to collect your data. This can be tricky, but there is nevertheless a massive range of free data sources you can use for your project. This web page has a comprehensive collection of datasets; while it might not have exactly what you’re looking for, it's nevertheless a good place to begin if you simply want to try something out.
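If you want to try something hands-on while you settle on a question, a quick first step is to load a boundary file and join a variable of interest to it. The following is a minimal sketch using geopandas and matplotlib; the file names and column names are hypothetical placeholders for whatever open data you download.

```python
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: district boundaries plus a CSV of rainfall readings.
districts = gpd.read_file("districts.shp")        # geometry + district_id
rainfall = pd.read_csv("annual_rainfall.csv")     # district_id, rainfall_mm

# Join the attribute data onto the geometries and map it as a choropleth.
joined = districts.merge(rainfall, on="district_id")
joined.plot(column="rainfall_mm", cmap="Blues", legend=True)
plt.title("Annual rainfall by district")
plt.show()
```

Even a simple choropleth like this is often enough to show whether the question you have in mind is worth pursuing with a fuller analysis.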
Conclusion: Geospatial analysis is one of the most exciting and potentially transformative fields in analytics. GIS and geospatial analysis are quite literally rooted in the real world. In the maps and visualizations that we create, we’re able to offer unique perspectives on history or provide practical guidance on how we should act and what we need to do. This is significant: all too often technology can feel like it's divorced from reality, as if it is folded into its own world that has no connection to real people. So, be ambitious, and be bold with your next GIS project: who knows what impact it could have.

Bad Metadata can get you in legal hot water

Guest Contributor
21 Sep 2019
6 min read
Metadata isn't just something that concerns business intelligence and IT teams; lawyers are extremely interested in it as well. Metadata, it turns out, can win or lose lawsuits, send politicians to jail, and even decide medical malpractice cases. It's not uncommon for attorneys who conduct discovery of electronic records in organizations to find that the claims of plaintiffs or defendants are contradicted by metadata, like time and date, type of data, etc. If a discovery process is initiated against them, an organization had better be sure that its metadata is in order. All it would take for an organization to lose a case would be for an attorney to discover a discrepancy in different databases – a different time stamp on some communication, a different job title for a principal in the case. Such discrepancies could lead to accusations of data tampering, fraud, or worse – and would most definitely put the organization in a very tough position versus a judge or jury. Metadata errors are difficult to spot The problem with that, of course, is that catching metadata errors is extremely difficult. In large organizations, data is stored in repositories that are spread throughout the organization – maybe even the world – in different departments. Each department is responsible for maintaining its own database and the metadata in it, and data also sits in different cloud storage repositories, which may have their own systems of classifying data. An enterprising attorney could have a field day with the different categories and tags data is stored under, making claims that the organization is trying to “hide something.” The organization's only defense: we're poor administrators. That may not be enough to impress the court. Types of Metadata Metadata is “data about data,” and comes in three flavors: System Metadata, which is data that is automatically generated by the computer and includes specific labeled criteria, like the date and time of creation and the date a document was modified; Substantive Metadata, which reflects changes to a document, like tracked changes; and Embedded Metadata, which is data entered into a document or file but not normally visible, such as formulas in cells in an Excel spreadsheet. All of these have increasingly become targets for attorneys in recent years. Metadata has been used in thousands of cases – medical, financial, patent and trademark law, product liability, civil rights, and many more. Metadata is both discoverable and admissible as evidence. According to one New York court, “General information about the creation of a document, including who authored a document and when it was created, is pedigree information often important for purposes of determining admissibility at trial.” According to legal experts, “from a legal standpoint metadata is evidence… that describes the characteristics, origins, usage, and validity of other electronic evidence.” The biggest metadata-linked payout until now – $10.8 million – occurred in 2017, when a jury awarded a plaintiff $8 million (eventually this was increased to nearly $11 million) after he claimed he was fired from a biotechnology company for telling authorities about potential bribery in China. The key piece of evidence was the metadata timestamp on a performance review that was written after the plaintiff was fired; with that evidence, the court increased the defendant's payout for violating laws against firing whistleblowers.
In that case, records claiming that the employee was fired for cause were belied by the metadata in the performance review. That, of course, was a case in which there was clear wrongdoing by an organization. But the same metadata errors could have cropped up in any number of scenarios, even if no laws were broken. The precedent in this case, and others like it, might be enough to convince the court to penalize an organization based on claims of a plaintiff. How can organizations defend themselves from this legal bind of metadata The answer would seem obvious; get control of your metadata and make sure it corresponds to the data it represents. With that kind of control over data, organizations would discover for themselves if something was amiss that could cost them in a settlement later. But execution of that obvious answer is a different story. With reams of data to pore through, it would take an organization's business intelligence team months, or even years, to manually sift through the databases. And because to err is human, there would be no guarantee they hadn't missed something. Clearly Business Intelligence and Data Analysis teams need some help in doing this. One solution would be to hire more staff, expanding teams at least temporarily to make sense of the data and metadata that could prove problematic. There are services that will lend their staff to an organization to do just that, and for companies that prefer the “human touch,” adding that temporary staff may be the best solution. Another idea is to automate the process, with advanced tools that will do a full examination of data, both across systems and within repositories themselves. Such automated tools would examine the data in the various repositories and find where the metadata for the same information is different – pointing BI teams in the right direction and cutting down on the time needed to determine what needs to be fixed. Using automated metadata management tools, companies can ensure that they remain secure. If a company is being sued and discovery has commenced, it will be too late for the organization to fix anything. Honest mistakes or disorganized file keeping can no longer be corrected, and the fate of the organization will be in the hands of a jury or a judge. Automated metadata management tools can help Business Intelligence and Data Analysis teams figure out which metadata entries are not consistent across the repositories, ensuring that things are fixed before discovery takes place. There are a variety of tools on the market, with various strengths and weaknesses. Companies will need to decide whether a data dictionary, a business glossary, or a more all-encompassing product best answers their needs. They’ll also need to make sure the enterprise software they currently use is supported by the metadata management solution they are after. As the market develops, AI will be a huge distinguishing factor between metadata solutions, as machine learning will reduce the cost and manpower investment of solution onboarding significantly. With the success of recent metadata-based lawsuits, you can be sure more attorneys will be using metadata in their discovery processes. Organizations that want to defend themselves need to get their data in order, and ensure that they won't end up losing lots of money because of their own errors. Author Bio Amnon Drori is the Co-Founder and CEO of Octopai and has over 20 years of leadership experience in technology companies. 
Before co-founding Octopai he led sales efforts at companies like Panaya (Acquired by Infosys), Zend Technologies (Acquired by Rogue Wave Software), ModusNovo and Alvarion. Other interesting news in Tech Media manipulation by Deepfakes and cheap fakes require both AI and social fixes, finds a Data and Society report. Open AI researchers advance multi-agent competition by training AI agents in a hide and seek environment. France and Germany reaffirm blocking Facebook’s Libra cryptocurrency

How to ace a data science interview

Richard Gall
02 Sep 2019
12 min read
So, you want to be a data scientist. It’s a smart move: it’s a job that’s in high demand, can command a healthy salary, and can also be richly rewarding and engaging. But to get the job, you’re going to have to pass a data science interview - something that’s notoriously tough. One of the reasons for this is that data science is a field that is incredibly diverse. I mean that in two different ways: on the one hand it’s a role that demands a variety of different skills (being a good data scientist is about much more than just being good at math). But it's also diverse in the sense that data science will be done differently at every company. That means that every data science interview is going to be different. If you specialize too much in one area, you might well be severely limiting your opportunities. There are plenty of articles out there that pretend to have all the answers to your next data science interview. And while these can be useful, they also treat job interviews like they’re just exams you need to pass. They’re not - you need to have a wide range of knowledge, but you also need to present yourself as a curious and critical thinker, and someone who is very good at communicating. You won’t get a data science job by knowing all the answers. But you might get it by asking the right questions and talking in the right way. So, with all that in mind, here’s what you need to do to ace your data science interview. Know the basics of data science This is obvious but it’s impossible to overstate. If you don’t know the basics, there’s no way you’ll get the job - indeed, it’s probably better for your sake that you don’t get it! But what are these basics? Basic data science interview questions "What is data science?" This seems straightforward, but proving you’ve done some thinking about what the role actually involves demonstrates that you’re thoughtful and self-aware - a sign of any good employee. "What’s the difference between supervised and unsupervised learning?" Again, this is straightforward, but it will give the interviewer confidence that you understand the basics of machine learning algorithms. "What is the bias-variance tradeoff? What is overfitting and underfitting?" Being able to explain these concepts in a clear and concise manner demonstrates your clarity of thought. It also shows that you have a strong awareness of the challenges of using machine learning and statistical systems. If you’re applying for a job as a data scientist you’ll probably already know the answers to all of these. Just make sure you have a clear answer and that you can explain each in a concise manner. Know your algorithms Knowing your algorithms is a really important part of any data science interview. However, it’s important not to get hung up on the details. Trying to learn everything about every algorithm you know isn’t only impossible, it’s also not going to get you the job. What’s important instead is demonstrating that you understand the differences between algorithms, and when to use one over another. Data science interview questions about algorithms you might be asked "When would you use a supervised machine learning algorithm?" "Can you name some supervised machine learning algorithms and the differences between them?" (supervised machine learning algorithms include Support Vector Machines, Naive Bayes, K-nearest Neighbor Algorithm, Regression, Decision Trees) "When would you use an unsupervised machine learning algorithm?"
(unsupervised machine learning algorithms include K-Means, autoencoders, Generative Adversarial Networks, and Deep Belief Nets.) "Name some unsupervised machine learning algorithms and explain how they’re different from one another." "What are classification algorithms?" There are others, but try to focus on these as core areas. Remember, it’s also important to always talk about your experience - that’s just as useful, if not more useful, than listing off the differences between different machine learning algorithms. Some of the questions you face in a data science interview might even be about how you use algorithms: "Tell me about the time you used an algorithm. Why did you decide to use it? Were there any other options?" "Tell me about a time you used an algorithm and it didn’t work how you expected it to. What did you do?" When talking about algorithms in a data science interview it’s useful to present them as tools for solving business problems. It can be tempting to talk about them as mathematical concepts, and although it’s good to show off your understanding, showing how algorithms help solve real-world business problems will be a big plus for your interviewer. Be confident talking about data sources and infrastructure challenges One of the biggest challenges for data scientists is dealing with incomplete or poor quality data. If that’s something you’ve faced - or even if it’s something you think you might face in the future - then make sure you talk about that. Data scientists aren’t always responsible for managing a data infrastructure (that will vary from company to company), but even if that isn’t in the job description, it’s likely that you’ll have to work with a data architect to make sure data is available and accurate enough to be able to carry out data science projects. This means that understanding topics like data streaming, data lakes, and data warehouses is very important in a data science interview. Again, remember that it’s important that you don’t get stuck on the details. You don’t need to recite everything you know, but instead talk about your experience or how you might approach problems in different ways. Data science interview questions you might get asked about using different data sources "How do you work with data from different sources?" "How have you tackled dirty or unreliable data in the past?" Data science interview questions you might get asked about infrastructure "Talk me through a data infrastructure challenge you’ve faced in the past" "What’s the difference between a data lake and a data warehouse? How would you approach each one differently?" Show that you have a robust understanding of data science tools You can’t get through a data science interview without demonstrating that you have knowledge and experience of data science tools. It’s likely that the job you’re applying for will mention a number of different skill requirements in the job description, so make sure you have a good knowledge of them all. Obviously, the best case scenario is that you know all the tools mentioned in the job description inside out - but this is unlikely. If you don’t know one - or more - make sure you understand what they’re for and how they work. The hiring manager probably won’t expect candidates to know everything, but they will expect them to be ready and willing to learn. If you can talk about a time you learned a new tool, that will give the interviewer a lot of confidence that you’re someone who can pick up knowledge and skills quickly.
Show you can evaluate different tools and programming languages

Another element here is being able to talk about the advantages and disadvantages of different tools. Why might you use R over Python? Which Python libraries should you use to solve a specific problem? And when should you just use Excel? Sometimes the interviewer might ask for your own personal preferences. Don't be afraid to give your opinion - as long as you have a considered explanation for why you hold it, you're fine!

Read next: Why is Python so good for AI and Machine Learning? 5 Python Experts Explain

Data science interview questions about tools that you might be asked

"What tools have you - or could you - use for data processing and cleaning? What are their benefits and disadvantages?" (These include tools such as Hadoop, Pentaho, Flink, Storm, and Kafka.)

"What tools do you think are best for data visualization, and why?" (This includes tools like Tableau, Power BI, D3.js, Infogram, and Chartblocks - there are so many different products in this space that it's important you can talk about what you value most in data visualization tools.)

"Do you prefer using Python or R? Are there times when you'd use one over the other?"

"Talk me through the machine learning libraries you know. How do they compare to one another?" (This includes libraries like TensorFlow, Keras, and PyTorch. If you don't have any experience with them, make sure you're aware of the differences, and talk about which you are most curious about learning.)

Always focus on business goals and results

This sounds obvious, but it's so easy to forget. This is especially true if you're a data geek who loves to talk about statistical models and machine learning. To combat this, make sure you're very clear on how your experience was tied to business goals. Take some time to think about why you were doing what you were doing. What were you trying to find out? What metrics were you trying to drive?

Interpersonal and communication skills

Another element of this is talking about your interpersonal skills and your ability to work with a range of different stakeholders. Think carefully about how you worked alongside other teams, how you went about capturing requirements, and how you built solutions for them. Think also about how you managed - or would manage - expectations. It's well known that business leaders can expect data to be a silver bullet when it comes to results, so how do you make sure that people are realistic?

Show off your data science portfolio

A good way of showing your business acumen as a data scientist is to build a portfolio of work. Portfolios are typically viewed as something for creative professionals, but they're becoming increasingly popular in the tech industry as competition for roles gets tougher. This post explains everything you need to build a great data science portfolio. Broadly, the most important thing is that it demonstrates how you have added value to an organization. This could be:

Insights you've shared in reports with management
Building customer-facing applications that rely on data
Building internal dashboards and applications

Bringing a portfolio to an interview can give you a solid foundation on which to answer questions. But remember - you might be asked questions about your work, so make sure you have an answer prepared!

Data science interview questions about business performance

"Talk about a time you have worked across different teams."

"How do you manage stakeholder expectations?"
"What do you think are the most important elements in communicating data insights to management?" If you can talk fluently about how your work impacts business performance and how you worked alongside others in non-technical positions, you will give yourself a good chance of landing the job! Show that you understand ethical and privacy issues in data science This might seem like a superfluous point but given the events of recent years - like the Cambridge Analytica scandal - ethics has become a big topic of conversation. Employers will expect prospective data scientists to have an awareness of some of these problems and how you can go about mitigating them. To some extent, this is an extension of the previous point. Showing you are aware of ethical issues, such as privacy and discrimination, proves that you are fully engaged with the needs and risks a business might face. It also underlines that you are aware of the consequences and potential impact of data science activities on customers - what your work does in the real-world. Read next: Introducing Deon, a tool for data scientists to add an ethics checklist Data science interview questions about ethics and privacy "What are some of the ethical issues around machine learning and artificial intelligence?" "How can you mitigate any of these issues? What steps would you take?" "Has GDPR impacted the way you do data science?"  "What are some other privacy implications for data scientists?" "How do you understand explainability and interpretability in machine learning?" Ethics is a topic that’s easy to overlook but it’s essential for every data scientist. To get a good grasp of the issues it’s worth investigating more technical content on things like machine learning interpretability, as well as following news and commentary around emergent issues in artificial intelligence. Conclusion: Don’t treat a data science interview like an exam Data science is a complex and multi-faceted field. That can make data science interviews feel like a serious test of your knowledge - and it can be tempting to revise like you would for an exam. But, as we’ve seen, that’s foolish. To ace a data science interview you can’t just recite information and facts. You need to talk clearly and confidently about your experience and demonstrate your drive and curiosity. That doesn’t mean you shouldn’t make sure you know the basics. But rather than getting too hung up on definitions and statistical details, it’s a better use of your time to consider how you have performed your roles in the past, and what you might do in the future. A thoughtful, curious data scientist is immensely valuable. Show your interviewer that you are one.

article-image-brett-lantz-on-implementing-a-decision-tree-using-c5-0-algorithm-in-r

Brett Lantz on implementing a decision tree using C5.0 algorithm in R

Packt Editorial Staff
29 Mar 2019
9 min read
Save for later
Decision tree learners are powerful classifiers that utilize a tree structure to model the relationships among the features and the potential outcomes. This structure earned its name due to the fact that it mirrors the way a literal tree begins at a wide trunk and splits into narrower and narrower branches as it is followed upward. In much the same way, a decision tree classifier uses a structure of branching decisions that channel examples into a final predicted class value. In this article, we demonstrate the implementation of a decision tree using the C5.0 algorithm in R.

This article is taken from the book Machine Learning with R, Fourth Edition, written by Brett Lantz. This 10th Anniversary Edition of the classic R data science book is updated to R 4.0.0 with newer and better libraries. The book features several new chapters that reflect the progress of machine learning in the last few years and help you build your data science skills and tackle more challenging problems.

There are numerous implementations of decision trees, but the most well-known is the C5.0 algorithm. This algorithm was developed by computer scientist J. Ross Quinlan as an improved version of his prior algorithm, C4.5 (C4.5 itself is an improvement over his Iterative Dichotomiser 3 (ID3) algorithm). Although Quinlan markets C5.0 to commercial clients (see http://www.rulequest.com/ for details), the source code for a single-threaded version of the algorithm was made public, and it has therefore been incorporated into programs such as R.

The C5.0 decision tree algorithm

The C5.0 algorithm has become the industry standard for producing decision trees because it does well for most types of problems directly out of the box. Compared to other advanced machine learning models, the decision trees built by C5.0 generally perform nearly as well but are much easier to understand and deploy. Additionally, as shown in the following lists of strengths and weaknesses, the algorithm's weaknesses are relatively minor and can be largely avoided.

Strengths:
An all-purpose classifier that does well on many types of problems.
Highly automatic learning process, which can handle numeric or nominal features, as well as missing data.
Excludes unimportant features.
Can be used on both small and large datasets.
Results in a model that can be interpreted without a mathematical background (for relatively small trees).
More efficient than other complex models.

Weaknesses:
Decision tree models are often biased toward splits on features having a large number of levels.
It is easy to overfit or underfit the model.
Can have trouble modeling some relationships due to reliance on axis-parallel splits.
Small changes in training data can result in large changes to decision logic.
Large trees can be difficult to interpret and the decisions they make may seem counterintuitive.

To keep things simple, our earlier decision tree example ignored the mathematics involved with how a machine would employ a divide and conquer strategy. Let's explore this in more detail to examine how this heuristic works in practice.

Choosing the best split

The first challenge that a decision tree will face is to identify which feature to split upon. In the previous example, we looked for a way to split the data such that the resulting partitions contained examples primarily of a single class. The degree to which a subset of examples contains only a single class is known as purity, and any subset composed of only a single class is called pure.
There are various measurements of purity that can be used to identify the best decision tree splitting candidate. C5.0 uses entropy, a concept borrowed from information theory that quantifies the randomness, or disorder, within a set of class values. Sets with high entropy are very diverse and provide little information about other items that may also belong in the set, as there is no apparent commonality. The decision tree hopes to find splits that reduce entropy, ultimately increasing homogeneity within the groups.

Typically, entropy is measured in bits. If there are only two possible classes, entropy values can range from 0 to 1. For n classes, entropy ranges from 0 to log2(n). In each case, the minimum value indicates that the sample is completely homogeneous, while the maximum value indicates that the data are as diverse as possible, and no group has even a small plurality. In mathematical notation, entropy is specified as:

Entropy(S) = - Σ (from i = 1 to c) p_i * log2(p_i)

In this formula, for a given segment of data (S), the term c refers to the number of class levels, and p_i refers to the proportion of values falling into class level i. For example, suppose we have a partition of data with two classes: red (60 percent) and white (40 percent). We can calculate the entropy as:

> -0.60 * log2(0.60) - 0.40 * log2(0.40)
[1] 0.9709506

We can visualize the entropy for all possible two-class arrangements. If we know the proportion of examples in one class is x, then the proportion in the other class is (1 - x). Using the curve() function, we can then plot the entropy for all possible values of x:

> curve(-x * log2(x) - (1 - x) * log2(1 - x),
        col = "red", xlab = "x", ylab = "Entropy", lwd = 4)

This results in the following figure:

Figure: The total entropy as the proportion of one class varies in a two-class outcome

As illustrated by the peak in entropy at x = 0.50, a 50-50 split results in the maximum entropy. As one class increasingly dominates the other, the entropy reduces to zero.

To use entropy to determine the optimal feature to split upon, the algorithm calculates the change in homogeneity that would result from a split on each possible feature, a measure known as information gain. The information gain for a feature F is calculated as the difference between the entropy in the segment before the split (S1) and the entropy of the partitions resulting from the split (S2):

InfoGain(F) = Entropy(S1) - Entropy(S2)

One complication is that after a split, the data is divided into more than one partition. Therefore, the function to calculate Entropy(S2) needs to consider the total entropy across all of the partitions. It does this by weighting each partition's entropy according to the proportion of all records falling into that partition. This can be stated in a formula as:

Entropy(S2) = Σ (from i = 1 to n) w_i * Entropy(P_i)

In simple terms, the total entropy resulting from a split is the sum of the entropy of each of the n partitions, weighted by the proportion of examples falling in that partition (w_i). The higher the information gain, the better a feature is at creating homogeneous groups after a split on that feature. If the information gain is zero, there is no reduction in entropy for splitting on this feature. On the other hand, the maximum information gain is equal to the entropy prior to the split. This would imply the entropy after the split is zero, which means that the split results in completely homogeneous groups.
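To make these formulas concrete, here is a minimal R sketch of the same calculation. The entropy() and info_gain() helpers are illustrative names of our own - they are not part of any package mentioned in this article:

# Entropy (in bits) of a vector of class labels
entropy <- function(labels) {
  p <- prop.table(table(labels))   # proportion of each observed class
  -sum(p * log2(p))
}

# Information gain: Entropy(S1) minus the weighted entropy of the partitions
info_gain <- function(labels, partitions) {
  w <- sapply(partitions, length) / length(labels)   # partition weights w_i
  entropy(labels) - sum(w * sapply(partitions, entropy))
}

# Example: the 60 percent red / 40 percent white partition from above
labels <- c(rep("red", 60), rep("white", 40))
entropy(labels)                            # roughly 0.971 bits
info_gain(labels, split(labels, labels))   # a perfectly pure split recovers all of it

A feature whose split produces purer partitions yields a larger info_gain() value, which is exactly the quantity the algorithm compares across candidate features.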
The previous formulas assume nominal features, but decision trees use information gain for splitting on numeric features as well. To do so, a common practice is to test various splits that divide the values into groups greater than or less than a threshold. This reduces the numeric feature into a two-level categorical feature that allows information gain to be calculated as usual. The numeric cut point yielding the largest information gain is chosen for the split.

Note: Though it is used by C5.0, information gain is not the only splitting criterion that can be used to build decision trees. Other commonly used criteria are the Gini index, the chi-squared statistic, and gain ratio. For a review of these (and many more) criteria, refer to An Empirical Comparison of Selection Measures for Decision-Tree Induction, Mingers, J, Machine Learning, 1989, Vol. 3, pp. 319-342.

Pruning the decision tree

As mentioned earlier, a decision tree can continue to grow indefinitely, choosing splitting features and dividing into smaller and smaller partitions until each example is perfectly classified or the algorithm runs out of features to split on. However, if the tree grows overly large, many of the decisions it makes will be overly specific and the model will be overfitted to the training data. The process of pruning a decision tree involves reducing its size such that it generalizes better to unseen data.

One solution to this problem is to stop the tree from growing once it reaches a certain number of decisions or when the decision nodes contain only a small number of examples. This is called early stopping or prepruning the decision tree. As the tree avoids doing needless work, this is an appealing strategy. However, one downside to this approach is that there is no way to know whether the tree will miss subtle but important patterns that it would have learned had it grown to a larger size.

An alternative, called post-pruning, involves growing a tree that is intentionally too large and pruning leaf nodes to reduce the size of the tree to a more appropriate level. This is often a more effective approach than prepruning because it is quite difficult to determine the optimal depth of a decision tree without growing it first. Pruning the tree later on allows the algorithm to be certain that all of the important data structures were discovered.

Note: The implementation details of pruning operations are very technical and beyond the scope of this book. For a comparison of some of the available methods, see A Comparative Analysis of Methods for Pruning Decision Trees, Esposito, F, Malerba, D, Semeraro, G, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, Vol. 19, pp. 476-491.

One of the benefits of the C5.0 algorithm is that it is opinionated about pruning - it takes care of many of the decisions automatically using fairly reasonable defaults. Its overall strategy is to post-prune the tree. It first grows a large tree that overfits the training data. Later, the nodes and branches that have little effect on the classification errors are removed. In some cases, entire branches are moved further up the tree or replaced by simpler decisions. These processes of grafting branches are known as subtree raising and subtree replacement, respectively.

Getting the right balance of overfitting and underfitting is a bit of an art, but if model accuracy is vital, it may be worth investing some time with various pruning options to see if it improves the test dataset performance.
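Those pruning and training options are exposed in R through the C50 package. The following is a rough, hedged sketch (the iris data frame and the specific parameter values are placeholders for illustration, not recommendations) of how you might fit a tree and experiment with its pruning behaviour:

# install.packages("C50")   # if the package is not already installed
library(C50)

# Fit a C5.0 tree predicting Species from the remaining iris columns
model <- C5.0(Species ~ ., data = iris)

# Tune pruning via C5.0Control(): a lower confidence factor (CF) prunes more
# aggressively, and a higher minCases requires more examples per split
pruned <- C5.0(Species ~ ., data = iris,
               control = C5.0Control(CF = 0.10, minCases = 5))

# trials adds adaptive boosting, another lever on the accuracy/complexity trade-off
boosted <- C5.0(Species ~ ., data = iris, trials = 10)

summary(pruned)   # prints the tree, its size, and the training error

Comparing the test-set performance of such variants is usually a better guide than the training error that summary() reports.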
To summarize, decision trees are widely used due to their high accuracy and ability to formulate a statistical model in plain language. Here, we looked at a highly popular and easily configurable decision tree algorithm, C5.0. The major strength of the C5.0 algorithm over other decision tree implementations is that it is very easy to adjust the training options. Harness the power of R to build flexible, effective, and transparent machine learning models with Brett Lantz's latest book, Machine Learning with R, Fourth Edition.

Dr. Brandon explains Decision Trees to Jon
Building a classification system with Decision Trees in Apache Spark 2.0
Implementing Decision Trees

article-image-7-tips-for-using-git-and-github-the-right-way

7 tips for using Git and GitHub the right way

Sugandha Lahoti
06 Oct 2018
3 min read
Save for later
GitHub has become a widely accepted and integral part of software development owing to the change-tracking features it offers. Git itself was created in 2005 by Linus Torvalds to support the development of the Linux kernel. In this post, Alex Magana and Joseph Muli, the authors of the Introduction to Git and GitHub course, discuss some of the best practices you should keep in mind while learning or using Git and GitHub.

Document everything

A good best practice that eases work in any team is ample documentation. Documenting something as simple as a repository goes a long way in presenting work and attracting contributors. It also creates a strong first impression when someone is looking for a tool to aid in development.

Utilize the README.MD and wikis

You should also utilize the README.MD and wikis to explain the functionality the application delivers and to categorize guide topics and material. You should:
Communicate the solution that the code in the repository provides.
Specify the guidelines and rules of engagement that govern contributing to the codebase.
Indicate the dependencies required to set up the working environment.
Stipulate set-up instructions to get a working version of the application in a contributor's local environment.

Keep simple and concise naming conventions

Naming conventions are also highly encouraged when it comes to repositories and branches. They should be simple, descriptive, and concise. For instance, a repository that houses code intended for a Git course could simply be named "git-tutorial-material". For a learner on that course, it's easier to find the material than it would be with a repository named just "material".

Adopt naming prefixes

You should also institute naming prefixes for different task types when naming branches. For example, you may use feat- for feature branches, bug- for bugs, and fix- for fix branches. Also, make use of templates that include a checklist for pull requests.

Correspond a PR and branch to a ticket or task

A PR and branch should correspond to a ticket or task on the project management board. This aids in aligning the effort spent on a product with the appropriate milestones and product vision.

Organize and track tasks using issues

Tasks such as technical debt, bugs, and workflow improvements should be organized and tracked using Issues. You should also enforce push and pull restrictions on the default branch and use webhooks to automate deployment and run pre-merge test suites.

Use atomic commits

Another best practice is to use atomic commits. An atomic commit is an operation that applies a set of distinct changes as a single operation. You should persist changes in small changesets and use descriptive and concise commit messages to record the changes you make.

You just read a guest post from Alex Magana and Joseph Muli, the authors of Introduction to Git and GitHub. We hope that these best practices help you manage your Git and GitHub more smoothly. Don't forget to check out Alex and Joseph's Introduction to Git and GitHub course to learn how to create and enforce checks and controls for the introduction, scrutiny, approval, merging, and reversal of changes.

GitHub introduces 'Experiments', a platform to share live demos of their research projects
Packt's GitHub portal hits 2,000 repositories