
Tech News

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!

Bhagyashree R
24 Sep 2018
3 min read
Last week, the Nuxt.js community announced the release of Nuxt.js 2.0 with major improvements. This release comes with a scaffolding tool, create-nuxt-app, to quickly get you started with Nuxt.js development. To provide faster boot-up and re-compilation, this release is upgraded to Webpack 4 (Legato) and Babel 7.

Nuxt.js is an open source web application framework for creating Vue.js applications. You can choose between universal, statically generated, or single-page applications.

What is new in Nuxt.js 2.0?

New features and upgrades

create-nuxt-app
To get you started quickly with Nuxt.js development, you can use the newly introduced create-nuxt-app tool. This tool includes all the Nuxt templates, such as the starter template, express templates, and so on. With create-nuxt-app you can choose between integrated server-side frameworks and UI frameworks, and add the axios module.

Introducing nuxt-start and nuxt-legacy
nuxt-start is introduced to start a Nuxt.js application in production mode. nuxt-legacy is added to support legacy builds of Nuxt.js for Node.js < 8.0.0.

Upgraded to Webpack 4 and Babel 7
To provide faster boot-up time and faster re-compilation, this release uses Webpack 4 (Legato) and Babel 7.

ESM supported everywhere
In this release, ESM is supported everywhere. You can now use export/import syntax in nuxt.config.js, serverMiddleware, and modules (see the configuration sketch at the end of this article).

Replace postcss-cssnext with postcss-preset-env
Due to the deprecation of cssnext, you have to use postcss-preset-env instead of postcss-cssnext.

Use ~assets instead of ~/assets
Due to the css-loader upgrade, use ~assets instead of ~/assets for aliases in the <url> CSS data type, for example, background: url("~assets/banner.svg").

Improvements

The HTML script tag in core/renderer.js is fixed to pass W3C validation. The background-color property is now replaced with background in loadingIndicator, to allow the use of images and gradients for your background in SPA mode. Due to server/client artifact isolation, external build.publicPath now needs to upload built content to the .nuxt/dist/client directory instead of .nuxt/dist. webpackbar and consola provide an improved CLI experience and better CI compatibility. Template literals in lodash templates are disabled. There is better error handling if a specified plugin isn't found.

Deprecated features

The vendor array isn't supported anymore. DLL support is removed because it was not stable enough. AggressiveSplittingPlugin is obsolete; users can use optimization.splitChunks.maxSize instead. The render.gzip option is deprecated; users can use render.compressor instead.

To read more about the updates, check out Nuxt's official announcement on Medium and also see the release notes on its GitHub repository.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and WebAssembly
low.js, a Node.js port for embedded systems
Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js
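To make the new ESM support concrete, here is a minimal sketch of a nuxt.config.js written with export/import syntax, as Nuxt.js 2.0 now allows. The project name and module list are illustrative, not prescriptive:

// Scaffold a project with the new tool (the project name is up to you):
//   npx create-nuxt-app my-app

// nuxt.config.js -- Nuxt.js 2.0 understands ESM here, no module.exports needed
export default {
  mode: 'universal',            // universal, SPA, or static
  modules: ['@nuxtjs/axios'],   // the axios module offered by create-nuxt-app
  build: {
    // Webpack 4 (Legato) powers the build in this release
  }
}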


Your Quick Introduction to Extended Events in Analysis Services from Blog Posts - SQLServerCentral

Anonymous
01 Jan 2021
9 min read
The Extended Events (XEvents) feature in SQL Server is a really powerful tool and it is one of my favorites. The tool is so powerful and flexible, it can even be used in SQL Server Analysis Services (SSAS). Furthermore, it is such a cool tool, there is an entire site dedicated to XEvents. Sadly, despite the flexibility and power that comes with XEvents, there isn't terribly much information about what it can do with SSAS. This article intends to help shed some light on XEvents within SSAS from an internals and introductory point of view, with the hope of leading to more in-depth articles on how to use XEvents with SSAS.

Introducing your Heavy Weight Champion of the SQLverse – XEvents

For all of the power, might, strength and flexibility of XEvents, its use in the realm of SSAS is practically nonexistent. Much of that is due to three factors: 1) lack of a GUI, 2) addiction to Profiler, and 3) inadequate information about XEvents in SSAS. This last reason can be coupled with a sub-reason of "nobody is pushing XEvents in SSAS". For me, these are all just excuses to remain attached to a bad habit. While it is true that, just like in SQL Server, earlier versions of SSAS did not have a GUI for XEvents, that excuse is no longer valid. As for the inadequate information about the feature, I am hopeful that we can address that starting with this article. In regards to the Profiler addiction, never fear: there is a GUI, and the profiler events are accessible via that GUI just the same as the new XEvent events are. How do we know this? Well, the GUI tells us just as much, as shown here.

In the preceding image, I have two sections highlighted in red. The first of note is evidence that this is the GUI for SSAS. Note that the connection box states "Group of Olap servers." The second area of note is the highlight demonstrating the two types of categories in XEvents for SSAS. These two categories, as you can see, are "profiler" and "purexevent" (not to be confused with "Purex® event"). In short, yes Virginia, there is an XEvent GUI, and that GUI incorporates your favorite profiler events as well.

Let's See the Nuts and Bolts

This article is not about introducing the GUI for XEvents in SSAS. I will get to that in a future article. This article is to introduce you to the stuff behind the scenes. In other words, we want to look at the metadata that helps govern the XEvents feature within the sphere of SSAS. In order to, in my opinion, efficiently explore the underpinnings of XEvents in SSAS, we first need to set up a linked server to make querying the metadata easier.

EXEC master.dbo.sp_addlinkedserver
    @server = N'SSASDIXNEUFLATIN1' -- whatever LinkedServer name you desire
  , @srvproduct = N'MSOLAP'
  , @provider = N'MSOLAP'
  , @datasrc = N'SSASServerSSASInstance' -- change your data source to an appropriate SSAS instance
  , @catalog = N'DemoDays' -- change your default database
GO

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'SSASDIXNEUFLATIN1'
  , @useself = N'False'
  , @locallogin = NULL
  , @rmtuser = NULL
  , @rmtpassword = NULL
GO

Once the linked server is created, you are primed and ready to start exploring SSAS and the XEvent feature metadata. The first thing to do is familiarize yourself with the system views that drive XEvents. You can do this with the following query.
SELECT lq.*
FROM OPENQUERY(SSASDIXNEUFLATIN1, 'SELECT * FROM $system.dbschema_tables') AS lq
WHERE CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%XEVENT%'
   OR CONVERT(VARCHAR(100), lq.TABLE_NAME) LIKE '%TRACE%'
ORDER BY CONVERT(VARCHAR(100), lq.TABLE_NAME);

When the preceding query is executed, you will see results similar to the following. In this image you will note that I have two sections highlighted. The first section, in red, is the group of views related to the trace/profiler functionality. The second section, in blue, is the group of views related to the XEvents feature in SSAS. Unfortunately, this does demonstrate that XEvents in SSAS is a bit less mature than what one may expect, and definitely shows that it is less mature in SSAS than it is in the SQL Engine. That shortcoming aside, we will use these views to explore further into the world of XEvents in SSAS.

Exploring Further

Knowing what the group of tables looks like, we have a fair idea of where we need to look next in order to become more familiar with XEvents in SSAS. The views I would primarily focus on (at least for this article) are: DISCOVER_TRACE_EVENT_CATEGORIES, DISCOVER_XEVENT_OBJECTS, and DISCOVER_XEVENT_PACKAGES. Granted, I will only be using the DISCOVER_XEVENT_PACKAGES view very minimally. From here, things get a little more tricky. I will take advantage of temp tables and some more OPENQUERY trickery to dump the data, in order to be able to relate it and use it in an easily consumable format.

Before getting into the queries I will use, first a description of the objects I am using. DISCOVER_TRACE_EVENT_CATEGORIES is stored in XML format and is basically a definition document of the Profiler-style events. In order to consume it, the XML needs to be parsed into a better format. DISCOVER_XEVENT_PACKAGES is the object that lets us know what area of SSAS the event is related to, and is a very basic attempt at grouping some of the events into common domains. DISCOVER_XEVENT_OBJECTS is where the majority of the action resides for Extended Events. This object defines the different object types (actions, targets, maps, messages, and events – more on that in a separate article).

Script Fun

Now for the fun in the article!
IF OBJECT_ID('tempdb..#SSASXE') IS NOT NULL
BEGIN
    DROP TABLE #SSASXE;
END;

IF OBJECT_ID('tempdb..#SSASTrace') IS NOT NULL
BEGIN
    DROP TABLE #SSASTrace;
END;

SELECT CONVERT(VARCHAR(MAX), xo.Name) AS EventName
     , xo.description AS EventDescription
     , CASE
           WHEN xp.description LIKE 'SQL%' THEN 'SSAS XEvent'
           WHEN xp.description LIKE 'Ext%' THEN 'DLL XEvents'
           ELSE xp.name
       END AS PackageName
     , xp.description AS CategoryDescription -- very generic due to it being the package description
     , NULL AS CategoryType
     , 'XE Category Unknown' AS EventCategory
     , 'PureXEvent' AS EventSource
     , ROW_NUMBER() OVER (ORDER BY CONVERT(VARCHAR(MAX), xo.name)) + 126 AS EventID
INTO #SSASXE
FROM ( SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * From $system.Discover_Xevent_Objects') ) xo
INNER JOIN ( SELECT * FROM OPENQUERY(SSASDIXNEUFLATIN1, 'select * FROM $system.DISCOVER_XEVENT_PACKAGES') ) xp
    ON xo.package_id = xp.id
WHERE CONVERT(VARCHAR(MAX), xo.object_type) = 'event'
  AND xp.ID <> 'AE103B7F-8DA0-4C3B-AC64-589E79D4DD0A'
ORDER BY CONVERT(VARCHAR(MAX), xo.[name]);

SELECT ec.x.value('(./NAME)[1]', 'VARCHAR(MAX)') AS EventCategory
     , ec.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS CategoryDescription
     , REPLACE(d.x.value('(./NAME)[1]', 'VARCHAR(MAX)'), ' ', '') AS EventName
     , d.x.value('(./ID)[1]', 'INT') AS EventID
     , d.x.value('(./DESCRIPTION)[1]', 'VARCHAR(MAX)') AS EventDescription
     , CASE ec.x.value('(./TYPE)[1]', 'INT')
           WHEN 0 THEN 'Normal'
           WHEN 1 THEN 'Connection'
           WHEN 2 THEN 'Error'
       END AS CategoryType
     , 'Profiler' AS EventSource
INTO #SSASTrace
FROM ( SELECT CONVERT(XML, lq.[Data])
       FROM OPENQUERY(SSASDIXNEUFLATIN1, 'Select * from $system.Discover_trace_event_categories') lq
     ) AS evts(event_data)
CROSS APPLY event_data.nodes('/EVENTCATEGORY/EVENTLIST/EVENT') AS d(x)
CROSS APPLY event_data.nodes('/EVENTCATEGORY') AS ec(x)
ORDER BY EventID;

SELECT ISNULL(trace.EventCategory, xe.EventCategory) AS EventCategory
     , ISNULL(trace.CategoryDescription, xe.CategoryDescription) AS CategoryDescription
     , ISNULL(trace.EventName, xe.EventName) AS EventName
     , ISNULL(trace.EventID, xe.EventID) AS EventID
     , ISNULL(trace.EventDescription, xe.EventDescription) AS EventDescription
     , ISNULL(trace.CategoryType, xe.CategoryType) AS CategoryType
     , ISNULL(CONVERT(VARCHAR(20), trace.EventSource), xe.EventSource) AS EventSource
     , xe.PackageName
FROM #SSASTrace trace
FULL OUTER JOIN #SSASXE xe
    ON trace.EventName = xe.EventName
ORDER BY EventName;

Given the level of maturity of XEvents in SSAS, there is some massaging of the data that has to be done so that we can correlate the trace events to the XEvent events: little things like missing EventIDs in the XEvent events, or missing categories, and so forth. That's fine; we are able to work around it and produce results similar to the following. If you compare it to the GUI, you will see that it is somewhat similar and should help bridge the gap between the metadata and the GUI for you.

Put a bow on it

Extended Events is a powerful tool for many facets of SQL Server. While it may still be rather immature in the world of SSAS, it still has a great deal of benefit and power to offer. Getting to know XEvents in SSAS can be a crucial skill in improving your Data Superpowers, and it is well worth the time spent trying to learn such a cool feature. Interested in learning more about the depth and breadth of Extended Events? Check these out or check out the XE website here. Want to learn more about your indexes? Try this index maintenance article or this index size article.
This is the seventh article in the 2020 "12 Days of Christmas" series. For the full list of articles, please visit this page.

The post Your Quick Introduction to Extended Events in Analysis Services first appeared on SQL RNNR.

Related Posts:
Extended Events Gets a New Home (May 18, 2020)
Profiler for Extended Events: Quick Settings (March 5, 2018)
How To: XEvents as Profiler (December 25, 2018)
Easy Open Event Log Files (June 7, 2019)
Azure Data Studio and XEvents (November 21, 2018)

The post Your Quick Introduction to Extended Events in Analysis Services appeared first on SQLServerCentral.


The Tor Project on browser fingerprinting and how it is taking a stand against it

Bhagyashree R
06 Sep 2019
4 min read
In a blog post shared on Wednesday, Pierre Laperdrix, a postdoctoral researcher in the Secure Web Applications Group at CISPA, talked about browser fingerprinting, its risks, and the efforts taken by the Tor Project to prevent it. He also talked about his Fingerprint Central website, which has officially been a part of the Tor Project since 2017.

What is browser fingerprinting?

Browser fingerprinting is the systematic collection of information about a remote computing device for the purposes of identification. There are several techniques through which a third party can get a "rich fingerprint." These include the availability of JavaScript or other client-side scripting languages, the user-agent and accept headers, the HTML5 Canvas element, and more. Browser fingerprints may include information like browser and operating system type and version, active plugins, timezone, language, screen resolution, and various other active settings (a sketch of such a collection script appears at the end of this article).

Some users may think these attributes are too generic to identify a particular person. However, a study by Panopticlick, a browser fingerprinting test website, found that on average only 1 in 286,777 browsers shares the same fingerprint. Here's an example of a fingerprint Pierre Laperdrix shared in his post:

Source: The Tor Project

As with any technology, browser fingerprinting can be used or misused. Fingerprints can enable a remote application to prevent potential fraud or online identity theft. On the other hand, they can also be used to track users across websites and collect information about their online behavior without their consent. Advertisers and marketers can use this data for targeted advertising.

Read also: All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Steps taken by the Tor Project to prevent browser fingerprinting

Laperdrix said that Tor was the very first browser to understand and address the privacy threats browser fingerprinting poses. The Tor browser, which goes by the tagline "anonymity online", is designed to reduce the online tracking and identification of users. The browser takes a very simple approach to prevent the identification of users. "In the end, the approach chosen by Tor developers is simple: all Tor users should have the exact same fingerprint. No matter what device or operating system you are using, your browser fingerprint should be the same as any device running Tor Browser," Laperdrix wrote.

Many other changes have been made to the Tor browser over the years to prevent the unique identification of users. Tor warns users when they maximize their browser window, as window size is one attribute that can be used to identify them. It has introduced default fallback fonts to prevent font and canvas fingerprinting. It has all the JS clock sources and event timestamps set to a specific resolution to prevent JS from measuring time intervals of things like typing to produce a fingerprint.

Talking about his contribution towards preventing browser fingerprinting, Laperdrix wrote, "As part of the effort to reduce fingerprinting, I also developed a fingerprinting website called FP Central to help Tor developers find fingerprint regressions between different Tor builds." As part of Google Summer of Code 2016, Laperdrix proposed to develop a website called Fingerprint Central, which is now officially included in the Tor Project. Similar to AmIUnique.org or Panopticlick, FP Central was developed to study the diversity of browser fingerprints.
It runs a fingerprinting test suite and collects data from Tor browsers to help developers design and test new fingerprinting protections. They can also use it to ensure that fingerprinting-related bugs are correctly fixed with specific regression tests. Explaining the long-term goal of the website, he said, "The expected long-term impact of this project is to reduce the differences between Tor users and reinforce their privacy and anonymity online."

There are a whole lot of modifications made under the hood to prevent browser fingerprinting, which you can check out using the "tbb-fingerprinting" tag in the bug tracker. These modifications will also make their way into future releases of Firefox under the Tor Uplift program.

Many organizations have taken a stand against browser fingerprinting, including the browser makers Mozilla and Brave. Earlier this week, Firefox 69 shipped with browser fingerprinting blocked by default. Brave also comes with a Fingerprinting Protection Mode enabled by default. In 2018, Apple updated Safari to share only a simplified system profile, making it difficult to uniquely identify or track users.

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Check out Laperdrix's post on the Tor blog to know more about browser fingerprinting in detail.

Other news in Web

JavaScript will soon support the optional chaining operator as its ECMAScript proposal reaches stage 3
Google Chrome 76 now supports native lazy-loading
#Reactgate forces React leaders to confront the community's toxic culture head on
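To make the attribute list above concrete, here is a minimal sketch of the kind of script a fingerprinter might run. Every API used is a standard browser API; the exact set of attributes collected varies from tracker to tracker:

// Gather a handful of the attributes described above.
const fingerprint = {
  userAgent: navigator.userAgent,
  language: navigator.language,
  platform: navigator.platform,
  screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
  timezoneOffset: new Date().getTimezoneOffset(),
  plugins: Array.from(navigator.plugins, p => p.name)
};

// Canvas fingerprinting: render text, then read the pixels back.
// Subtle differences in fonts and rendering make the output distinctive.
const canvas = document.createElement('canvas');
canvas.getContext('2d').fillText('fingerprint', 2, 2);
fingerprint.canvas = canvas.toDataURL();

// Individually generic, these values combined are often unique enough
// to identify a browser across sites.
console.log(JSON.stringify(fingerprint));

This is exactly the class of signal Tor Browser tries to normalize, so that every user reports the same values.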


low.js, a Node.js port for embedded systems

Prasad Ramesh
17 Sep 2018
3 min read
Node.js is a popular backend technology widely used for web development, despite some of its flaws. For embedded systems, there is now low.js, a Node.js port with far lower system requirements. In low.js you can program JavaScript applications by utilizing the full Node.js API (see the sketch at the end of this article). You can run these on regular computers and also on embedded devices based on the $3 ESP32 microcontroller.

The JavaScript V8 engine at the center of Node.js is replaced with Duktape. Duktape is an embeddable ECMAScript E5/E5.1 engine with a compact footprint. Some parts of the Node.js system library are rewritten for a more compact footprint and to use more native code. low.js currently uses under 2 MB of disk space, with a minimum requirement of around 1.5 MB of RAM for the ESP32 version.

low.js features

low.js is good for hobbyists and people interested in electronics. It allows using Node.js scripts on smaller devices like routers, which are based on Linux or uClinux, without using many resources. This is great for scripting, especially for devices that communicate over the internet.

The neonious one is a microcontroller board based on low.js for ESP32, which can be programmed in JavaScript ES 6 with the Node API. It includes Wifi, Ethernet, additional flash, and an extra I/O controller. The lower system requirements of low.js allow you to run it comfortably on the ESP32-WROVER module. The ESP32-WROVER costs under $3 for large orders and is a very cost-effective solution for IoT devices requiring a microcontroller and Wifi. low.js for ESP32 also adds the additional benefit of fast software development and maintenance. Specialized software developers are not needed for the microcontroller software.

How to install?

The community edition of low.js can be run on POSIX-based systems including Linux, uClinux, and Mac OS X. It is available on GitHub, and currently ./configure is not present. You might need some programming skills and knowledge to get low.js up and running on your system. The commands are as follows:

git clone https://github.com/neonious/lowjs
cd lowjs
git submodule update --init --recursive
make

low.js for ESP32 is the same as the community edition, but adapted for the ESP32 microcontroller. This version is not open source and comes pre-flashed on the neonious one. For more information and documentation, visit the low.js website.

Deno, an attempt to fix Node.js flaws, is rewritten in Rust
Node.js announces security updates for all their active release lines for August 2018
Deploying Node.js apps on Google App Engine is now easy
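Since low.js aims to expose the Node.js API, ordinary Node code should carry over. As a rough sketch (assuming the http module is among the APIs the port supports, as the project advertises), a tiny web server running on an ESP32 could look like this:

// Plain Node.js code; under low.js the same script runs on the microcontroller.
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from an ESP32\n');
});

// Port 80 so the board is reachable as a normal web server on the LAN.
server.listen(80, () => console.log('Server listening on port 80'));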


Mozilla is building a bridge between Rust and JavaScript

Richard Gall
10 Apr 2018
2 min read
Mozilla is building a bridge between Rust and JavaScript. The bridge, wasm-bindgen, allows Rust to interact with WebAssembly modules and JavaScript. One of Mozilla's aims is to turn Rust into a language for web development; building a link to WebAssembly is perhaps the most straightforward way for it to run on the web. But this doesn't mean Mozilla wants Rust to replace JavaScript - far from it. The plan is instead for Rust to be a language that runs alongside JavaScript on the backend. Read more about Mozilla's plan here.

What is wasm-bindgen?

It's worth reading this from the Mozilla blog to see the logic behind wasm-bindgen and how it's going to work:

"Today the WebAssembly specification only defines four types: two integer types and two floating-point types. Most of the time, however, JS and Rust developers are working with much richer types! For example, JS developers often interact with document to add or modify HTML nodes, while Rust developers work with types like Result for error handling, and almost all programmers work with strings. Being limited only to the types that WebAssembly provides today would be far too restrictive, and this is where wasm-bindgen comes into the picture. The goal of wasm-bindgen is to provide a bridge between the types of JS and Rust. It allows JS to call a Rust API with a string, or a Rust function to catch a JS exception. wasm-bindgen erases the impedance mismatch between WebAssembly and JavaScript, ensuring that JavaScript can invoke WebAssembly functions efficiently and without boilerplate, and that WebAssembly can do the same with JavaScript functions."

A sketch of what that looks like from the JavaScript side appears at the end of this article. This is exciting, as it is a valuable step in expanding the capabilities of Rust. The thinking behind wasm-bindgen will also help to further drive the adoption of WebAssembly. While the focus is, of course, on Rust at the moment, there's a possibility that wasm-bindgen could eventually be used with other languages, such as C and C++. This might take some time, however. Download wasm-bindgen from GitHub.
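From the JavaScript side, calling into a wasm-bindgen module looks like ordinary module usage. The sketch below assumes a hypothetical Rust function greet, exported with the #[wasm_bindgen] attribute and compiled into a package named my_wasm_lib; the names are illustrative, not from the Mozilla post:

// Import the JS glue code that wasm-bindgen generates alongside the .wasm file.
import { greet } from './my_wasm_lib';

// The generated bindings copy the JS string into WebAssembly's linear memory,
// call the Rust function, and convert the returned Rust String back into a
// JS string -- the type conversions wasm-bindgen exists to automate.
console.log(greet('WebAssembly'));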


Tableau Cloud Migration to Hyperforce, Pixel-Perfect Reports with Amazon Q, Microsoft's Data API Builder (DAB) for SQL Server, Creating Simulated Data in Python

Derek Banas
15 Jul 2024
14 min read
Subscribe to our BI Pro newsletter for the latest insights. Don't miss out – sign up today!

Welcome to this week's BIPro #65 – your essential dose of Business Intelligence wisdom! We're thrilled to share the latest BI techniques and updates to boost your data savvy. Get ready to uncover transformative insights and strategies. Here's what we have in store for you!

🌟 Essential Reads to Ignite Your Curiosity:
➤ Sigma Computing's Embedded Analytics Webinar
➤ Data API Builder Unlocked: Discover advanced features of DAB for REST API creation.
➤ Data Simulation Guide: Step-by-step methods for creating simulated data in Python.
➤ SQL Performance Tuning: Tips for diagnosing and speeding up slow SQL queries.
➤ Analytics Collaboration: Explore sharing options in Analytics Hub.
➤ Tableau's Cloud Leap: Everything you need to know about migrating to Hyperforce.

Dive into these topics and fuel your BI journey with cutting-edge knowledge and techniques!

Sponsored Post

Fast-Track Analytics: Sigma Computing's Embedded Analytics Webinar – Concept to Launch in 10 Days

Sigma Computing announces the release of its on-demand webinar, "Embedded Analytics Made Easy," a step-by-step guide to revolutionizing your data strategy and delivering actionable insights quickly, with practical knowledge on modern data integration and security.

Transforming Data Strategy with Embedded Analytics
➤ Quickly deploy embedded analytics to overcome traditional challenges like lengthy development cycles, complex deployments, and ongoing maintenance.
➤ Learn actionable solutions to achieve fully functional embedded analytics in just 10 days.
➤ Accelerate deployment, saving time, reducing costs, and enhancing responsiveness to market demands.

Ensuring Advanced Data Security
➤ Learn best practices for integrating advanced data security into embedded analytics.
➤ Discover how to ensure data security and compliance while providing powerful analytics.
➤ Understand data governance, user authentication, and secure data transmission for robust security.

Creating Interactive and User-Friendly Experiences
➤ Learn techniques to design interactive, user-friendly analytics experiences that boost engagement and decision-making.
➤ Get insights on creating visually appealing, functional dashboards and reports for deeper data interaction.
➤ Discover tips for optimizing dashboard layouts, using visual elements effectively, and ensuring accessibility for all users.

Leveraging the Modern Data Stack
➤ Learn the key components of the modern data stack: cloud data warehouses, data integration tools, and advanced analytics platforms.
➤ Discover how to choose the right tools and set up efficient data pipelines for scalable embedded analytics.
➤ Optimize your infrastructure for high performance and cost-effectiveness.

Real-World Examples and Best Practices
➤ Discover real-world examples and best practices for embedded analytics.
➤ Learn from case studies showcasing successful implementations, key steps, and solutions.
➤ Gain valuable strategies to apply to your own analytics projects for success and impact.

Detailed Agenda
➤ Embedded Analytics Basics: Learn benefits and essentials, differentiating from traditional BI.
➤ Rapid Deployment: Step-by-step guide from planning to dashboard creation.
➤ Data Security: Integrate advanced measures like encryption and access controls.
➤ Interactive UX: Design engaging, accessible analytics interfaces.
➤ Modern Data Stack: Leverage cloud warehouses, ETL tools, and analytics platforms.
➤ Case Studies: Proven strategies and real-world examples of successful implementations.

Who Should Watch
➤ Ideal for data professionals, business analysts, and IT managers.
➤ Learn to enhance current analytics or start with embedded analytics.
➤ Gain tools and knowledge to improve data strategy and deliver better insights.
➤ Improve leadership and team effectiveness in implementing analytics solutions.

Why Should You Watch?
➤ Practical Strategies: Learn to deploy and optimize embedded analytics.
➤ Expert Insights: Gain valuable perspectives from experienced speakers.
➤ Stay Competitive: Understand the latest analytics trends and technologies.
➤ User-Friendly Solutions: Create engaging, decision-making tools.
➤ Data Security: Ensure secure and compliant analytics implementations.

Watch Sigma Computing's webinar to master embedded analytics, gain expert insights, and see real-world success. Elevate your data strategy and secure a competitive edge.

About Sigma Computing
Sigma Computing empowers businesses with secure, scalable analytics solutions, enabling data-driven decisions and driving growth and efficiency with innovative insights.

About Sigma
Sigma redefines BI with instant, in-depth data analysis on billions of records via an intuitive spreadsheet interface, boosting growth and innovation.

🚀 GitHub's Most Sought-After Repos

Real-time Data Processing
➤ allinurl/goaccess: GoAccess is a real-time web log analyzer for *nix systems and browsers, offering fast HTTP statistics. More details: goaccess.io.
➤ feathersjs/feathers: Feathers is a TypeScript/JavaScript framework for building APIs and real-time apps, compatible with various backends and frontends.
➤ apache/age: Apache AGE extends PostgreSQL with graph database capabilities, supporting both relational SQL and openCypher graph queries seamlessly.
➤ zephyrproject-rtos/zephyr: Real-time OS for diverse hardware, from IoT sensors to smart watches, emphasizing scalability, security, and resource efficiency.
➤ hazelcast/hazelcast: Hazelcast integrates stream processing and fast data storage for real-time insights, enabling immediate action on data-in-motion within a unified platform.

Access 100+ data tools in this specially curated blog, covering everything from data analytics to business intelligence, all in one place. Check out "Top 100+ Essential Data Science Tools & Repos: Streamline Your Workflow Today!" on PacktPub.com.

🔮 Data Viz with Python Libraries
➤ How to Merge Large DataFrames Efficiently with Pandas? The blog explains efficient merging of large Pandas DataFrames. It covers optimizing memory usage with data types, setting indices for faster merges, and using `DataFrame.merge` for performance. Debugging methods are also detailed for clarity in merging operations.
➤ How to Use the Hugging Face Tokenizers Library to Preprocess Text Data? This blog explores text preprocessing with the Hugging Face Tokenizers library in NLP. It covers tokenization methods such as Byte-Pair Encoding (BPE), SentencePiece, and WordPiece, demonstrates usage with BERT, and discusses techniques like padding and truncation for model input preparation.
➤ Writing a Simple Pulumi Provider for Airbyte: This tutorial demonstrates creating a Pulumi provider for Airbyte using Python, leveraging Airbyte's REST API for managing Sources, Destinations, and Connections programmatically. It integrates Pulumi's infrastructure-as-code capabilities with Airbyte's simplicity and flexibility, offering an alternative to Terraform for managing cloud resources.
➤ Advanced Features of DAB (Data API Builder) to Build a REST API: This article explores using Microsoft's Data API Builder (DAB) for SQL Server, focusing on advanced features like setting up REST and GraphQL endpoints, handling POST operations via stored procedures, and configuring secure, production-ready environments on Azure VMs. It emphasizes secure connection string management and exposing APIs securely over the internet.
➤ Step-by-Step Guide to Creating Simulated Data in Python: This article introduces various methods for generating synthetic and simulated datasets using Python libraries like NumPy, Scikit-learn, SciPy, Faker, and SDV. It covers creating artificial data for tasks such as linear regression, time series analysis, classification, clustering, and statistical distributions, offering practical examples and applications for data projects and academic research.

⚡ Stay Informed with Industry Highlights

Power BI
➤ Retirement of the Windows installer for Analysis Services managed client libraries: The update announces the retirement of the Windows installer (.msi) for Analysis Services managed client libraries, effective July. Users are urged to transition to NuGet packages for AMO and ADOMD, available indefinitely. This shift ensures compatibility with current .NET frameworks and mitigates security risks by the end of 2024.

Microsoft Fabric
➤ Manage Fabric's OneLake Storage with PowerShell: This post explores managing files in Microsoft Fabric's OneLake using PowerShell. It details logging into Azure with a service principal, listing files, renaming files and folders, and configuring workspace access. PowerShell scripts automate tasks, leveraging Azure Data Lake Storage via Fabric's familiar environment for data management.

AWS BI
➤ Build pixel-perfect reports with ease using Amazon Q in QuickSight: This blog post introduces Amazon Q's generative AI capabilities, now available in Amazon QuickSight, emphasizing pixel-perfect report creation. Users can leverage natural language to rapidly design and distribute visually rich reports, enhancing data presentation, decision-making, and security in business contexts, all seamlessly integrated within QuickSight's ecosystem.
➤ Author data integration jobs with an interactive data preparation experience with AWS Glue visual ETL: This article introduces the new data preparation capabilities in AWS Glue Studio's visual editor, offering a spreadsheet-style interface for creating and managing data transformations without coding. Users can leverage prebuilt transformations to preprocess data efficiently for analytics, demonstrating a streamlined ETL process within the AWS ecosystem.

Google Cloud Data
➤ Run your PostgreSQL database in an AlloyDB free trial cluster: Google's AlloyDB introduces advanced PostgreSQL-compatible capabilities, offering up to 2x better price-performance than self-managed PostgreSQL. It includes AI-assisted management, seamless integration with Vertex AI for generative AI, and innovative features like Gemini in Databases for enhanced development and scalability.
➤ Share Pub/Sub topics in Analytics Hub: Google introduces Pub/Sub topic sharing in Analytics Hub, enabling organizations to curate, share, and monetize streaming data assets securely. This feature integrates with Analytics Hub to manage accessibility across teams and external partners, facilitating real-time data exchange for various industries like retail, finance, advertising, and healthcare.

Tableau
➤ What to Know About Tableau Cloud Migration to Hyperforce? Salesforce's Hyperforce platform revolutionizes cloud computing with enhanced scalability, security, and compliance. Tableau Cloud is transitioning to Hyperforce in 2024, promising an unchanged user experience with improved resiliency, expanded global availability, and faster compliance certifications, leveraging Salesforce's advanced infrastructure for innovation in cloud analytics.

✨ Expert Insights from Packt Community

Active Machine Learning with Python: Refine and elevate data quality over quantity with active learning
By Margaux Masson-Forsythe

Getting familiar with the active ML tools

Throughout this book, we've introduced and discussed several key active ML tools and labeling platforms, including Lightly, Encord, LabelBox, Snorkel AI, Prodigy, modAL, and Roboflow. To further enhance your understanding and assist you in selecting the most suitable tool for your specific project needs, let's revisit these tools with expanded insights and introduce a few additional ones:

➤ modAL: This is a flexible and modular active ML framework in Python, designed to seamlessly integrate with scikit-learn. It stands out for its extensive range of query strategies, which can be tailored to various active ML scenarios. Whether you are dealing with classification, regression, or clustering tasks, modAL provides a robust and intuitive interface for implementing active learning workflows.
➤ Label Studio: An open source, multi-type data labeling tool, Label Studio excels in its adaptability to different forms of data, including text, images, and audio. It allows for the integration of ML models into the labeling process, thereby enhancing labeling efficiency through active ML. Its flexibility extends to customizable labeling interfaces, making it suitable for a broad range of applications in data annotation.
➤ Prodigy: Prodigy offers a unique blend of active ML and human-in-the-loop approaches. It's a highly efficient annotation tool, particularly for refining training data for NLP models. Its real-time feedback loop allows for rapid iteration and model improvement, making it an ideal choice for projects that require quick adaptation and precision in data annotation.
➤ Lightly: Specializing in image datasets, Lightly uses active ML to identify the most representative and diverse set of images for training. This ensures that models are trained on a balanced and varied dataset, leading to improved generalization and performance. Lightly is particularly useful for projects where data is abundant but labeling resources are limited.
➤ Encord Active: Focused on active ML for image and video data, Encord Active is integrated within a comprehensive labeling platform. It streamlines the labeling process by identifying and prioritizing the most informative samples, thereby enhancing efficiency and reducing the manual annotation workload. This platform is particularly beneficial for large-scale computer vision projects.
➤ Cleanlab: Cleanlab stands out for its ability to detect, quantify, and rectify label errors in datasets. This capability is invaluable for active ML, where the quality of the labeled data directly impacts model performance. It offers a systematic approach to ensuring data integrity, which is crucial for training robust and reliable models.
➤ Voxel51: With a focus on video and image data, Voxel51 provides an active ML platform that prioritizes the most informative data for labeling. This enhances the annotation workflow, making it more efficient and effective. The platform is particularly adept at handling complex, large-scale video datasets, offering powerful tools for video analytics and ML.
➤ UBIAI: UBIAI is a tool that specializes in text annotation and supports active ML. It simplifies the process of training and deploying NLP models by streamlining the annotation workflow. Its active ML capabilities ensure that the most informative text samples are prioritized for annotation, thus improving model accuracy with fewer labeled examples.
➤ Snorkel AI: Renowned for its novel approach to creating, modeling, and managing training data, Snorkel AI uses a technique called weak supervision. This method combines various labeling sources to reduce the dependency on large labeled datasets, complementing active ML strategies to create efficient training data pipelines.
➤ Deepchecks: Deepchecks offers a comprehensive suite of validation checks that are essential in an active ML context. These checks ensure the quality and diversity of datasets and models, thereby facilitating the development of more accurate and robust ML systems. It's an essential tool for maintaining data integrity and model reliability throughout the ML lifecycle.
➤ LabelBox: As a comprehensive data labeling platform, LabelBox excels in managing the entire data labeling process. It provides a suite of tools for creating, managing, and iterating on labeled data, applicable to a wide range of data types such as images, videos, and text. Its support for active learning methodologies further enhances the efficiency of the labeling process, making it an ideal choice for large-scale ML projects.
➤ Roboflow: Designed for computer vision projects, Roboflow streamlines the process of preparing image data. It is especially valuable for tasks involving image recognition and object detection. Roboflow's focus on easing the preparation, annotation, and management of image data makes it a key resource for teams and individuals working in the field of computer vision.

Each tool in this extended list brings unique capabilities to the table, addressing specific challenges in ML projects. From image and video annotation to text processing and data integrity checks, these tools provide the necessary functionalities to enhance project efficiency and efficacy through active ML strategies.

This excerpt is from the latest book, "Active Machine Learning with Python: Refine and elevate data quality over quantity with active learning" by Margaux Masson-Forsythe. Unlock access to the full book and a wealth of other titles with a 7-day free trial in the Packt Library. Start exploring today!

💡 What's the Latest Scoop from the BI Community?
➤ Data Orchestration: The Dividing Line Between Generative AI Success and Failure. This blog explores data orchestration's pivotal role in scaling generative AI deployments, using Apache Airflow via Astronomer's Astro. It highlights real-world cases where Airflow optimizes workflows, ensuring efficient resource use, stability, and scalability in AI applications from conversational AI to content generation and reasoning.
➤ Data Migration From GaussDB to GBase8a: This tutorial discusses exporting data from GaussDB to GBase8a, comparing methods like using the GDS tool for remote and local exports, and gs_dump for database exports. It includes practical examples and considerations for importing data into GBase8a MPP.
➤ Diagnosing and Optimizing Running Slow SQL: This tutorial covers detecting and optimizing slow SQL queries for enhanced database performance. Methods include using SQL queries to identify high-cost statements and system commands like `onstat` to monitor active threads and session details, aiding in pinpointing bottlenecks and applying optimization strategies effectively.
➤ Migrate a SQL Server Database to a PostgreSQL Database: This article outlines migrating a marketing database from SQL Server to PostgreSQL using PySpark and JDBC drivers. Steps include schema creation, table migration with constraints, setting up Spark sessions, connecting databases, and optimizing PostgreSQL performance with indexing. It emphasizes data integrity and efficiency in data warehousing practices.
➤ Create Document Templates in a SQL Server Database Table: This blog discusses the use of content templates to streamline the management and storage of standardized information across various documents and databases, focusing on enhancing efficiency, consistency, and data accuracy in fields such as contracts, legal agreements, and medical interactions.
➤ OMOP & DataSHIELD: A Perfect Match to Elevate Privacy-Enhancing Healthcare Analytics? The blog discusses Federated Analytics as a solution for cross-border data challenges in healthcare studies. It promotes decentralized statistical analysis to preserve data privacy and enable multi-site collaborations without moving sensitive data. Integration efforts between DataSHIELD and OHDSI aim to enhance analytical capabilities while maintaining data security and quality in federated networks.

CraftAssist: An open-source framework to enable interactive bots in Minecraft by Facebook researchers

Vincy Davis
19 Jul 2019
5 min read
Two days ago, researchers from Facebook AI Research published a paper titled "CraftAssist: A Framework for Dialogue-enabled Interactive Agents". The authors of this research are Facebook AI research engineers Jonathan Gray and Kavya Srinet; Facebook AI research scientists C. Lawrence Zitnick and Arthur Szlam; and Yacine Jernite, Haonan Yu, Zhuoyuan Chen, Demi Guo, and Siddharth Goyal. The paper describes the implementation of an assistant bot called CraftAssist, which appears and interacts like another player in the open sandbox game Minecraft. The framework enables players to interact with the bot via in-game chat through various implemented tools and platforms, and to record these interactions through the in-game chat. The main aim of the bot is to be a useful and entertaining assistant for all the tasks listed and evaluated by the human players.

Image Source: CraftAssist paper

To motivate the wider AI research community to use the CraftAssist platform in their own experiments, Facebook researchers have open-sourced the framework, the baseline assistant, the data, and the models. The released data includes the functions used to build the 2,586 houses in Minecraft, the labeling data for the walls, roofs, etc. of the houses, human rephrasings of fixed commands, and the conversion of natural language commands to bot-interpretable logical forms. The technology that allows the recording of human and bot interaction on a Minecraft server has also been released, so researchers will be able to independently collect data.

Why is the Minecraft protocol used?

Minecraft is a popular multiplayer volumetric pixel (voxel) 3D game based on building and crafting, which allows multiplayer servers and players to collaborate and build, survive, or compete with each other. It operates through a client and server architecture. The CraftAssist bot acts as a client and communicates with the Minecraft server using the Minecraft network protocol. The Minecraft protocol allows the bot to connect to any Minecraft server without the need for installing server-side mods. This lets the bot easily join a multiplayer server along with human players or other bots. It also lets the bot join an alternative server which implements the server-side component of the Minecraft network protocol. The CraftAssist bot uses the third-party open source Cuberite server, a fast and extensible game server for Minecraft.

Read More: Introducing Minecraft Earth, Minecraft's AR-based game for Android and iOS users

How does CraftAssist function?

The block diagram below demonstrates how the bot interprets incoming in-game chats and reaches the desired target.

Image Source: CraftAssist paper

Firstly, the incoming text is transformed into a logical form called the action dictionary (a hypothetical example appears at the end of this article). The action dictionary is then translated by a dialogue object which interacts with the memory module of the bot. This produces an action or a chat response to the user. The bot's memory uses a relational database which is structured to recognize the relations between stored items of information. The major advantage of this type of memory is that it makes it easy to convert the semantic parser's output into fully specified tasks. The bot responds to higher-level actions, called Tasks. Tasks are interruptible processes which follow a clear objective of step-by-step actions.
A Task can adjust to long pauses between steps and can also push other Tasks onto a stack, the way functions can call other functions in a standard programming language. Move, Build, and Destroy are a few of the many basic Tasks assigned to the bot.

The Dialogue Manager checks for illegal or profane words, then queries the semantic parser. The semantic parser takes the chat as input and produces an action dictionary. The action dictionary indicates that the text is a command given by a human and then specifies the high-level action to be performed by the bot. Once a task is created and pushed onto the Task stack, it is the responsibility of the command task, Move for example, to compare the bot's current location to the target location. The bot then undertakes a sequence of low-level step movements to reach the target.

The core of the bot's understanding of natural language depends on a neural semantic parser called the Text-to-Action-Dictionary (TTAD) model. This model receives the incoming command/chat and then classifies it into an action dictionary which is interpreted by the Dialogue Object.

The CraftAssist framework thus enables bots in Minecraft to interact and play with players by understanding human interactions, using the implemented tools. The researchers hope that since the dataset of CraftAssist is now open-sourced, more developers will be empowered to contribute to this framework by assisting or training the bots, which might lead to the bots learning from human dialogue interactions in the future.

Developers have found the CraftAssist framework interesting.

https://twitter.com/zehavoc/status/1151944917859688448

A user on Hacker News comments, "Wow, this is some amazing stuff! Congratulations!"

Check out the paper CraftAssist: A Framework for Dialogue-enabled Interactive Agents for more details.

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects
What to expect in Unreal Engine 4.23?
A study confirms that pre-bunk game reduces susceptibility to disinformation and increases resistance to fake news
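For a sense of what the TTAD model's output looks like, here is a hypothetical action dictionary for a chat command like "build a tower at 5 63 10". The field names below are illustrative only; the exact schema is defined in the CraftAssist paper and repository:

// Illustrative only -- not the paper's exact schema.
const actionDictionary = {
  dialogue_type: 'HUMAN_GIVE_COMMAND',
  action: {
    action_type: 'BUILD',                    // maps to the Build Task
    schematic: { has_name: 'tower' },        // what to build
    location: { coordinates: [5, 63, 10] }   // where to build it
  }
};

A Dialogue Object would interpret this dictionary, consult the bot's memory for a matching schematic, and push a Build Task onto the Task stack.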


No more free Java SE 8 updates for commercial use after January 2019

Prasad Ramesh
20 Aug 2018
2 min read
Oracle-owned Java will no longer receive free public updates of Java SE 8 for commercial use after January 2019. This move is part of Oracle's long-term support (LTS) plan. For individual personal use, however, public updates for Oracle Java SE 8 will be available at least until December 2020.

Borrowing ideas from Linux releases, Oracle Java releases will now follow an LTS model. Releases arrive twice a year; one release in each cycle is designated LTS, and Oracle provides long-term support for it (three years). Other releases stop receiving updates when the next version is released.

If you are using Java for individual personal use, you will have the same access to Oracle Java SE 8 updates until the end of 2020. In most cases, the Java-based applications that run on your PC are licensed by a company other than Oracle - for example, games developed by gaming companies. Beyond the date mentioned, access to updates depends on how that company plans to provide application support.

Developers are recommended to review Oracle's release roadmap for Java SE 8 and other versions, and take action to support their applications accordingly. The next LTS version, Java 11, is set to roll out in September 2018.

Here is the Oracle Java SE Support Roadmap:

Release             | GA Date        | Premier Support Until | Extended Support Until | Sustaining Support
6                   | December 2006  | December 2015         | December 2018          | Indefinite
7                   | July 2011      | July 2019             | July 2022              | Indefinite
8                   | March 2014     | March 2022            | March 2025             | Indefinite
9 (non-LTS)         | September 2017 | March 2018            | Not Available          | Indefinite
10 (18.3^, non-LTS) | March 2018     | September 2018        | Not Available          | Indefinite
11 (18.9^, LTS)     | September 2018 | September 2023        | September 2026         | Indefinite
12 (19.3^, non-LTS) | March 2019     | September 2019        | Not Available          | Indefinite

The roadmap for web deployment and JavaFX is different and is listed on Oracle's website. This video explains the LTS release model for Java; for more information, visit the official update.

Mark Reinhold on the evolution of Java platform and OpenJDK
Build Java EE containers using Docker [Tutorial]
5 Things you need to know about Java 10


Introducing PostgREST, a REST API for any PostgreSQL database written in Haskell

Bhagyashree R
04 Nov 2019
3 min read
Written in Haskell, PostgREST is a standalone web server that turns your existing PostgreSQL database into a RESTful API. It offers you a much "cleaner, more standards-compliant, faster API than you are likely to write from scratch." The PostgREST documentation describes it as an "alternative to manual CRUD programming." Explaining the motivation behind the tool, the documentation reads, "Writing business logic often duplicates, ignores or hobbles database structure. Object-relational mapping is a leaky abstraction leading to slow imperative code. The PostgREST philosophy establishes a single declarative source of truth: the data itself." A sketch of querying such an API appears at the end of this article.

Performant by design

In terms of performance, PostgREST shows subsecond response times for up to 2000 requests/sec on the Heroku free tier. The main contributor to this impressive performance is its Haskell implementation using the Warp HTTP server. To maintain fast response times, it delegates as much of the calculation as possible to the database, including serializing JSON responses directly in SQL, data validation, and more. Along with that, it uses the Hasql library to work with the database efficiently.

A single declarative source of truth for security

PostgREST handles authentication via JSON Web Tokens (JWT). You can also build other forms of authentication on top of the JWT primitive. It delegates authorization to the role information defined in the database to ensure there is a single declarative source of truth for security.

Data integrity

PostgREST does not rely on an Object-Relational Mapper (ORM) or custom imperative coding. Instead, developers put declarative constraints directly into their database, preventing any kind of data corruption.

In a Hacker News discussion, many users praised the tool. "I think PostgREST is the first big tool written in Haskell that I've used in production. From my experience, it's flawless. Kudos to the team," a user commented. Some others expressed concern that using the tool for systems in production can complicate things in the long run. A user added, "Somebody in our team put this on production. I guess this solution has some merits if you need something quick, but in the long run it turned out to be painful. It's basically SQL over REST. Additionally, your DB schema becomes your API schema and that either means you force one for the purposes of the other or you build DB views to fix that."

You can read about PostgREST on its official website. Also, check out its GitHub repository.

After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases' offering
Amazon Aurora makes PostgreSQL Serverless generally available
PostgreSQL 12 progress update
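To illustrate, once PostgREST is running (port 3000 is its conventional default), every table or view in the exposed schema becomes an endpoint, with filters written in the query string. The films table and its columns below are hypothetical; the eq. filter and select parameter are standard PostgREST syntax:

// Fetch title and year for all rows in "films" where year = 1994.
fetch('http://localhost:3000/films?year=eq.1994&select=title,year')
  .then(res => res.json())            // PostgREST serializes the rows as JSON
  .then(films => console.log(films)); // e.g. [{ title: '...', year: 1994 }, ...]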


Snapchat source code leaked and posted to GitHub

Richard Gall
09 Aug 2018
2 min read
Source code for what is believed to be a small part of Snapchat's iOS application was posted on GitHub after being leaked back in May. After being notified, Snap Inc., Snapchat's parent company, immediately filed a DMCA request to GitHub to get the code removed. A copy of the request was found by a security researcher tweeting from the handle @x0rz, who shared a link to a copy of the request on GitHub:

https://twitter.com/x0rz/status/1026735377955086337

You can read the DMCA request in full here.

Part of the Snap Inc. DMCA request to GitHub

The initial leak back in May was caused by an update to the Snapchat iOS application. A spokesperson for Snap Inc. explained to CNET: "An iOS update in May exposed a small amount of our source code and we were able to identify the mistake and rectify it immediately... We discovered that some of this code had been posted online and it has been subsequently removed. This did not compromise our application and had no impact on our community."

This code was then published by someone using the name Khaled Alshehri, believed to be based in Pakistan, on GitHub. The repository they created, called Source-SnapChat, has now been taken down. A number of posts linked to the GitHub account suggest that the leaker had tried to contact Snapchat but had been ignored. "I will post it again until I get a reply," they said.

https://twitter.com/i5aaaald/status/1025639490696691712

Leaked Snapchat code is still being traded privately

Although GitHub has taken the repo down, it's not hard to find people claiming they have a copy of the code that they're willing to trade:

https://twitter.com/iSn0we/status/1026738393353465858

Now that the code is out in the wild, it will take more than a DMCA request to get things under control. Although it would appear the leaked code isn't substantial enough to give much away to potential cybercriminals, it's likely that Snapchat is now working hard to make the changes required to tighten its security.

Read next

Snapchat is losing users – but revenue is up
15 year old uncovers Snapchat's secret visual search function

MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process

Richard Gall
12 Mar 2019
4 min read
MongoDB's Server Side Public License was controversial when it was first announced back in October. But the team were, back then, confident that the new license met the Open Source Initiative's approval criteria. However, things seem to have changed. The news that Red Hat was dropping MongoDB over the SSPL in January was a critical blow and appears to have dented MongoDB's ambitions. Last Friday, co-founder and CTO Eliot Horowitz announced that MongoDB had withdrawn its submission to the Open Source Initiative.

Horowitz wrote on the OSI approval mailing list that "the community consensus required to support OSI approval does not currently appear to exist regarding the copyleft provision of SSPL." Put simply, the debate around MongoDB's SSPL appears to have led its leadership to reconsider its approach.

Update: this article was amended 19.03.2019 to clarify that the Server Side Public License only requires commercial users (i.e. X-as-a-Service products) to open source their modified code. Any other users can still modify and use MongoDB code for free.

What's the purpose of MongoDB's Server Side Public License?

The Server Side Public License was developed by MongoDB as a means of protecting the project from "large cloud vendors" who want to "capture all of the value but contribute nothing back to the community." Essentially, the license includes a key modification to section 13 of the standard GPL (General Public License) that governs most open source software available today. You can read the SSPL in full here, but this is the crucial sentence:

"If you make the functionality of the Program or a modified version available to third parties as a service, you must make the Service Source Code available via network download to everyone at no charge, under the terms of this License."

This means that users are free to review, modify, and distribute the software or redistribute modifications to the software. It's only if a user modifies or uses the source code as part of an as-a-service offering that the full service must be open sourced. So essentially, anyone is free to modify MongoDB; it's only when you offer MongoDB as a commercial service that the conditions of the SSPL require you to open source the entire service.

What issues do people have with the Server Side Public License?

The logic behind the SSPL seems sound, and probably makes a lot of sense in the context of an open source landscape that's almost being bled dry. But it presents a challenge to the very concept of open source software, where the idea that software should be free to use, modify, and, indeed, profit from is absolutely central. Moreover, even if it makes sense as a way of defending open source projects from the power of multinational tech conglomerates, the consequences of the license could arguably harm smaller tech companies. As one user on Hacker News explained back in October:

"Let [sic] say you are a young startup building a cool SaaS solution. E.g. A data analytics solution. If you make heavy use of MongoDB it is very possible that down the line the good folks at MongoDB come calling since 'the value of your SaaS derives primarily from MongoDB...' So at that point you have two options - buy a license from MongoDB or open source your work (which they can conveniently leverage at no cost)."

The Hacker News thread is very insightful on the reasons why the license has been so controversial. Another Hacker News user, for example, described the license as "either idiotic or malevolent."
Read next: We need to encourage the meta-conversation around open source, says Nadia Eghbal [Interview]

What next for the Server Side Public License?

The submission might have been withdrawn, but Horowitz and MongoDB are still optimistic that they can find a solution. "We are big believers in the importance of open source and we intend to continue to work with these parties to either refine the SSPL or develop an alternative license that addresses this issue in a way that will be accepted by the broader FOSS community," he said.

Whatever happens next, it's clear that there are some significant challenges for the open source world that will require imagination, and maybe even some risk-taking, to properly solve.


Storage savings with Table Compression from Blog Posts - SQLServerCentral

Anonymous
31 Dec 2020
2 min read
In one of my recent assignments, my client asked me for a solution to reduce the disk space requirement of the staging database of an ETL workload. That led me to study and compare the Table Compression feature of SQL Server. This article will not explain Compression itself but will compare the storage and performance aspects of compressed vs non-compressed tables. I found a useful article on Compression written by Gerald Britton; it's quite comprehensive and covers most aspects of Compression.

For my POC, I used an SSIS package. I kept two data flows with the same table and file structure, one with Table Compression enabled and one without. The table and file had around 100 columns, all of the VARCHAR datatype, since the POC was for a staging database that temporarily holds raw data from flat files. I also had to work on converting the flat file source output columns to make them compatible with the destination SQL Server table structure.

The POC was done with various file sizes because we also wanted to identify the optimal file size. So we did two things in a single POC: compared Compression and found the optimal file size for the ETL process. The POC itself was very simple, with two data flows; both had flat files as the source and a SQL Server table as the destination. Here is the comparison recorded after the POC. I think you will find it useful in deciding whether Compression is worth implementing in your workload. (A minimal T-SQL sketch of the commands involved follows at the end of this article.)

Findings

Space saving: approx. 87%.
Write execution time: no difference.
Read execution time: a slight, negligible difference. A plain SELECT statement was executed to compare read execution time; the compressed table took 10-20 seconds more, which is less than about 2%.

Compared to the disk space saved, this slight overhead was acceptable in our workload. However, you should review your own case thoroughly before making any decision.

The post Storage savings with Table Compression appeared first on SQLServerCentral.
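For reference, here is a minimal T-SQL sketch of the commands such a POC revolves around. The table name dbo.StagingRaw is hypothetical, and PAGE compression is only one of the options SQL Server offers (ROW is the other):

```sql
-- Estimate the savings first, using SQL Server's built-in procedure.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'StagingRaw',  -- hypothetical staging table
     @index_id         = NULL,          -- NULL = all indexes (0 = heap)
     @partition_number = NULL,          -- NULL = all partitions
     @data_compression = 'PAGE';

-- Rebuild the table with PAGE compression enabled.
ALTER TABLE dbo.StagingRaw
REBUILD WITH (DATA_COMPRESSION = PAGE);
```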


Grain: A new functional programming language that compiles to Webassembly

Richa Tripathi
03 Aug 2018
2 min read
Grain is a strongly-typed functional programming language built for the modern web by leveraging the work done by the WebAssembly project. Unlike other languages used on the web today (like TypeScript or Elm) that compile into JavaScript, Grain compiles all the way down to WebAssembly, supported by a tiny JavaScript runtime that gives it access to web features WebAssembly doesn't support yet. It was designed specifically to serve web developers. (A sketch of how a compiled WebAssembly module is loaded in the browser follows at the end of this article.) Its key language features are as follows:

No runtime type errors

Grain does not need any kind of type annotations, yet every piece of Grain code developers write is thoroughly checked for type errors. Developers do not have to deal with runtime exceptions, achieving full type safety with none of the fuss.

Being functional, but flexible

Grain is geared towards functional programming but understands the web isn't as pure as we would like it to be. It lets you easily write what's appropriate for the scenario.

Embracing new web standards

Grain is built on top of WebAssembly, a brand-new technology that represents a paradigm shift in web development. WebAssembly is a bytecode format executed in a web browser, which allows an application to be deployed to any device with a compliant web browser without going through explicit installation steps.

TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
Elm and TypeScript – Static typing on the Frontend
Tools in TypeScript
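To illustrate what compiling "all the way down to WebAssembly" means in practice, here is a hedged TypeScript sketch that loads a compiled module in the browser using the standard WebAssembly JavaScript API. The file name main.gr.wasm and the exported hello function are hypothetical, and Grain's small runtime performs this kind of wiring for you:

```typescript
// Sketch: instantiating a compiled WebAssembly module in the browser.
// "main.gr.wasm" and the exported "hello" function are hypothetical;
// a language runtime like Grain's would normally handle this step.
async function loadModule(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("main.gr.wasm"), // hypothetical compiled artifact
    {}                     // imports object; runtime shims would go here
  );
  const hello = instance.exports.hello as () => void;
  hello(); // call the module's exported function
}

loadModule().catch(console.error);
```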

YouTube to reduce recommendations of ‘conspiracy theory’ videos that misinform users in the US

Natasha Mathur
28 Jan 2019
3 min read
YouTube announced an update regarding recommendations last week. As per the update, YouTube aims to reduce recommendations of videos that promote misinformation (e.g., conspiracy videos, false claims about historical events, flat-earth videos) and affect users in harmful ways, to improve the user experience on the platform. YouTube states that the change will be gradual and currently applies to less than 1% of the overall videos on YouTube.

"To be clear, this will only affect recommendations of what videos to watch, not whether a video is available on YouTube. As always, people can still access all videos that comply with our Community Guidelines," states the YouTube team. YouTube is also working on eliminating content that "comes close" to violating its community guidelines.

The change uses machine learning along with human evaluators and experts from all over the United States to train the machine learning systems responsible for generating recommendations. Evaluators are trained using public guidelines and offer their input on the quality of a video. Currently, the change is applied only to a small set of videos in the US because the machine learning systems are not yet very accurate. The update will roll out in other countries once the systems become more efficient.

YouTube continually updates its systems to improve the user experience on its platform. For instance, it has taken steps against clickbait content in the past and keeps updating its systems to put more focus on viewer satisfaction instead of views, while making sure not to recommend clickbait videos as often. The YouTube team also mentions that YouTube now presents recommendations from a wider set of topics (instead of many similar recommendations) and that hundreds of changes have been made to optimize the quality of recommendations for users.

"It's just another step in an ongoing process, but it reflects our commitment and sense of responsibility to improve the recommendations experience on YouTube. We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users," states the YouTube team.

Public reaction to the news is varied, with some calling YouTube's new move 'censorship' while others appreciate it:

https://twitter.com/Purresnol/status/1089022759546601472
https://twitter.com/therealTTG/status/1088826997189591040
https://twitter.com/nomo_BS/status/1089007519706550272
https://twitter.com/Mattlennial/status/1089008644589604866
https://twitter.com/politic_sky/status/1089006646288941056

YouTube bans dangerous pranks and challenges
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
Is YouTube's AI Algorithm evil?


React introduces Hooks, a JavaScript function to allow using React without classes

Bhagyashree R
26 Oct 2018
3 min read
Yesterday, the React community introduced Hooks, a new feature proposal that has landed in React 16.7.0-alpha. With the help of Hooks, you will be able to "hook into" and use React state and other React features from function components. Notably, Hooks don't work inside classes; they let you use React without classes. (A minimal sketch using the proposed useState Hook follows at the end of this article.)

Why are Hooks being introduced?

Easily reuse React components

Currently, there is no way to attach reusable behavior to a component. Some patterns, like render props and higher-order components, try to solve this problem to some extent, but you need to restructure your components to use them. Hooks make it easier to extract stateful logic from a component so it can be tested independently and reused, all without having to change your component hierarchy. You can also easily share Hooks among many components or with the community.

Splitting related components

In React, each lifecycle method often contains a mix of unrelated logic. Code that changes together frequently gets split across lifecycle methods, while completely unrelated code ends up combined in a single method, which can introduce bugs and inconsistencies. Rather than forcing a component split based on lifecycle methods, Hooks let you split components into smaller functions based on which pieces are related.

Use React without classes

One of the hurdles people face while learning React is classes. You have to understand how this works in JavaScript, which is very different from how it works in most languages, and you have to remember to bind event handlers. Hooks solve these problems by letting you use more of React's features without classes. React components have always been closer to functions, and Hooks embrace functions without sacrificing the practical spirit of React. With Hooks, you get access to imperative escape hatches without having to learn complex functional or reactive programming techniques.

Hooks are completely opt-in and 100% backward compatible. Following community feedback, which seems to be positive going by the ongoing discussion on the RFC, this feature, currently in an alpha release, will be introduced in React 16.7. To know more about React Hooks, check out the official announcement.

React 16.6.0 releases with a new way of code splitting, and more!
InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
RxDB 8.0.0, a reactive, offline-first, multiplatform database for JavaScript released!
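As a concrete illustration, here is a minimal sketch of a counter written with the proposed useState Hook. The Counter component is our own invention, but useState itself comes straight from the Hooks proposal in React 16.7.0-alpha:

```tsx
// Minimal sketch of the proposed useState Hook (React 16.7.0-alpha).
// The Counter component is hypothetical; useState is part of the Hooks proposal.
import React, { useState } from "react";

function Counter() {
  // useState returns the current state value and a setter,
  // with no class, no constructor, and no `this` binding.
  const [count, setCount] = useState(0);

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```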