
Tech News


ActiveState adds thousands of curated Python packages to its platform

Fatema Patrawala
28 Nov 2019
3 min read
On Tuesday, ActiveState, a Canadian software company, announced that it has added thousands of curated Python packages to its ActiveState Platform. ActiveState helps enterprises scale securely with open source languages and offers developers a variety of tools. More than 2 million developers and 97% of Fortune 1,000 enterprises use ActiveState to support mission-critical systems and speed up their software development process.

The ActiveState Platform is a SaaS platform for open source language automation, used to centrally build, certify, and resolve runtime environments. It incorporates more than 20 years of engineering expertise to automate much of the complexity associated with building, maintaining, and sharing Python and Perl runtimes. With minimal knowledge, a developer can automatically build open source language runtimes, resolve dependencies, and certify them against compliance and security criteria. The result is a consistent, reproducible runtime from development to production.

In this latest installment, the company has added more than 50,000 package versions covering the most popular Python 2 and 3 packages, as well as their dependencies. These dependencies can be automatically resolved, built, and packaged into runtimes to eliminate issues.

“Python is one of the most popular programming languages on the planet right now, so it's no wonder that the majority of the more than 200,000 developers on the ActiveState Platform are asking us to do more to support their Python development efforts. In order to ensure our customers can automatically build all Python packages, even those that contain C code, we're designing systems to vet the code and metadata for every package in PyPI. Today's release is a significant first step toward that goal,” says Jeff Rouse, vice president of product management at ActiveState.
The company is preparing for Python 2's end of life and, in the process, has vetted thousands of key Python 2 packages critical to supporting customers' Python 2 applications. In addition, it has added many of the most popular Python 3 packages to support its broad customer base. This is a significant milestone on the road to making all of the Python Package Index (PyPI) available on the ActiveState Platform. To know more about this news, check out the official press release by the company.

Read next:
Listen: How ActiveState is tackling “dependency hell” by providing enterprise-level support for open source programming languages [Podcast]
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
Python 3.9 alpha 1 is now ready for testing
PyPI announces 2FA for securing Python package downloads
Getting Started with Python Packages


Unreal Engine 4.22 update: support added for Microsoft’s DirectX Raytracing (DXR)

Melisha Dsouza
15 Feb 2019
3 min read
On 12th February, Epic Games released a preview build of Unreal Engine 4.22, and a major upgrade among numerous other features and fixes is support for real-time ray tracing and path tracing. The new build extends preliminary support for Microsoft's DirectX Raytracing (DXR) extensions to the DirectX 12 API. Developers can now try their hands at ray-traced games built in Unreal Engine 4. Very few games currently support ray tracing: only Battlefield V (ray-traced reflections) and Metro Exodus (ray-traced global illumination) feature ray tracing effects, and those are built in the proprietary Frostbite 3 and 4A game engines, respectively.

[box type="shadow" align="" class="" width=""]Fun Fact: Ray tracing is a much more advanced and lifelike way of rendering light and shadows in a scene. Movies and TV shows use it to create and blend in amazing CG work with real-life scenes, leading to more lifelike, interactive, and immersive game worlds with more realistic lighting, shadows, and materials.[/box]

The patch notes released by the team state that low-level support for ray tracing has been added:

Added ray tracing low-level support: implemented a low-level layer on top of UE DirectX 12 that provides support for DXR and allows creating and using ray tracing shaders (ray generation shaders, hit shaders, etc.) to add ray tracing effects.
Added high-level ray tracing features:
Rect area lights
Soft shadows
Reflections and reflected shadows
Ambient occlusion
RTGI (ray-traced global illumination)
Translucency
Clearcoat
IBL
Sky
Geometry types: triangle meshes, both static and skeletal (morph targets & skin cache)
Niagara particles support
Texture LOD
Denoiser for shadows, reflections, and AO
Path Tracer: an unbiased, full-GI path tracer for making ground-truth reference renders inside UE4.
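To make the "Fun Fact" box concrete: at its core, a ray tracer repeatedly intersects rays with scene geometry. The sketch below is purely illustrative and is not Unreal Engine code (UE4's implementation is C++/HLSL); the `raySphere` function and `Vec3` type are hypothetical names for the classic ray-sphere intersection test that ray-traced shadows and reflections are built on.

```typescript
// Illustrative only: the core primitive of any ray tracer is an
// intersection test. This is NOT Unreal Engine code; it sketches the
// math that ray-traced shadows/reflections rely on.
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 =>
  ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number =>
  a.x * b.x + a.y * b.y + a.z * b.z;

// Returns the distance along the ray to the nearest hit, or null on a miss,
// by solving the quadratic |origin + t*dir - center|^2 = radius^2.
function raySphere(origin: Vec3, dir: Vec3, center: Vec3, radius: number): number | null {
  const oc = sub(origin, center);
  const a = dot(dir, dir);
  const b = 2 * dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - 4 * a * c;
  if (disc < 0) return null;           // ray misses the sphere
  const t = (-b - Math.sqrt(disc)) / (2 * a);
  return t >= 0 ? t : null;            // hit must be in front of the origin
}

// A ray shot down the z-axis at a unit sphere 5 units away hits at t = 4.
const t = raySphere({ x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: 1 },
                    { x: 0, y: 0, z: 5 }, 1);
console.log(t); // 4
```

A real engine runs this kind of test against acceleration structures for millions of rays per frame, which is why dedicated hardware (and DXR driver support) matters.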
According to HardOCP, the feature isn't technically tied to Nvidia RTX, but since Turing cards are the only ones with driver support for DirectX Raytracing at the moment, developers need an RTX 2000-series GPU to test out Unreal's ray tracing. There has been much debate about NVIDIA's RTX in the past. While the concept sounded interesting at the beginning, very few engines adopted it, simply because previous-generation processors cannot support all the features of NVIDIA's RTX. Now, with DXR in the picture, it will be interesting to see the outcome of games developed using ray tracing. Head over to Unreal Engine's official post to know more about this news.

Read next:
Implementing an AI in Unreal Engine 4 with AI Perception components [Tutorial]
Unreal Engine 4.20 released with focus on mobile and immersive (AR/VR/MR) devices
Game Engine Wars: Unity vs Unreal Engine


Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Sugandha Lahoti
04 Sep 2019
4 min read
Microsoft yesterday unveiled Static TypeScript as an alternative to embedded interpreters. Static TypeScript (STS) is an implementation of a static compiler for TypeScript that runs in the web browser. It is primarily designed to aid school children in their computer science programming projects. STS is supported by a compiler that is itself written in TypeScript, and it generates machine code that runs efficiently on microcontrollers in the target RAM range of 16-256 kB.

Microsoft's plan behind building Static TypeScript

Microcontrollers are typically programmed in C, C++, or assembly, none of which are particularly beginner-friendly. MCUs that can run modern languages such as JavaScript and Python usually rely on interpreters like IoT.js, Duktape, or MicroPython. The problem with interpreters is high memory usage, leaving little room on the devices themselves for the program developers have written. Microsoft therefore decided to come up with STS as a more efficient alternative to the embedded-interpreter approach. It is statically typed, which makes for a less surprising programming experience.

Features of Static TypeScript

STS eliminates most of the "bad parts" of JavaScript; following StrongScript, STS uses nominal typing for statically declared classes and supports efficient compilation of classes using classic vtable techniques.
The STS toolchain runs offline, once loaded into a web browser, without the need for a C/C++ compiler.
The STS compiler generates efficient and compact machine code, which unlocks a range of application domains such as game programming for low-resource devices.
Deployment of STS user programs to embedded devices does not require app or device-driver installation, just access to a web browser.
The relatively simple compilation scheme for STS leads to surprisingly good performance on a collection of small JavaScript benchmarks, often comparable to advanced, state-of-the-art JIT compilers like V8, with orders-of-magnitude smaller memory requirements.

Differences with TypeScript

In contrast to TypeScript, where all object types are bags of properties, STS has four kinds of unrelated object types at runtime:

A dynamic map type, with named (string-indexed) properties that can hold values of any type
A function (closure) type
A class type, which describes instances of a class and is treated nominally, via an efficient runtime subtype check on each field/method access
An array (collection) type

STS compiler and runtime

The STS compiler and toolchain (linker, etc.) are written solely in TypeScript. The source TypeScript program is processed by the regular TypeScript compiler to perform syntactic and semantic analysis, including type checking. The STS device runtime is mainly written in C++ and includes a bespoke garbage collector. The regular TypeScript compiler, the STS code generators, assembler, and linker are all implemented in TypeScript and run both in the web browser and on the command line. The toolchain compiles STS to Thumb machine code and links this code against a pre-compiled C++ runtime in the browser, which is often the only available execution environment in schools.

Static TypeScript is used in all MakeCode editors

STS is the core language supported by Microsoft's MakeCode framework. MakeCode provides hands-on computing education for students through projects and enables the creation of custom programming experiences for MCU-based devices. Each MakeCode editor targets programming of a specific device or device class via STS. STS supports the concept of a package: a collection of STS, C++, and assembly files that can also list other packages as dependencies.
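The class-vs-map distinction above can be illustrated in ordinary TypeScript. This is standard TypeScript run on a normal JS engine, not the STS/MakeCode toolchain, and the `Sprite` class is a hypothetical example; the point is the two object kinds that STS keeps separate at runtime (class instances dispatched via vtables and checked nominally, versus dynamic string-indexed maps):

```typescript
// Standard TypeScript, used illustratively (not STS toolchain code):
// a statically declared class vs. a dynamic map type.
class Sprite {
  constructor(public x: number, public y: number) {}
  move(dx: number, dy: number): void { this.x += dx; this.y += dy; }
}

// A dynamic map type: named (string-indexed) properties.
const config: { [key: string]: number } = { width: 160, height: 120 };

const s = new Sprite(10, 20);
s.move(5, -5);

// In STS, a check like `instanceof` is an efficient nominal subtype
// check; a structural look-alike object would not pass it, and
// field/method access on `s` compiles to fixed vtable/field offsets.
console.log(s instanceof Sprite, s.x, s.y, config["width"]);
// true 15 15 160
```

This nominal treatment is what lets the STS compiler emit direct field offsets and vtable calls instead of the hash-map property lookups a JS interpreter would need.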
This capability has been used by third parties to extend the MakeCode editors, mainly to accommodate hardware peripherals for various boards. STS is also used in MakeCode Arcade. With Arcade, STS lets developers of all skill levels easily write cool retro-style pixelated games, designed to run either inside a virtual game console in the browser or on inexpensive microcontroller-based handhelds. For more in-depth information, please read the research paper.

People were quite interested in this development. A comment on Hacker News reads, “This looks very interesting. If all it takes is dropping “with, eval, and prototype inheritance” to get fast and efficient JS execution, I’m all for it.”

Other news in tech:
TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support and more
Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs


TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Vincy Davis
29 Aug 2019
4 min read
Yesterday, Daniel Rosenwasser, Program Manager at TypeScript, announced the release of TypeScript 3.6. This is a major release, containing many new language and compiler features such as stricter generators, more accurate array spread, improved UX around promises, better Unicode support for identifiers, and more. TypeScript 3.6 also brings a new TypeScript playground, new editor features, and several breaking changes. TypeScript 3.6 beta was released last month.

Language and compiler improvements

Stricter checking for iterators and generators
Previously, generator users in TypeScript could not tell whether a value was yielded or returned from a generator. In TypeScript 3.6, thanks to changes in the Iterator and IteratorResult type declarations, a new Generator type has been introduced: an Iterator that always has both the return and throw methods present. This allows the stricter generator checker to distinguish the values coming from their iterators. TypeScript 3.6 also infers certain uses of yield within the body of a generator function, and the yield expression can be typed explicitly to enforce the types of values that can be returned, yielded, and evaluated.

More accurate array spread
On pre-ES2015 targets, TypeScript can use the --downlevelIteration flag to emit iterative constructs for arrays; however, many users found the emits it produced undesirable for arrays with no defined property slots. To address this, TypeScript 3.6 introduces a new __spreadArrays helper that will “accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration.”

Improved UX around promises
TypeScript 3.6 brings new improvements around the Promise API, one of the most common ways to work with asynchronous data. TypeScript's error messages will now inform the user when the contents of a Promise need to be awaited, or unwrapped with then(), before being passed to another function.
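The stricter generator checking can be seen with the new Generator type, which carries separate type parameters for yielded values, the returned value, and values passed back in via next(). The Generator type itself is real TypeScript (3.6+); the `counter` function is a hypothetical example:

```typescript
// Generator<Yield, Return, Next>: TypeScript 3.6 can now distinguish
// yielded values (number) from the final returned value (string).
function* counter(limit: number): Generator<number, string, void> {
  for (let i = 0; i < limit; i++) {
    yield i;           // yielded values are checked as number
  }
  return "done";       // the returned value is checked as string
}

const it = counter(2);
console.log(it.next()); // { value: 0, done: false }
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 'done', done: true }
```

At the type level, callers now see that `value` is a number while `done` is false and a string once `done` is true, which earlier versions conflated.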
In some cases, quick fixes will also be provided around the Promise API.

Better Unicode support for identifiers
TypeScript 3.6 contains better support for Unicode characters in identifiers when emitting to ES2015 and later targets.

import.meta support in SystemJS: the new version supports transforming import.meta to context.meta when the module target is set to system.

get and set accessors are allowed in ambient contexts: previous versions of TypeScript did not allow get and set accessors in ambient contexts. This has changed in TypeScript 3.6, since ECMAScript's class fields proposal has differing behavior from existing versions of TypeScript. The official post also adds, “In TypeScript 3.7, the compiler itself will take advantage of this feature so that generated .d.ts files will also emit get/set accessors.”

Read also: Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

New functions in the TypeScript playground
The TypeScript playground allows users to compile TypeScript and inspect the JavaScript output. It has more compiler options than typescriptlang, and all the strict options are turned on by default in the playground. The following functions have been added to the playground:
The target option, which allows users to switch from es5 to es3, es2015, esnext, etc.
All the strictness flags
Support for plain JavaScript files
The post also states that future versions of TypeScript can be expected to bring more features, such as JSX support and polished automatic type acquisition.

Breaking changes
Class members named "constructor" are now simply constructor functions.
DOM updates: the global window is no longer defined as type Window; instead, it is defined as type Window & typeof globalThis.
In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.
TypeScript 3.6 no longer allows certain escape sequences.
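The "more accurate array spread" work mentioned above is about making `[...x]` on older compile targets behave like real ES2015, i.e. iterating the source rather than copying indexed slots. The snippet below shows the source-level semantics the emitted __spreadArrays helper has to model; it is an illustrative sketch, not the helper itself:

```typescript
// Spec-accurate array spread must iterate the source. Spreading a Set
// or a string exercises the iterator protocol that TypeScript's
// downlevel emit (__spreadArrays / --downlevelIteration) has to model.
const dedup = [...new Set([1, 2, 2, 3])];   // iterates the Set: [1, 2, 3]
const chars = [..."hi"];                    // iterates the string: ['h', 'i']
const merged = [...dedup, ...chars.map(c => c.charCodeAt(0))];

console.log(dedup);  // [ 1, 2, 3 ]
console.log(merged); // [ 1, 2, 3, 104, 105 ]
```

On an es5 target without accurate downlevel emit, naive slot-copying would get cases like these wrong, which is what motivated the new helper.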
Developers have liked the new features in TypeScript 3.6.

https://twitter.com/zachcodes/status/1166840093849473024
https://twitter.com/joshghent/status/1167005999204638722
https://twitter.com/FlorianRappl/status/1166842492718899200

Interested users can check out TypeScript's 6-month roadmap. Visit the Microsoft blog for full details on TypeScript 3.6.

Read next:
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
Babel 7.5.0 releases with F# pipeline operator, experimental TypeScript namespaces support, and more
TypeScript 3.5 releases with ‘omit’ helper, improved speed, excess property checks and more


Facebook, Apple, Spotify pull Alex Jones content

Richard Gall
06 Aug 2018
4 min read
Social media platforms have come under considerable criticism for hosting controversial media outlets for as long as 'fake news' has been in the public lexicon. But over the past week, major actions against Alex Jones' content channels suggest that things might be changing. Apple has pulled 5 of Jones' 6 podcasts from iTunes (first reported by Buzzfeed News), while hours later on Monday 6 August, Facebook announced it was removing four of Jones' pages for breaching the platform's content guidelines. Alongside Facebook's and Apple's actions, Spotify also decided to remove Jones' content from its streaming platform and revoke his ability to publish "due to repeated violations of Spotify's prohibited content policies," according to a Spotify spokesperson. This news comes just weeks after YouTube removed a number of Infowars videos over 'hate speech' and initiated a 90-day ban on Infowars broadcasting live via YouTube.

Unsurprisingly, the move has come under attack from those who see it as an example of censorship. Even people critical of Jones' politics have voiced their unease:

https://twitter.com/realJoeBarnes/status/1026466888744947721

Elsewhere, however, the move is viewed positively, with commentators suggesting social media platforms are starting to take responsibility for the content published on their systems.

https://twitter.com/shannoncoulter/status/1025401502033039362

One thing that can be agreed is that the situation is a little confusing at the moment. And although it's true that it's time for Facebook and other platforms to take more responsibility for what they publish, there are still issues around governance and consistency that need to be worked through and resolved.

Facebook's action against Alex Jones: a recent timeline

On July 27, Alex Jones was hit with a 30-day suspension by Facebook after the company removed 4 videos from its site that contravened its content guidelines.
However, as numerous outlets reported at the time, this ban only affected Jones personally. His channels (like The Alex Jones Channel and Infowars) weren't impacted. Now, the pages that weren't hit by Jones' personal ban have been removed by Facebook. In a post published August 6, Facebook explained: "...we removed four videos on four Facebook Pages for violating our hate speech and bullying policies. These pages were the Alex Jones Channel Page, the Alex Jones Page, the InfoWars Page and the Infowars Nightly News Page..." The post also asserts that the ban is about violation of community standards, not 'false news': "While much of the discussion around Infowars has been related to false news, which is a serious issue that we are working to address by demoting links marked wrong by fact checkers and suggesting additional content, none of the violations that spurred today’s removals were related to this."

Apple's action against Alex Jones

Apple's decision to remove 5 of Alex Jones' podcasts is, according to Buzzfeed News, "one of the largest enforcement actions intended to curb conspiratorial news content by a technology company to date." Like Facebook, Apple based its decision on the content's "hate speech" rather than anything to do with 'fake news'. An Apple spokesperson explained to Buzzfeed News: "Apple does not tolerate hate speech, and we have clear guidelines that creators and developers must follow to ensure we provide a safe environment for all of our users... Podcasts that violate these guidelines are removed from our directory making them no longer searchable or available for download or streaming. We believe in representing a wide range of views, so long as people are respectful to those with differing opinions.”

Spotify's action against Alex Jones' podcasts

Spotify removed all episodes of The Alex Jones Show podcast on Monday 6 August.
This follows the music streaming platform pulling a number of individual episodes of Jones' podcast at the beginning of August, an apparent consequence of Spotify's new content guidelines, updated in May 2018, which prohibit "hate content."

The takeaway: there's still considerable confusion over content

What this debacle shows is that there's confusion about how social media platforms should deal with content that they effectively publish. Clearly, the likes of Facebook are trying to walk a tightrope, and that's going to take some time to resolve. The broader question is not just whether we want to police the platforms billions of people use, but how we do it. Arguably, social media is at the center of today's political struggles, with many platforms unsure how to manage the levels of responsibility that have landed on their algorithms.

Read next:
Time for Facebook, Twitter and other social media to take responsibility or face regulation
Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call
Spotify has “one of the most intricate uses of JavaScript in the world,” says former engineer


Node v11.0.0 released

Prasad Ramesh
24 Oct 2018
2 min read
Node v11.0.0 has been released. The focus of this release is primarily on improving internals and performance, and it updates to the stable V8 7.0.

Build and console changes in Node v11.0.0
Build: FreeBSD 10 support has been removed.
child_process: the default value of the windowsHide option is now true.
console: the console.countReset() function will emit a warning if the timer being reset does not exist, and console.time() will no longer reset a timer that already exists.

Dependency and http changes
Dependencies: the Chrome V8 engine has been updated to v7.0.
fs: the fs.read() method now requires a callback. The previously deprecated fs.SyncWriteStream utility has now been removed.
http: in Node v11.0.0, the http, https, and tls modules use the WHATWG URL parser by default.

General changes
process.binding() has been deprecated and can no longer be used; userland code using process.binding() should re-evaluate its use and initiate migration. There is also a new experimental implementation of queueMicrotask().

Internal changes
Windows performance-counter support has been removed, as has the --expose-http2 command-line option. Interval timers will now be rescheduled even if the previous interval threw an error, and the nextTick queue will be run after each immediate and timer.

Changes in utilities
The WHATWG TextEncoder and TextDecoder APIs are now global. The util.inspect() method's output size is limited to 128 MB by default. A runtime warning will be emitted when NODE_DEBUG is set for either http or http2.
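Two of the changes above are directly observable from user code on Node 11 or later, with no imports: TextEncoder/TextDecoder as globals, and the (then-experimental) queueMicrotask(). A small sketch:

```typescript
// Node v11 exposes the WHATWG TextEncoder/TextDecoder as globals.
const bytes = new TextEncoder().encode("hi");
console.log(Array.from(bytes));               // [ 104, 105 ]
console.log(new TextDecoder().decode(bytes)); // hi

// queueMicrotask() schedules a callback after the current synchronous
// code finishes, but before timers fire.
const order: string[] = [];
queueMicrotask(() => order.push("microtask"));
order.push("sync");
setTimeout(() => console.log(order), 0);      // [ 'sync', 'microtask' ]
```

On Node versions before 11, the encoder/decoder had to be imported from the util module, so code like this is a quick way to notice which runtime you are on.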
Some other additions
Other additions include:
'-z relro -z now' linker flags
an internal PriorityQueue class
an InitializeV8Platform function
a string-decoder fuzz test
a new_large_object_space heap space
warnings when NODE_DEBUG is set to http/http2
an Inspect suffix for BigInt64Array elements
For more details and a complete list of changes, visit the Node website.

Read next:
Deno, an attempt to fix Node.js flaws, is rewritten in Rust
npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
The top 5 reasons why Node.js could topple Java

Red Hat’s Quarkus announces plans for Quarkus 1.0, releases its rc1 

Vincy Davis
11 Nov 2019
3 min read
Update: On 25th November, the Quarkus team announced the release of the Quarkus 1.0.0.Final bits. Head over to the Quarkus blog for more details on the official announcement.

Last week, Red Hat's Quarkus, the Kubernetes-native Java framework for GraalVM and OpenJDK HotSpot, announced the availability of its first release candidate and notified users that its first stable version will be released by the end of this month.

Launched in March this year, the Quarkus framework uses Java libraries and standards to provide an effective solution for running Java in new deployment environments like serverless, microservices, containers, Kubernetes, and more. Java developers can employ this framework to build apps with faster startup times and less memory than traditional Java-based microservices frameworks. It also provides flexible, easy-to-use APIs that help developers build cloud-native apps with best-of-breed frameworks.

“The community has worked really hard to up the quality of Quarkus in the last few weeks: bug fixes, documentation improvements, new extensions and above all upping the standards for developer experience,” states the Quarkus team.

Latest updates added in Quarkus 1.0:
A new reactive core based on Vert.x, with support for both reactive and imperative programming models. This feature aims to make reactive programming a first-class feature of Quarkus.
A new non-blocking security layer that allows reactive authentication and authorization, and enables reactive security operations to integrate with Vert.x.
Improved Spring API compatibility, including Spring Web and Spring Data JPA, as well as Spring DI.
A Quarkus ecosystem, also called the “universe”: a set of extensions that fully support native compilation via GraalVM native image. Quarkus supports Java 8, 11, and 13 when running on the JVM, and will also support Java 11 native compilation in the near future.
Red Hat says, “Looking ahead, the community is focused on adding additional extensions like enhanced Spring API compatibility, improved observability, and support for long-running transactions.” Many users are excited about Quarkus and are looking forward to trying the stable version.

https://twitter.com/zemiak/status/1192125163472637952
https://twitter.com/loicrouchon/status/1192206531045085186
https://twitter.com/lasombra_br/status/1192114234349563905

Read next:
How Quarkus brings Java into the modern world of enterprise tech
Apple shares tentative goals for WebKit 2020
Apple introduces Swift Numerics to support numerical computing in Swift
Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more
Fastly announces the next-gen edge computing services available in private beta


MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more

Vincy Davis
02 Aug 2019
4 min read
Yesterday, the team behind iTerm2, the GPL-licensed terminal emulator for macOS, announced the release of iTerm2 3.3.0. It is a major release with many new features, such as a new Python scripting API, a new scriptable status bar, two new themes, and more. iTerm2 is a successor to iTerm and works on all versions of macOS. It is an open source replacement for Apple's Terminal and is highly customizable, as it comes with a lot of useful features.

Major highlights in iTerm2 3.3.0

A new Python scripting API, which can control iTerm2 and extend its behavior, has been added. It allows users to write Python scripts easily, enabling extensive configuration and customization in iTerm2 3.3.0.
A new scriptable status bar has been added, with 13 built-in configurable components.
iTerm2 3.3.0 comes with two new themes. The first, called Minimal, helps reduce visual clutter. The second, called Compact, can move tabs into the title bar, saving space while maintaining the general appearance of a macOS app.

Other new features in iTerm2 3.3.0

The session, tab, and window titles have been given a new appearance to make them more flexible and comprehensible. It is now possible to configure these titles separately and to select what type of information each shows per profile. These titles are integrated with the new Python scripting API.
Tab titles have new icons, which show either the running app or a fixed icon per profile.
A new toolbelt called ‘Actions’ has been introduced in iTerm2 3.3.0. It provides shortcuts to frequent actions, like sending a snippet of text.
A new utility, ‘it2git’, has been added, which allows the git status bar component to show git state on a remote host.
New support for crossed-out text (SGR 9) and for automatically restarting a session when it ends has also been added in iTerm2 3.3.0.
Other improvements in iTerm2 3.3.0

Many visual improvements, including an updated app icon
Various pages of preferences have been rearranged to make them more visually appealing
The password manager can be used to enter a password securely
A new option to log Automatic Profile Switching messages to the scripting console has been added
The performance of long scrollback histories has been improved

Users love the new features in the iTerm2 3.3.0 release, especially the new Python API, the scriptable status bar, and the new Minimal mode.

https://twitter.com/lambdanerd/status/1157004396808552448
https://twitter.com/alloydwhitlock/status/1156962293760036865
https://twitter.com/josephcs/status/1157193431162036224
https://twitter.com/dump/status/1156900168127713280

A user on Hacker News comments, “First off, wow love the status bar idea.” Another user on Hacker News says, “Kudos to Mr. Nachman on continuing to develop a terrific piece of macOS software! I've been running the 3.3 betas for a while and some of the new functionality is really great. Exporting a recording of a terminal session from the "Instant Replay" panel is very handy!”

A few users are not impressed with the iTerm2 3.3.0 features and compare it with the Terminal app. A comment on Hacker News reads, “I like having options but wouldn’t recommend iTerm. Apple’s Terminal.app is more performant rendering text and more responsive to input while admittedly having somewhat less unnecessary features. In fact, iTerm is one of the slowest terminals out there! iTerm used to have a lot of really compelling stuff that was missing from the official terminal like tabs, etc that made straying away from the canonical terminal app worth it but most of them eventually made their way to Terminal.app so nowadays it’s mostly just fluff.”

For the full list of improvements in iTerm2 3.3.0, visit the iTerm2 changelog page.

Read next:
Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!
WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

article-image-elastic-launches-helm-charts-alpha-for-faster-deployment-of-elasticsearch-and-kibana-to-kubernetes
Melisha Dsouza
12 Dec 2018
3 min read

Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

At KubeCon+CloudNativeCon, happening in Seattle this week, Elastic N.V., the pioneer behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm Charts make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing and sharing Kubernetes applications. The ease of using Kubernetes to manage containerized workloads has also led to Elastic users deploying their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade and run their applications. With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas and spaces. According to the blog post, Helm charts will serve as a “way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies.”

Why should developers consider Helm charts?

Helm charts give users the ability to leverage Kubernetes packages through the click of a button or a single CLI command. Kubernetes is sometimes complex to use, which impairs developer productivity. Helm charts improve productivity as follows:

- With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which automates deployment of their dev-test environment.
- Helm comes with “push button” deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
- Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm charts let software vendors and developers preconfigure their applications with sensible defaults, enabling users/deployers to change parameters of the application/chart through a consistent interface. Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors caused by incorrect configuration file entries or mangled deployment recipes.
- Deploying and maintaining Kubernetes applications can be tedious and error prone. Helm charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub. In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud native technologies and companies. This is another step in Elastic’s mission of building products in an open and transparent way. You can head over to Elastic’s official blog for in-depth coverage of this news. Alternatively, check out MarketWatch for more insights.

Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
How to perform Numeric Metric Aggregations with Elasticsearch
Guest Contributor
05 Jun 2018
6 min read

Everything new in Angular 6: Angular Elements, CLI commands and more

Angular started as a simple frontend library. Today it has transformed into a complete framework, known simply as ‘Angular’, with continuous version progression from 2 to the recent 6. This progression has added some amazing features to Angular, making the overall development process easier. Angular 6, the latest version, is packed with exciting new features for the whole Angular community. In this article we are going to cover some amazing features which are out with Angular 6. So let’s get started!

Angular Elements

Consider a search component that we would like to have for a specific Angular application. It can be visualized as follows. In the above application the search component uses the input ‘bat’ to fetch results on the basis of text similarity. A class named `SearchComponent` must be working beneath the app. With the advent of Angular 6, we can wrap such Angular components into custom elements. Such elements are nothing but DOM elements; in our case a combination of a textbox and divs with a composition of JavaScript functions. Once segregated, these elements can be used independently alongside any other frontend library like React.js, Vue or plain jQuery. Custom elements are a new way to set a component apart from the ng framework and use it independently.

Ivy: support for the new Angular engine, version 6 onwards

Angular 6 will introduce us (in the near future) to the new Ivy engine, which contributes to great performance and a decrease in the load time of an application. Here are some important features of Ivy you need to know.

Tree shaking

Tree shaking is an optimization step that makes sure unused code is not present in your build bundle. The tree shaking compilation is executed as part of the `ng build` command that generates the build. New to what a build or a bundle is? A build or a bundle is a ready-to-go-live set of files that needs to be deployed on the production environment.
Let’s say a frontend project needs a certain set of files in a bundle. In your Angular project there might be a component that is included but not required; assume it falls under a specific if-condition and is never executed. Normal dead code elimination tools based on static analysis work by retaining any symbols referenced in the unbundled code, so the component that was conditionally never used unfortunately remains inside the bundle. The new rendering mechanism, Render 2, is built to solve such issues. Now we can specify configuration through an instruction-based rendering technique that includes only what is required, which in turn minimizes the size of build bundles to a great extent. The new Ivy engine seems cool!

New CLI commands

With the upgrade to Angular 6, the ng CLI package provides two new commands.

ng add

As its name suggests, the ‘ng add’ command gives you the capability to add a new module/package to your current application. This may be rxjs, Material UI libraries, etc. Don’t get confused: it doesn’t install the package but simply adds one to your project whenever required. So if you are planning to add a third party library to your Angular app, make sure you install it using npm, and then add it using ng add. The automatic addition of such modules helps reduce development time by avoiding errors while adding a module.

ng update

The new Angular 6 CLI has the much awaited ‘ng update’ command. When run, this command prints a list of packages that need to be updated. In case they are already up to date, the command simply confirms that everything is in order.

Upgrading to ng 6

A fresh Angular 6 installation is not a problem. You can always follow https://update.Angular.io/ for incorporating changes with respect to updates. Here are a few things to do if you are planning to upgrade an existing project.
1. Make sure you are on Node.js version 8.9+.
2. Update your Angular CLI:
   // globally
   npm i -g @angular/cli
   // locally
   npm i @angular/cli
3. Once the Angular CLI has its latest code, the ng update command is available for use. So let us use it for updating the packages under @angular/cli as follows:
   npm update @angular/cli
4. Update the @angular/core packages using ng update as follows:
   ng update @angular/core
5. Angular has rxjs for handling asynchronicity in the application. This library also needs to be updated to rxjs 6. Here is the link for the detailed update process.
6. Update the Angular Material library that provides beautiful UI components:
   ng update @angular/material
7. Finally, run `ng serve` and test the new setup.

Besides all the amazing features listed above, Angular 6 provides support for rxjs 6 and TypeScript 2.7 with conditional type declarations, not to forget the service-worker package in Angular’s core. At the time of the Angular 6 launch, there were small breakages in command-line commands like ng update, which are fixed and stable by now. The Angular team is already working on some more incredible features like the new ng-compiler engine, @aiStore (an AI powered solutions store), @mine package for bitcoins and much more in Angular 7. Over the years, the Angular team has continued to provide dedicated support to evolve the project into one of the best that technology has to offer. With such tenacity, the whole Angular ecosystem looks poised to scale even greater heights than before. I, for one, can’t wait to see what they do next in Angular!

Author Bio: Erina is an assistant professor in the computer science department of Thakur college, Mumbai. Her enthusiasm for web technologies inspires her to contribute to freelance JavaScript projects, especially on Node.js. Her research topics were SDN and IoT, which according to her create amazing solutions for various web technologies when used together. Nowadays, she focuses on blockchain and enjoys fiddling with its concepts in JavaScript.

Why switch to Angular for web development – Interview with Minko Gechev
ng-conf 2018 highlights, the popular angular conference
Getting started with Angular CLI and build your first Angular Component
Bhagyashree R
16 Sep 2019
3 min read

Announcing Feathers 4, a framework for real-time apps and REST APIs with JavaScript or TypeScript

Last month, the creator of the Feathers web framework, David Luecke, announced the release of Feathers 4. This release brings built-in TypeScript definitions, a framework-independent authentication mechanism, improved documentation, security updates in database adapters, and more. Feathers is a web framework for building real-time applications and REST APIs with JavaScript or TypeScript. It supports various frontend technologies including React, VueJS and Angular, and works with any backend.

Read also: Getting started with React Hooks by building a counter with useState and useEffect

It basically serves as an API layer between any backend and frontend:

Source: Feathers

Unlike traditional MVC and low-level HTTP frameworks that rely on routes, controllers, or HTTP request and response handlers, Feathers uses services and hooks. This makes the application easier to understand and test, and lets developers focus on their application logic regardless of how it is being accessed. It also enables developers to add new communication protocols without updating their application code.

Key updates in Feathers 4

Built-in TypeScript definitions

The core libraries and database adapters in Feathers 4 now have built-in TypeScript definitions. With this update, you will be able to create a TypeScript Feathers application with the command-line interface (CLI).

Read also: TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

A new framework-independent authentication mechanism

Feathers 4 comes with a new framework-independent authentication mechanism that is both flexible and easier to use. It provides a collection of tools for managing username/password, JSON web token (JWT) and OAuth authentication, as well as custom authentication mechanisms.
The authentication mechanism includes the following core modules:

- A Feathers service named ‘AuthenticationService’ to register authentication mechanisms and create authentication tokens.
- The ‘JWTStrategy’ authentication strategy for authenticating JSON web token service method calls and HTTP requests.
- The ‘authenticate’ hook to limit service calls to an authentication strategy.

Security updates in database adapters

The database adapters in Feathers 4 are updated to include crucial security and usability features, some of which are:

- Querying by id: The database adapters now support additional query parameters for ‘get’, ‘remove’, ‘update’, and ‘patch’. In this release, a ‘NotFound’ error will be thrown if the record does not match the query, even if the id is valid.
- Hook-less service methods: Starting from this release, you can call a service method by simply adding a ‘_’ in front instead of using a hook. This will be useful in cases where you need the raw data from the service without triggering any of its hooks.
- Multi updates: Multi update means you can create, update, or remove multiple records at once. Though convenient, it can also open your application to queries you never intended. This is why, in Feathers 4, the team has made multi updates opt-in by disabling them by default. You can enable them by explicitly setting the ‘multi’ option.

Along with these updates, the team has also worked on the website and documentation. “The Feathers guide is more concise while still teaching all the important things about Feathers. You get to create your first REST API and real-time web-application in less than 15 minutes and a complete chat application with a REST and websocket API, a web frontend, unit tests, user registration and GitHub login in under two hours,” Luecke writes. Read Luecke’s official announcement to know what else has landed in Feathers 4.

Other news in web

5 pitfalls of React Hooks you should avoid – Kent C. Dodds
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
How to integrate a Medium editor in Angular 8
Natasha Mathur
19 Oct 2018
3 min read

PostgreSQL 11 is here with improved partitioning performance, query parallelism, and JIT compilation

After releasing PostgreSQL 11 beta 1 back in May, the PostgreSQL Global Development Group finally released PostgreSQL 11 yesterday. PostgreSQL 11 introduces features such as increased performance for partitioning, support for transactions in stored procedures, improved capabilities for query parallelism, and Just-in-Time (JIT) compilation for expressions, among other updates. PostgreSQL is a popular open source relational database management system that offers reliability, robustness, and enhanced performance measures. Let’s have a look at these features in PostgreSQL 11.

Increased performance for partitioning

PostgreSQL 11 adds the ability to partition data using a hash key, known as hash partitioning. This complements the existing ability to partition data in PostgreSQL by a list of values or by a range. Moreover, PostgreSQL 11 improves data federation abilities with functionality improvements for partitions using the PostgreSQL foreign data wrapper, postgres_fdw. For managing these partitions, PostgreSQL 11 comes with a “catch-all” default partition for data that doesn’t match a partition key, and with the ability to create primary keys, foreign keys, indexes and triggers on partitioned tables. The latest release also offers support for automatic movement of rows to the correct partition when the partition key for a row is updated. Additionally, PostgreSQL 11 enhances query performance when reading from partitions with the help of a new partition elimination strategy. It also supports the popular “upsert” feature on partitioned tables, which helps users simplify application code and reduce network overhead when interacting with their data.

Support for transactions in stored procedures

PostgreSQL 11 adds SQL procedures that can perform full transaction management within the body of a function.
This enables developers to build advanced server-side applications, such as those involving incremental bulk data loading. SQL procedures can now be created using the CREATE PROCEDURE command and executed using the CALL command. They are supported by the server-side procedural languages PL/pgSQL, PL/Perl, PL/Python, and PL/Tcl.

Improved capabilities for query parallelism

PostgreSQL 11 enhances parallel query performance through gains in parallel sequential scans and hash joins, along with more efficient scans of partitioned data. It also adds parallelism for a range of data definition commands, especially the creation of B-tree indexes generated by the standard CREATE INDEX command. Other data definition commands that create tables or materialize views from queries are also enabled with parallelism, including CREATE TABLE .. AS, SELECT INTO, and CREATE MATERIALIZED VIEW.

Just-in-Time (JIT) compilation for expressions

PostgreSQL 11 offers support for Just-in-Time (JIT) compilation, which helps accelerate the execution of certain expressions during query execution. JIT expression compilation uses the LLVM project to boost the execution of expressions in WHERE clauses, target lists, aggregates, projections, and some other internal operations.

Other improvements

- ALTER TABLE .. ADD COLUMN .. DEFAULT .. with a non-NULL default no longer rewrites the whole table on execution, which offers a significant performance boost when running this command.
- Additional functionality has been added for working with window functions, including allowing RANGE to use PRECEDING/FOLLOWING, GROUPS, and frame exclusion.
- The keywords "quit" and "exit" have been added to the PostgreSQL command-line interface to make it easier to leave the command-line tool.

For more information, check out the official release notes.
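As a conceptual aside, the hash partitioning feature described above routes each row to one of N partitions by hashing its partition key. The following Python sketch only illustrates the routing idea; PostgreSQL uses its own internal hash functions and catalog machinery, not this code:

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Route a row to a partition index by hashing its partition key.

    MD5 is used here only because it is stable across runs and platforms;
    it is an illustrative stand-in for PostgreSQL's internal hash functions.
    """
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# Rows with the same key always land in the same partition, and keys
# spread roughly evenly across partitions as the key space grows.
rows = ["alice", "bob", "carol", "dave"]
placement = {key: partition_for(key, 4) for key in rows}
```

Because the mapping depends only on the key and the partition count, a lookup by key can go straight to the right partition, which is also how the planner's partition elimination avoids scanning the others.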
PostgreSQL group releases an update to 9.6.10, 9.5.14, 9.4.19, 9.3.24 How to perform data partitioning in PostgreSQL 10 How to write effective Stored Procedures in PostgreSQL
Sugandha Lahoti
17 Jan 2019
3 min read

It’s a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant

Yesterday, the Ninth Circuit Court of Appeals handed web accessibility a big win in a case against Domino’s Pizza. In 2016, a blind man filed a federal lawsuit against Domino’s stating that its website wasn’t compatible with standard screen reading software, which is designed to vocalize text and visual information. This did not allow him to use the pizza builder feature to personalize his order. Per his claim, Domino’s violated the Americans with Disabilities Act (ADA) and should make its online presence compatible with the Web Content Accessibility Guidelines. A blog post published by the National Retail Federation highlights that such lawsuits are on the rise, with 1,053 filed in the first half of last year compared to 814 in all of 2017. All of them voiced how there is a lack of clarity in how the ADA applies to the modern internet. [box type="shadow" align="" class="" width=""]The Web Content Accessibility Guidelines (WCAG) are developed through the W3C process with the goal of providing a single shared standard for web content accessibility. The WCAG documents explain how to make web content more accessible to people with disabilities.[/box] Earlier, a lower court ruled in favor of Domino’s and tossed the case out of court. However, the appeals court reversed that ruling, saying that the ADA covers websites and mobile applications, so the case is relevant. Domino’s argued that there was an absence of regulations specifically requiring web accessibility or referencing the Web Content Accessibility Guidelines. However, the appellate judges explained that the case was not about whether Domino’s failed to comply with WCAG. “While we understand why Domino’s wants DOJ to issue specific guidelines for website and app accessibility, the Constitution only requires that Domino’s receive fair notice of its legal duties, not a blueprint for compliance with its statutory obligations,” U.S. Circuit Judge John B. Owens wrote in a 25-page opinion.
The judges' panel said the case was relevant and sent it back to the district court, which will consider whether the Domino’s website and app comply with the ADA mandate to “provide the blind with effective communication and full and equal enjoyment of its products and services.” A Twitter thread by Jared Spool applauded the court’s decision to tie web accessibility to ADA penalties and discussed the long and short term implications of this news:

- The first implication will likely come when insurance companies raise rates for any company that doesn’t meet WCAG compliance. This will create a bigger market for compliance certification firms, as insurance companies will demand certification sign-off to give preferred premiums.
- This will likely push companies to require WCAG understanding from the designers they hire. In the short term, we’ll likely see a higher demand for UX professionals with specialized knowledge in accessibility. In the long term, this knowledge will be required for all UX professionals, and the demand for specialists will likely decrease as it becomes common practice.
- Toolkits, frameworks, and other standard platforms will build accessibility in. This will also reduce the demand for specialists, as it will become more difficult to build things that aren’t accessible. Good, accessible design will become the path of least resistance.

You may go through the full appeal from the United States District Court for the Central District of California.

EFF asks California Supreme Court to hear a case on government data accessibility and anonymization under CPRA
7 Web design trends and predictions for 2019
We discuss the key trends for web and app developers in 2019 [Podcast]
Sugandha Lahoti
12 Mar 2018
6 min read

How to improve interpretability of machine learning systems

Advances in machine learning have greatly improved products, processes, and research, and how people interact with computers. One thing lacking in many machine learning systems, however, is the ability to explain their predictions. The inability to give a proper explanation of results leads to end-users losing trust in the system, which ultimately acts as a barrier to the adoption of machine learning. Hence, along with the impressive results from machine learning, it is also important to understand why and where it works, and when it won’t. In this article, we will talk about some ways to increase machine learning interpretability and make predictions from machine learning models understandable.

3 interesting methods for interpreting machine learning predictions

According to Miller, interpretability is the degree to which a human can understand the cause of a decision. Interpretable predictions lead to better trust and provide insight into how the model may be improved. The kind of machine learning developments happening at present requires a lot of complex models, which lack interpretability. Simpler models (e.g. linear models), on the other hand, often give a correct interpretation of a prediction model’s output, but they are often less accurate than complex models, creating a tension between accuracy and interpretability. Complex models are less interpretable because their relationships generally cannot be concisely summarized. However, if we focus on a prediction made for a particular sample, we can describe the relationships more easily. Balancing the trade-off between model complexity and interpretability lies at the heart of research on developing interpretable deep learning and machine learning models. We will discuss a few methods that increase the interpretability of complex ML models by summarizing model behavior with respect to a single prediction.
- LIME, or Local Interpretable Model-Agnostic Explanations, is a method developed in the paper “Why should I trust you?” for interpreting individual model predictions by locally approximating the model around a given prediction. LIME uses two approaches to explain specific predictions: perturbation and linear approximation. With perturbation, LIME takes a prediction that requires explanation and systematically perturbs its inputs. These perturbed inputs become new, labeled training data for a simpler approximate model. LIME then performs a local linear approximation by fitting a linear model that describes the relationships between the (perturbed) inputs and outputs. Thus a simple linear model approximates the more complex, nonlinear function.
- DeepLIFT (Deep Learning Important FeaTures) is a recursive prediction explanation method for deep learning. It decomposes the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT assigns contribution scores based on the difference between the activation of each neuron and its ‘reference activation’. DeepLIFT can also reveal dependencies missed by other approaches by optionally giving separate consideration to positive and negative contributions.
- Layer-wise relevance propagation is another method for interpreting the predictions of deep learning models. It determines which features in a particular input vector contribute most strongly to a neural network’s output, and defines a set of constraints used to derive a number of different relevance propagation functions.

Thus we saw three different ways of summarizing model behavior with respect to a single prediction to increase model interpretability. Another important avenue for interpreting machine learning models is to understand (and rethink) generalization.
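The perturb-then-fit idea behind LIME can be sketched in a few lines of NumPy. This is a simplified illustration of the general technique rather than the actual `lime` library: perturb the input, weight the perturbed samples by proximity, and fit a weighted linear model whose coefficients act as local feature importances. The `black_box` function is a made-up stand-in for a complex model.

```python
import numpy as np

def black_box(X):
    # Hypothetical nonlinear model we want to explain locally.
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def lime_sketch(predict, x, n_samples=500, scale=0.1, seed=0):
    """Locally approximate `predict` around `x` with a weighted linear model."""
    rng = np.random.default_rng(seed)
    # 1. Perturbation: sample points in a small neighborhood of x.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = predict(Z)
    # 2. Weight each sample by its proximity to x (closer counts more).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale ** 2))
    # 3. Weighted least squares on centered features plus an intercept:
    #    the fitted coefficients are the local feature importances.
    A = np.hstack([Z - x, np.ones((n_samples, 1))])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept

x0 = np.array([0.0, 1.0])
importances = lime_sketch(black_box, x0)
# Near x0 the true local slopes are cos(0) = 1 and 2 * 1 = 2, so the
# sketch should recover approximately [1.0, 2.0].
```

The same recipe works for any `predict` function because nothing inside `lime_sketch` looks at the model's internals; that is what "model-agnostic" means in the method's name.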
What is generalization and how does it affect machine learning interpretability?

Machine learning algorithms are trained on datasets called training sets. During training, a model learns intrinsic patterns in the data and updates its internal parameters to better understand it. Once training is over, the model is evaluated on test data, predicting results based on what it has learned. In an ideal scenario, the model would always accurately predict results for the test data. In reality, the model identifies the relevant information in the training data but sometimes fails when presented with new data. This difference between “training error” and “test error” is called the generalization error. The ultimate aim of turning a machine learning system into a scalable product is generalization. Every task in ML aims for a generalized algorithm that behaves the same way across all kinds of distributions. The ability to distinguish models that generalize well from those that do not will not only help make ML models more interpretable, but might also lead to more principled and reliable model architecture design. According to conventional statistical theory, small generalization error is due either to properties of the model family or to the regularization techniques used during training. A recent ICLR 2017 paper, Understanding deep learning requires rethinking generalization, shows that current theoretical frameworks fail to explain the impressive results of deep learning approaches, and why understanding deep learning requires rethinking generalization. The authors support their findings through extensive systematic experiments.

Developing human understanding through visualizing ML models

Interpretability also means creating models that support human understanding of machine learning.
Human interpretation is enhanced when visual and interactive diagrams and figures are used to explain the results of ML models. This is why a tight interplay of UX design with machine learning is essential for increasing machine learning interpretability. Along the lines of human-centered machine learning, researchers at Google, OpenAI, DeepMind, YC Research and others have come up with Distill. This open science journal features articles with clear expositions of machine learning concepts using excellent interactive visualization tools. Most of these articles are aimed at understanding the inner workings of various machine learning techniques. Some of them include:

- An article on attention and Augmented Recurrent Neural Networks, which has a beautiful visualization of attention distribution in RNNs.
- Another on feature visualization, which talks about how neural networks build up their understanding of images.

Google has also launched the PAIR initiative to study and design the most effective ways for people to interact with AI systems. It helps researchers understand ML systems through work on interpretability and by expanding the community of developers. R2D3 is another website which provides an excellent visual introduction to machine learning. Facets is another tool for visualizing and understanding training datasets, providing a human-centered approach to ML engineering.

Conclusion

Human-centered machine learning is all about increasing the interpretability of ML systems and developing human understanding of them. It is about ML and AI systems understanding how humans reason, communicate and collaborate. As algorithms are used to make decisions in more areas of everyday life, it’s important for data scientists to train them thoughtfully to ensure the models make decisions for the right reasons.
As more progress is made in this area, ML systems will avoid commonsense errors, violated user expectations, and situations that can lead to conflict and harm, making such systems safer to use. As research continues, machines will soon be able to fully explain their decisions and their results in the most humane way possible.
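The train/test gap discussed in the generalization section above is easy to demonstrate numerically. In this toy NumPy sketch (my own illustration, not taken from the article or the ICLR paper), a high-degree polynomial fitted to a handful of noisy points reaches near-zero training error while its error on fresh data stays much higher:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = np.sin(2 * x) + rng.normal(0, 0.1, n)  # noisy ground truth
    return x, y

x_train, y_train = make_data(15)
x_test, y_test = make_data(200)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# A degree-12 polynomial has enough capacity to memorize the noise in
# 15 training points, so its training error is tiny...
overfit = np.polyfit(x_train, y_train, deg=12)
train_err = mse(overfit, x_train, y_train)
test_err = mse(overfit, x_test, y_test)
# ...while its error on unseen data is larger. That difference is the
# generalization error.
gap = test_err - train_err
```

A lower-degree fit would trade a little training error for a much smaller gap, which is exactly the capacity/regularization trade-off conventional statistical theory describes.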
Anonymous
29 Dec 2020
1 min read

Requesting an Update for My SQLSaturday.com Bid from Blog Posts - SQLServerCentral

Someone asked about the bid, and I have had no response, so I sent this. The post Requesting an Update for My SQLSaturday.com Bid appeared first on SQLServerCentral.