
Tech News


React in the streets, D3 in the sheets from ui.dev's RSS Feed

Matthew Emerick
28 Sep 2020
6 min read
Got a real spicy one for you today. Airbnb releases visx, Elder.js is a new Svelte framework, and CGI'd buttholes.

Airbnb releases visx 1.0

Visualizing a future where we can stay in Airbnbs again

Tony Robbins taught us to visualize success… Last week, Airbnb officially released visx 1.0, a collection of reusable, low-level visualization components that combine the powers of React and D3 (the JavaScript library, not, sadly, the Mighty Ducks). Airbnb has been using visx internally for over two years to "unify our visualization stack across the company." By "visualization stack", they definitely mean "all of our random charts and graphs", but that shouldn't stop you from listing yourself as a full-stack visualization engineer once you've played around with visx a few times.

Why tho?

The main sales pitch for visx is that it's low-level enough to be powerful but high-level enough to, well, be useful. It pulls this off by leveraging React for the UI layer and D3 for the under-the-hood mathy stuff. Said differently - React in the streets, D3 in the sheets.

The library itself features 30 separate packages of React visualization primitives and offers three main advantages compared to other visualization libraries:

- Smaller bundles: because visx is split into multiple packages.
- BYOL: Like your boyfriend around dinner time, visx is intentionally unopinionated. Use any animation library, state management solution, or CSS-in-JS tool you want - visx DGAF.
- Not a charting library: visx wants to teach you how to fish, not catch a fish for you. It's designed to be built on top of (see the sketch below).

The bottom line

Are data visualizations the hottest thing to work on? Not really. But are they important? Also not really. But the marketing and biz dev people at your company love them. And those esteemed colleagues will probably force you to help them create some visualizations in the future (if they haven't already). visx seems like a great way to help you pump those out faster, easier, and with greater design consistency. Plus, there's always a chance that the product you're working on could require some sort of visualizations (i.e. tooltips, gradients, patterns, etc.). visx can help with that too. So, thanks Airbnb. No word yet on if Airbnb is planning on charging their usual 3% service fee on top of an overinflated cleaning fee for usage of visx, but we'll keep you updated.
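To make "low-level primitives" concrete, here is a minimal bar chart sketch built from two visx packages. The data, dimensions, and exact props are illustrative assumptions based on the 1.0 docs, so double-check them against the visx documentation before lifting this into a project.

```jsx
import React from 'react';
import { Bar } from '@visx/shape';
import { scaleBand, scaleLinear } from '@visx/scale';

// Hypothetical data and dimensions, purely for illustration
const data = [
  { label: 'Mon', stays: 4 },
  { label: 'Tue', stays: 7 },
  { label: 'Wed', stays: 2 },
];
const width = 400;
const height = 200;

// visx wraps d3-scale in config-object constructors
const xScale = scaleBand({
  domain: data.map((d) => d.label),
  range: [0, width],
  padding: 0.2,
});
const yScale = scaleLinear({
  domain: [0, 8],
  range: [height, 0],
});

// You own the SVG; visx only supplies the primitives inside it
export default function StaysChart() {
  return (
    <svg width={width} height={height}>
      {data.map((d) => (
        <Bar
          key={d.label}
          x={xScale(d.label)}
          y={yScale(d.stays)}
          width={xScale.bandwidth()}
          height={height - yScale(d.stays)}
        />
      ))}
    </svg>
  );
}
```

Note what is missing: axes, tooltips, legends, and animation are all separate packages (or your own code), which is exactly the "not a charting library" trade-off described above.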
Elder.js 1.0 - a new Svelte framework

Whoever Svelte it, dealt it. Respect your generators…

Elder.js 1.0 is a new, opinionated Svelte framework and static site generator that is very SEO-focused. It was created by Nick Reese to solve some of the unique challenges that come with building flagship SEO sites with 10-100k+ pages, like his site elderguide.com.

Quick Svelte review: Svelte is a JavaScript library way of life that was first released ~4 years ago. It compiles all your code to vanilla JS at build time with fewer dependencies and no virtual DOM, and its syntax is known for being pretty simple and readable.

So, what's unique about Elder.js?

- Partial hydration: It hydrates just the parts of the client that need to be interactive, allowing you to significantly reduce your payloads and still maintain full control over component lazy-loading, preloading, and eager-loading.
- Hooks, shortcodes, and plugins: You can customize the framework with hooks that are designed to be "modular, sharable, and easily bundled in to Elder.js plugins for common use cases."
- Straightforward data flow: Associating a data function in your route.js gives you complete control over how you fetch, prepare, and manipulate data before sending it to your Svelte template.

These features (plus its tiny bundle sizes) should make Elder.js a faster and simpler alternative to Sapper (the default Svelte framework) for a lot of use cases. Sapper is still probably the way to go if you're building a full-fledged Svelte app, but Elder.js seems pretty awesome for content-heavy Svelte sites.

The bottom line

We're super interested in who will lead Elder.js's $10 million seed round. That's how this works, right?

JS Quiz - Answer Below

Why does this code work?

```js
const friends = ['Alex', 'AB', 'Mikenzi']

friends.hasOwnProperty('push') // false
```

Specifically, why does friends.hasOwnProperty('push') work even though friends doesn't have a hasOwnProperty property and neither does Array.prototype?

Cool bits

- Vime is an open-source media player that's customizable, extensible, and framework-agnostic.
- The React Core Team wrote about the new JSX transform in React 17. Speaking of React 17, Happy 3rd Birthday React 16 😅.
- Nathan wrote an in-depth post about how to understand React rendering.
- Urlcat is a cool JavaScript library that helps you build URLs faster and avoid common mistakes. Is it cooler than the live-action CATS movie tho? Only one of those has CGI'd buttholes, so you tell us. (My search history is getting real weird writing this newsletter.)
- Everyone's favorite Googler Addy Osmani wrote about visualizing data structures using the VSCode Debug Visualizer.
- Smolpxl is a JavaScript library for creating retro, pixelated games. There's some strong Miniclip-in-2006 vibes in this one.
- Lea Verou wrote a great article about the failed promise of Web Components. "The failed promise" sounds like a mix between a T-Swift song and one of my middle school journal entries, but sometimes web development requires strong language, ok?
- Billboard.js released v2.1 because I guess this is now Chart Library Week 2020.

JS Quiz - Answer

```js
const friends = ['Alex', 'AB', 'Mikenzi']

friends.hasOwnProperty('push') // false
```

As mentioned earlier, if you look at Array.prototype, it doesn't have a hasOwnProperty method. How, then, does the friends array have access to hasOwnProperty? The reason is that the Array class extends the Object class. So when the JavaScript interpreter sees that friends doesn't have a hasOwnProperty property of its own, it checks whether Array.prototype does. When Array.prototype doesn't either, it checks Object.prototype, finds hasOwnProperty there, and invokes it.

```js
const friends = ['Alex', 'AB', 'Mikenzi']

console.log(Object.prototype)
/*
  constructor: ƒ Object()
  hasOwnProperty: ƒ hasOwnProperty()
  isPrototypeOf: ƒ isPrototypeOf()
  propertyIsEnumerable: ƒ propertyIsEnumerable()
  toLocaleString: ƒ toLocaleString()
  toString: ƒ toString()
  valueOf: ƒ valueOf()
*/

friends instanceof Array // true
friends instanceof Object // true
friends.hasOwnProperty('push') // false
```
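If you want to verify that chain yourself, here is a short console sketch using only plain JavaScript; nothing here is specific to any library.

```js
const friends = ['Alex', 'AB', 'Mikenzi']

// Walk the prototype chain explicitly
Object.getPrototypeOf(friends) === Array.prototype          // true
Object.getPrototypeOf(Array.prototype) === Object.prototype // true
Object.getPrototypeOf(Object.prototype)                     // null - end of the chain

// 'push' is an own property of Array.prototype, not of the array itself
Array.prototype.hasOwnProperty('push') // true
friends.hasOwnProperty('push')         // false
```

The lookup that makes friends.hasOwnProperty(...) callable is exactly this walk: friends, then Array.prototype, then Object.prototype, where hasOwnProperty is finally found.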


Google Podcasts is transcribing full podcast episodes for improving search results

Bhagyashree R
28 Mar 2019
2 min read
On Tuesday, Android Police reported that Google Podcasts is automatically transcribing episodes. It is using these transcripts as metadata to help users find the podcasts they want to listen to, even if they don't know the title or when it was published. Though this is only coming to light now, Google shared its plan to use transcripts to improve search results even before the app launched. In an interview with Pacific Content, Zack Reneau-Wedeen, Google Podcasts product manager, said that Google could "transcribe the podcast and use that to understand more details about the podcast, including when they are discussing different topics in the episode."

This is not a user-facing feature, but instead works in the background. You can see the transcription of these podcasts in the web page source of the Google Podcasts web portal. After getting a hint from a user, Android Police searched for "Corbin dabbing port" instead of Corbin Davenport, a writer for Android Police. Sure enough, the app's search engine showed Episode 312 of the Android Police Podcast, his podcast, as the top result.

Source: Android Police

The transcription is enabled by Google's Cloud Speech-to-Text technology (sketched below). With transcriptions of such a huge number of podcasts, Google can include timestamps, index the contents, and make the text easily searchable. This will also allow Google to actually "understand" what is being discussed in the podcasts without having to rely solely on the not-so-detailed notes and descriptions given by the podcasters. This could prove quite helpful when users don't remember much about a show other than a quote or an interesting subject, making search frictionless.

As a user-facing feature, this could be beneficial for both listeners and creators. "It would be great if they would surface this as feature/benefit to both the creator and the listener. It would be amazing to be able to timestamp, tag, clip, collect and share all the amazing moments I've found in podcasts over the years," said a Twitter user.

Read the full story on Android Police.
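For a rough sense of what the underlying transcription API looks like, here is a minimal sketch using Cloud Speech-to-Text's official Node.js client. This illustrates the public API, not Google Podcasts' internal pipeline; the bucket URI and audio config values are placeholders.

```js
// npm install @google-cloud/speech
const speech = require('@google-cloud/speech');

async function transcribeEpisode(gcsUri) {
  const client = new speech.SpeechClient();

  // Long-running recognition suits audio as long as a podcast episode
  const [operation] = await client.longRunningRecognize({
    audio: { uri: gcsUri }, // e.g. 'gs://some-bucket/episode.wav' (placeholder)
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  });

  // Resolves when the transcription job finishes
  const [response] = await operation.promise();
  return response.results
    .map((result) => result.alternatives[0].transcript)
    .join('\n');
}
```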
- Google announces the general availability of AMP for email, faces serious backlash from users
- European Union fined Google 1.49 billion euros for antitrust violations in online advertising
- Google announces Stadia, a cloud-based game streaming service, at GDC 2019


Flutter challenges Electron, soon to release a desktop client to accelerate mobile development

Bhagyashree R
03 Dec 2018
3 min read
On Saturday, the Flutter team announced that, as a challenge to Electron, they will soon be releasing a native desktop client to accelerate mobile development. The Flutter native desktop client will come with support for resizing the emulator during runtime, using assets from your PC, better RAM usage, and more.

Flutter is Google's open source mobile app SDK, which enables developers to write once and deploy natively to different platforms such as Android, iOS, Windows, Mac, and Linux. Additionally, they can share the business logic to the web using AngularDart.

Here's what Flutter for desktop brings in:

Resizable emulator during runtime
To check how your layout looks on different screen sizes, you normally need to create different emulators, which is quite cumbersome. To solve this issue, the Flutter desktop client provides a resizable emulator.

Use assets saved on your PC
When working with apps that interact with assets on the phone, developers first have to move all the testing files to the emulator or the device. With the desktop client, you can simply pick the file you want with your native file picker. Additionally, you don't have to make any changes to the code, as the desktop implementation uses the same method as the mobile implementation.

Hot reload and debugging
Hot reload and debugging allow quick experimenting, building UIs, adding new features, and fixing bugs. The desktop client supports these capabilities for better productivity.

Better RAM usage
The Android emulator consumes up to 1 GB of RAM, and usage gets worse when you are also running IntelliJ and the ever-RAM-hungry Chrome. Since the embedder runs natively, there is no need for the Android emulator at all.

Universally usable widgets
Most of the widgets you create, such as buttons and loading indicators, can be used universally. Widgets that require a platform-specific look can be encapsulated pretty easily by checking the TargetPlatform property.

Pages and plugins
Pages differ in layout, depending on the platform and screen sizes, but not in functionality. You will be able to easily create accurate layouts for each platform with the PageLayoutWidget. As for plugins, you do not have to make any changes to the Flutter code when using a plugin that also supports the desktop embedder.

The Flutter desktop client is still in alpha, which means there will be more changes in the future. Read the official announcement on Medium.

- Google Flutter moves out of beta with release preview 1
- Google Dart 2.1 released with improved performance and usability
- JavaScript mobile frameworks comparison: React Native vs Ionic vs NativeScript


CNCF announces Helm 3, a Kubernetes package manager and tool to manage charts and libraries

Fatema Patrawala
14 Nov 2019
3 min read
The Cloud Native Computing Foundation (CNCF), which builds sustainable ecosystems for cloud native software, yesterday announced the stable release of Helm 3. Helm is a package manager for Kubernetes and a tool for managing charts of pre-configured Kubernetes resources.

"Helm is one of our fastest-growing projects in contributors and users contributing back to the project," said Chris Aniszczyk, CTO, CNCF. "Helm is a powerful tool for all Kubernetes users to streamline deployments, and we're impressed by the progress the community has made with this release in growing their community."

As per the team, the internal implementation of Helm 3 has changed considerably from Helm 2. The most important change is the removal of Tiller, a service that communicated with the Kubernetes API to manage Helm packages. There are also improvements to chart repositories, release management, security, and library charts.

Helm uses a packaging format called charts: collections of files describing a related set of Kubernetes resources. These charts can be packaged into versioned archives to be deployed. Helm 2 defined a workflow for creating, installing, and managing these charts. Helm 3 builds upon that workflow, changing the underlying infrastructure to reflect the needs of the community as they change and evolve. In this release, the Helm maintainers incorporated feedback and requests from the community to better address the needs of Kubernetes users and the broad cloud native ecosystem.

Helm 3 is ready for public deployment

Last week, third-party security firm Cure53 completed its open source security audit of Helm 3, noting Helm's mature focus on security and concluding that Helm 3 is "recommended for public deployment." According to the report, "in light of the findings stemming from this CNCF-funded project, Cure53 can only state that the Helm project projects the impression of being highly mature. This verdict is driven by a number of different factors… and essentially means that Helm can be recommended for public deployment, particularly when properly configured and secured in accordance to recommendations specified by the development team."

"When we built Helm, we set out to create a tool to serve as an 'on-ramp' to Kubernetes. With Helm 3, we have really accomplished that," said Matt Fisher, the Helm 3 release manager. "Our goal has always been to make it easier for the Kubernetes user to create, share, and run production-grade workloads. The core maintainers are really excited to hit this major milestone, and we look forward to hearing how the community is using Helm 3."

Helm 3 is a joint community effort, with core maintainers from organizations including Microsoft, Samsung SDS, IBM, and Blood Orange. As per the team, the next phase of Helm's development will bring new features targeted toward stability and enhancements to existing features. Features on the roadmap include enhanced functionality for helm test, improvements to Helm's OCI integration, and enhanced functionality for the Go client libraries.

To know more about this news, read the official announcement from the Cloud Native Computing Foundation.

- StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
- Microsoft launches Open Application Model (OAM) and Dapr to ease development in Kubernetes and microservices
- An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
- Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
- StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security


Electron 7.0 releases in beta with Windows on Arm 64-bit, faster IPC methods, nativeTheme API and more

Fatema Patrawala
24 Oct 2019
3 min read
Last week the team at Electron announced the release of Electron 7.0 in beta. It includes upgrades to Chromium 78, V8 7.8, and Node.js 12.8.1, and adds a Windows on Arm (64-bit) release, faster IPC methods, a new nativeTheme API, and much more. This release is published to npm under the beta tag and can be installed via npm install electron@beta, or npm i electron@7.0.0-beta.7. It is packed with upgrades, fixes, and new features.

Notable changes in Electron 7.0

- Stack upgrades: Electron 7.0 is built on Chromium 78, V8 7.8, and Node.js 12.8.1.
- Windows on Arm (64-bit) is now supported.
- The team has added ipcRenderer.invoke() and ipcMain.handle() for asynchronous request/response-style IPC. These are strongly recommended over the remote module (see the sketch below).
- A new nativeTheme API reads and responds to changes in the OS's theme and color scheme.
- The release switches to a new TypeScript definitions generator, which produces more precise definitions files (.d.ts). Electron previously used Doc Linter and Doc Parser for this, but that approach had a few issues, so the team moved to the new generator to improve the definition files without losing any information from the docs.

Other breaking changes

The team has removed deprecated APIs in this release:

- Callback-based versions of functions that now use Promises
- Tray.setHighlightMode() (macOS)
- app.enableMixedSandbox()
- app.getApplicationMenu() and app.setApplicationMenu()
- powerMonitor.querySystemIdleState() and powerMonitor.querySystemIdleTime()
- webFrame.setIsolatedWorldContentSecurityPolicy(), webFrame.setIsolatedWorldHumanReadableName(), and webFrame.setIsolatedWorldSecurityOrigin()

Additionally, Session.clearAuthCache() no longer allows filtering the cleared cache entries. Native interfaces on macOS (menus, dialogs, etc.) now automatically match the dark mode setting on the user's machine. The team has updated the electron module to use @electron/get, and Node 8 is now the minimum supported Node version. The electron.asar file no longer exists; any packaging scripts that depend on its existence should be updated by developers.

The team also announced that Electron 4.x.y has reached end-of-support as per the project's support policy. Developers and applications are encouraged to upgrade to a newer version of Electron.

To know more about this release, check out the Electron 7.0 GitHub page and the official blog post.
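Here is what the new request/response IPC looks like in practice - a minimal sketch in which the channel name and payload are made up for illustration:

```js
// main process
const { ipcMain } = require('electron');

ipcMain.handle('app:version-info', async () => {
  // Any async work can happen here before replying
  return {
    electron: process.versions.electron,
    node: process.versions.node,
  };
});

// renderer process
const { ipcRenderer } = require('electron');

async function logVersions() {
  const info = await ipcRenderer.invoke('app:version-info');
  console.log(`Electron ${info.electron}, Node ${info.node}`);
}
```

Compared to the remote module, this keeps blocking synchronous calls out of the renderer: the renderer simply awaits a Promise while the main process does the work.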
- Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
- Electron 5.0 ships with new versions of Chromium, V8, and Node.js
- The Electron team publicly shares the release timeline for Electron 5.0


Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

Bhagyashree R
25 Mar 2019
2 min read
Yesterday, Microsoft released Pyright, a new static type checker for Python, to fill in the gaps in existing Python type checkers like mypy. Currently, the type checker supports Python 3.0 and newer versions.

What type checking features does Pyright bring in?

It comes with support for PEP 484 (type hints, including generics), PEP 526 (syntax for variable annotations), and PEP 544 (structural subtyping). It supports type inference for function return values, instance variables, class variables, and globals, and provides smart type constraints that can understand conditional code flow constructs like if/else statements.

Increased speed
Pyright shows 5x speed compared to mypy and other existing type checkers written in Python. It was built with large source bases in mind and can perform incremental updates when files are modified.

No need to set up a Python environment
Since Pyright is written in TypeScript and runs within Node, you do not need to set up a Python environment or import third-party packages for installation. This proves really helpful when using the VS Code editor, which has Node as its extension runtime.

Flexible configurability
Pyright gives users granular control over settings. You can specify different execution environments for different subsets of a source base; for each environment, you can specify different PYTHONPATH settings, Python versions, and platform targets.

To know more about Pyright, check out its GitHub repository.

- Debugging and Profiling Python Scripts [Tutorial]
- Python 3.8 alpha 2 is now available for testing
- Core CPython developer unveils a new project that can analyze his phone's 'silent connections'

Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more

Bhagyashree R
01 Aug 2019
3 min read
On Tuesday, the team behind Electron, the web framework for building desktop apps, announced the release of Electron 6.0. It brings further improvements to Promise support, native Touch ID authentication support for macOS, native emoji and color picker methods, and more. The release upgrades to Chromium 76, Node.js 12.4.0, and V8 7.6.

https://twitter.com/electronjs/status/1156273653635407872

Promisification of functions continues

Starting from Electron 5.0, the team introduced a process called "promisification", in which callback-based functions are converted to return Promises. In Electron 6.0, the team has converted 26 functions to return Promises while still supporting callback-based invocation (a sketch follows below). Among these "promisified" functions are contentTracing.getCategories(), cookies.flushStore(), dialog.showCertificateTrustDialog(), and more.

Three new variants of the Helper app

The hardened runtime was introduced to prevent exploits like code injection, DLL hijacking, and process memory space tampering. To serve that purpose, however, it restricts things like writable-executable memory and loading code signed by a different Team ID. If your app relies on such functionality, you can add an entitlement to disable an individual protection.

To enable a hardened runtime in an Electron app, special code signing entitlements were granted to Electron Helper. Starting from Electron 6.0, three new variants of the Helper app are added to keep these granted entitlements scoped to the process types that require them: Electron Helper (Renderer).app, Electron Helper (GPU).app, and Electron Helper (Plugin).app.

Developers using electron-osx-sign to codesign their Electron app do not have to make any changes to their build logic. But if you are using custom scripts instead, you will need to ensure that the three Helper apps are correctly codesigned. To correctly package your application with these new helpers, use electron-packager@14.0.4 or higher.

Miscellaneous changes in Electron 6.0

- Native Touch ID authentication support for macOS.
- Native emoji and color picker methods for Windows and macOS.
- The chrome.runtime.getManifest API for Chrome extensions is added; it returns details about the app or extension from the manifest.
- The <webview>.getWebContentsId() method is added, which allows getting the WebContents ID of WebViews when the remote module is disabled.
- Support is added for the Chrome extension content script option all_frames. This option allows an extension to specify whether JS and CSS files should be injected into all frames or only into the topmost frame in a tab.
- The team has laid the groundwork for a future requirement that all native Node modules loaded in the renderer process be either N-API or Context Aware. This is being done for faster performance, better security, and reduced maintenance workload.

Along with the release announcement, the team also announced the end of life of Electron 3.x.y and recommended upgrading to a newer version of Electron.

To know all the new features in Electron 6.0, check out the official announcement.
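As a concrete example of the promisified style, here is a sketch using cookies.flushStore(), one of the functions the release notes list as converted. The callback form is shown for contrast; per the announcement, it remains supported during the transition.

```js
const { session } = require('electron');

// Electron 6: the call returns a Promise...
session.defaultSession.cookies.flushStore()
  .then(() => console.log('cookie store written to disk'));

// ...while callback-based invocation is still supported for now
session.defaultSession.cookies.flushStore(() => {
  console.log('cookie store written to disk');
});
```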
- Electron 5.0 ships with new versions of Chromium, V8, and Node.js
- The Electron team publicly shares the release timeline for Electron 5.0
- How to create a desktop application with Electron [Tutorial]


PyTorch.org revamps for PyTorch 1.0 with design changes and added static graph support

Natasha Mathur
21 Sep 2018
2 min read
The PyTorch team updated their official website, PyTorch.org, for PyTorch 1.0 yesterday. The update comprises minor changes to the overall look and feel of the website. In addition, more information has been added under the Tutorials section on converting your PyTorch models to a static graph.

PyTorch is a Python-based scientific computing package which uses the power of graphics processing units. It is also one of the preferred deep learning research platforms, built to offer maximum flexibility and speed.

Key updates

Design changes
The layout of the webpage is still the same, but color changes have been made and additional tabs are included at the top of the page.

Revamped PyTorch.org

Previously, there were only five tabs: Get Started, About, Support, Discuss, and Docs. Now there are eight tabs: Get Started, Features, Ecosystem, Blog, Tutorials, Docs, Resources, and GitHub.

Older PyTorch.org

Updated tutorials
With the new Tutorials tab, additional information has been provided for users to convert their models into a static graph, a feature in the upcoming PyTorch 1.0 version.

Added static graph support
One of the main differences between TensorFlow and PyTorch is that TensorFlow uses static computational graphs while PyTorch uses dynamic computational graphs. In TensorFlow, you first set up the computational graph, then execute the same graph many times. There is now an additional section under Tutorials on static graphs. Its example uses basic TensorFlow operations to set up a computational graph, then executes the graph many times to actually train a fully-connected ReLU network.

For more details on the changes, visit the official PyTorch website.

- What is PyTorch and how does it work?
- Can a production ready PyTorch 1.0 give TensorFlow a tough time?
- PyTorch 0.3.0 releases, ending stochastic functions


Automobile Repair Self-Diagnosis and Traffic Light Management Enabled by AI from AI Trends

Matthew Emerick
15 Oct 2020
5 min read
By AI Trends Staff

Looking inside and outside the vehicle, AI is being applied to the self-diagnosis of automobiles and to the connection of vehicles to traffic infrastructure.

A data scientist at BMW Group in Munich, while working on his PhD, created a system for self-diagnosis called the Automated Damage Assessment Service, according to an account in Mirage. Milan Koch was completing his studies at the Leiden Institute of Advanced Computer Science in the Netherlands when he got the idea. "It should be a nice experience for customers," he stated.

The system gathers data over time from sensors in different parts of the car. "From scratch, we have developed a service idea that is about detecting damaged parts from low speed accidents," Koch stated. "The car itself is able to detect the parts that are broken and can estimate the costs and the time of the repair."

Milan Koch, data scientist, BMW Group, Munich

Koch developed and compared different multivariate time series methods, based on machine learning, deep learning, and state-of-the-art automated machine learning (AutoML) models. He tested different levels of complexity to find the best way to solve the time series problems. Two of the AutoML methods and his hand-crafted machine learning pipeline showed the best results. The system may have application to other multivariate time series problems, where multiple time-dependent variables must be considered, outside the automotive field.

Koch collaborated with researchers from the Leiden University Medical Center (LUMC) to use his hand-crafted pipeline to analyze electroencephalography (EEG) data. Koch stated, "We predicted the cognition of patients based on EEG data, because an accurate assessment of cognitive function is required during the screening process for Deep Brain Stimulation (DBS) surgery. Patients with advanced cognitive deterioration are considered suboptimal candidates for DBS, as cognitive function may deteriorate after surgery. However, cognitive function is sometimes difficult to assess accurately, and analysis of EEG patterns may provide additional biomarkers. Our machine learning pipeline was well suited to apply to this problem."

He added, "We developed algorithms for the automotive domain and initially we didn't have the intention to apply it to the medical domain, but it worked out really well."

His models are now also applied to electromyography (EMG) data, to distinguish between people with a motor disease and healthy people. Koch intends to continue his work at BMW Group, where he will focus on customer-oriented services, predictive maintenance applications, and optimization of vehicle diagnostics.

DOE Grant to Research Traffic Management Delays Aims to Reduce Emissions

Getting automobiles to talk to the traffic management infrastructure is the goal of research at the University of Tennessee at Chattanooga, which has been awarded $1.89 million from the US Department of Energy to create a new model for traffic intersections that would reduce energy consumption.

The UTC Center for Urban Informatics and Progress (CUIP) will leverage its existing "smart corridor" to accommodate the new research. The smart corridor is a 1.25-mile span on a main artery in downtown Chattanooga, used as a test bed for research into smart city development and connected vehicles in a real-world environment.

"This project is a huge opportunity for us," stated Dr. Mina Sartipi, CUIP Director and principal investigator, in a press release. "Collaborating on a project that is future-oriented, novel, and full of potential is exciting. This work will contribute to the existing body of literature and lead the way for future research." UTC is collaborating with the University of Pittsburgh, the Georgia Institute of Technology, the Oak Ridge National Laboratory, and the City of Chattanooga on the project.

Dr. Mina Sartipi, Director, UTC Center for Urban Informatics and Progress

In the grant proposal for the DOE, the research team noted that the US transportation sector accounted for more than 69 percent of petroleum consumption and more than 37 percent of the country's CO2 emissions. An earlier National Traffic Signal Report Card found that inefficient traffic signals contribute to 295 million vehicle-hours of traffic delay, making up to 10 percent of all traffic-related delays.

The project intends to leverage the capabilities of connected vehicles and infrastructure to optimize and manage traffic flow. While adaptive traffic control systems (ATCS) have been in use for a half century to improve mobility and traffic efficiency, they were not designed to address fuel consumption and emissions. Inefficient traffic systems increase idling time and stop-and-go traffic. The National Transportation Operations Coalition has graded the state of the nation's traffic signals as D+.

"The next step in the evolution [of intelligent transportation systems] is the merging of these systems through AI," noted Aleksandar Stevanovic, associate professor of civil and environmental engineering at Pitt's Swanson School of Engineering and director of the Pittsburgh Intelligent Transportation Systems (PITTS) Lab. "Creation of such a system, especially for dense urban corridors and sprawling exurbs, can greatly improve energy and sustainability impacts. This is critical as our transportation portfolio will continue to have a heavy reliance on gasoline-powered vehicles for some time."

The goal of the three-year project is to develop a dynamic feedback Ecological Automotive Traffic Control System (Eco-ATCS), which reduces fuel consumption and greenhouse gases while maintaining a highly operable and safe transportation environment. The integration of AI will allow additional infrastructure enhancements, including emergency vehicle preemption, transit signal priority, and pedestrian safety. The ultimate goal is to reduce corridor-level fuel consumption by 20 percent.

Read the source articles and information in Mirage and in a press release from the UTC Center for Urban Informatics and Progress.


Breaking AI Workflow Into Stages Reveals Investment Opportunities from AI Trends

Matthew Emerick
08 Oct 2020
6 min read
By John P. Desmond, AI Trends Editor

An infrastructure-first approach to AI investing has the potential to yield greater returns with a lower risk profile, suggests a recent account in Forbes. To identify the technologies supporting an AI system, deconstruct the workflow into two steps as a starting point: training and inference.

Basil Alomary, MBA candidate at Columbia Business School and MBA Associate at Primary Venture Partners

"Training is the process by which a framework for deep-learning is applied to a dataset," states Basil Alomary, author of the Forbes account. An MBA candidate at Columbia Business School and MBA Associate at Primary Venture Partners, his background and experience are in early-stage SaaS ventures, as an operator and an investor. "That data needs to be relevant, large enough, and well-labeled to ensure that the system is being trained appropriately. Also, the machine learning models being created need to be validated, to avoid overfitting to the training data and to maintain a level of generalizability. The inference portion is the application of this model and the ongoing monitoring to identify its efficacy."

He identifies these stages in the AI/ML development lifecycle: data acquisition, data preparation, training, inference, and implementation. The stages of acquisition, preparation, and implementation have arguably attracted the least attention from investors.

Where to get the data for training the models is a chief concern. If a company is old enough to have historical customer data, that can be helpful. This approach should be inexpensive, but the data needs to be clean and complete enough to support whatever decisions it informs. Companies without the option of historical data can try publicly-available datasets, or they can buy the data directly. A new class of suppliers is emerging that focuses primarily on selling clean, well-labeled datasets specifically for machine learning applications.

One such startup is Narrative, based in New York City. The company sells data tailored to the client's use case. The OpenML and Amazon Datasets have marketplace characteristics but are entirely open source, which is limiting for those who seek to monetize their own assets.

Nick Jordan, CEO and founder, Narrative

"Essentially, the idea was to take the best parts of the e-commerce and search models and apply that to a non-consumer offering to find, discover and ultimately buy data," stated Narrative founder and CEO Nick Jordan in an account in TechCrunch. "The premise is to make it as easy to buy data as it is to buy stuff online."

In a demonstration, Jordan showed how a marketer could browse and search for data using the Narrative tools. The marketer could select the mobile IDs of people who have the Uber Driver app installed on their phone, or the Zoom app, at a price that is often subscription-based. The data selection is added to the shopping cart and checked out, like any online transaction.

Founded in 2016, Narrative vets each data seller it admits to its market, working to understand how the data is collected, its quality, and whether it could be useful in a regulated environment. Narrative does not attempt to grade the quality of the data. "Data quality is in the eye of the beholder," Jordan stated. Buyers are able to conduct their own research into data quality if so desired. Narrative is working on building a marketplace of third-party applications, which could include scoring of datasets.

Data preparation is critical to making a machine learning model effective. Raw data needs to be preprocessed so that machine learning algorithms can produce a model, a structural description of the data. In an image database, for example, the images may have to be labeled, which can be labor-intensive.

Automating Data Preparation is an Opportunity Area

Platforms are emerging to support the process of data preparation with a layer of automation that seeks to accelerate the process. Startup Labelbox recently raised a $25 million Series B financing round to help grow its data labeling platform for AI model training, according to a recent account in VentureBeat.

Founded in 2018 in San Francisco, Labelbox aims to be the data platform that acts as a central hub for data science teams to coordinate with dispersed labeling teams. In April, the company won a contract with the Department of Defense for the US Air Force AFWERX program, which is building out technology partnerships.

Manu Sharma, CEO and co-founder, Labelbox

A press release issued by Labelbox on the contract award contained some history of the company. "I grew up in a poor family, with limited opportunities and little infrastructure," stated Manu Sharma, CEO and one of Labelbox's co-founders, who was raised in a village in India near the Himalayas. He said that opportunities afforded by the US have helped him achieve more success in ten years than multiple generations of his family back home. "We've made a principled decision to work with the government and support the American system," he stated.

The Labelbox platform supports supervised learning, a branch of AI that uses labeled data to train algorithms to recognize patterns in images, audio, video, or text. The platform enables collaboration among team members as well as rework, quality assurance, model evaluation, audit trails, and model-assisted labeling.

"Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place," stated Sharma. "It's the foundational infrastructure for customers to build their machine learning pipeline."

Deploying an AI model into the real world requires ongoing evaluation and a data pipeline that can handle continued training, scaling, and managing computing resources, suggests Alomary in Forbes. An example product is Amazon's SageMaker, which supports deployment; Amazon offers a managed service that includes human intervention to monitor deployed models.

DataRobot of Boston saw the opportunity in 2012 to develop a platform for building, deploying, and managing machine learning models. The company raised a Series E round of $206 million in September and now has $431 million in venture funding to date, according to Crunchbase.

Unfortunately, DataRobot in March had to shrink its workforce by an undisclosed number of people, according to an account in BOSTINNO. The company employed 250 full-time employees as of October 2019.

DataRobot announced recently that it is partnering with Amazon Web Services to provide its enterprise AI platform free of charge to anyone using it to help with the coronavirus response effort.

Read the source articles and releases in Forbes, TechCrunch, VentureBeat, and BOSTINNO.

Nativescript 4.1 has been released

Sugandha Lahoti
11 Jun 2018
2 min read
NativeScript version 4.1 has been released with multiple performance improvements and major highlights such as support for Angular 6, faster app launch times, new UI scenarios, and much more.

Angular 6 support
The NativeScript 4.1 Angular integration has been updated with support for Angular 6. And since Angular 6 requires webpack 4, developers have to update to nativescript-dev-webpack 0.12.0 as well.

V8 engine now upgraded to v6.6
The V8 engine is now upgraded to version 6.6. The new engine adds a major performance boost, new JavaScript language features, code caching after execution, removal of AST numbering, and multiple asynchronous performance improvements.

Faster Android startup time
The upgrade to V8 6.6 and webpack 4 also brings faster app launch times on Android, making it on par with iOS. Depending on the device and the specific app, Android app startup time is now between 800ms on high-end devices and 1.8s on older devices. This is an improvement of almost 50% compared to the startup time in NativeScript 4.0!

Improvements to user interfaces
NativeScript 4.1 adds augmented reality support for ARKit on iOS by adding a built-in vector type.

Multiple iOS simulators
NativeScript 4.1 can now run applications simultaneously on multiple iOS simulators, iOS devices, Android emulators, and Android devices. Developers can LiveSync changes on an iPhone X, iPhone 8, iPad 2, etc. and immediately see how the application looks and behaves on all of them.

Navigation in a modal view
There are updates to navigation in modal views as well. Before version 4.1, for navigation inside a modal view, you had to remove the implicit wrapper and specify the root element of the modal view. With the new changes, the first element of the modal view becomes its root view, and developers can place a page router outlet there for proper navigation.

LayoutChanged event
NativeScript 4.1 introduces a layoutChanged event that fires when the layout bounds of a view change due to layout processing. The correct view sizes are obtained with getActualSize(); proper view sizes are especially useful when dealing with size-dependent animations (see the sketch below).

The full list of bug fixes and other updates is available in the changelog. You can read more about the release on the NativeScript blog.
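A minimal sketch of wiring up the new event, assuming `view` is any NativeScript View instance you already hold a reference to:

```js
// `view` is a NativeScript View, e.g. obtained from a page's loaded event
view.on('layoutChanged', () => {
  // getActualSize() reports the view's current width and height
  // in device-independent pixels after layout has run
  const size = view.getActualSize();
  console.log(`View laid out at ${size.width} x ${size.height}`);
});
```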
- NativeScript: What is it, and how to set it up
- How to integrate Firebase with NativeScript for cross-platform app development


What is a full-stack developer?

Richard Gall
28 Mar 2018
3 min read
Full-stack developer has been named one of the most common developer roles according to the latest Stack Overflow survey. But what exactly does a full-stack developer do, and what does a typical full-stack developer job description look like?

Full-stack developers bridge the gap between the front end and back end

Full-stack developers deal with the full spectrum of development, from back end to front end. They are hugely versatile technical professionals, and because they work on both the client and server side, they need to be able to learn new frameworks, libraries, and tools very quickly.

There's a common misconception that full-stack developers are experts in every area of web development. They're not - they're often generalists with broad knowledge that doesn't necessarily run deep. However, this lack of depth isn't necessarily a disadvantage. Because they have experience in both back end and front end development, they know how to provide solutions for working with both. Most importantly, as Agile becomes integral to modern development practices, developers who can properly understand and move between front and back ends are vital. From an economic perspective it also makes sense - with a team of full-stack developers, you have a team of people able to perform multiple roles.

What a full-stack developer job description looks like

Every full-stack developer job description looks different. The role is continually evolving, and different organizations will require different skills. Here are some of the things you're likely to see:

- HTML / CSS
- JavaScript
- JavaScript frameworks like Angular or React
- Experience of UI and API design
- SQL and experience with other databases
- At least one backend programming language (Python, Ruby, Java, etc.)
- Backend framework experience (for example, ASP.NET Core or Flask)
- Build and release management or automation tools such as Jenkins
- Virtualization and containerization knowledge (and today possibly serverless too)

Essentially, it's up to the individual to build upon this knowledge by learning new technologies in order to become an expert full-stack developer.

Full-stack developers need soft skills

Soft skills are also important for full-stack developers. Being able to communicate effectively and manage projects and stakeholders is essential. Knowledge of Agile and Scrum is always in demand, and being collaborative is vital, as software development is never really a solitary exercise. Similarly, commercial awareness is highly valued - a full-stack developer who understands they are solving business problems, not just software problems, is invaluable.


Intel acquires Vertex.ai to join it under their artificial intelligence unit

Prasad Ramesh
17 Aug 2018
2 min read
After acquiring Nervana, Mobileye, and Movidius, Intel has now bought Vertex.ai and is merging it with their artificial intelligence group. Vertex.ai is a Seattle-based startup with the vision of developing deep learning for every platform with their PlaidML deep learning engine. The terms of the deal are undisclosed, but the 7-person Vertex.ai team, including founders Choong Ng, Jeremy Bruestle, and Brian Retford, will become part of Movidius in Intel's Artificial Intelligence Products Group.

Vertex.ai was founded in 2015 and initially funded by Curious Capital and Creative Destruction Lab, among others. Intel said in a statement, "With this acquisition, Intel gained an experienced team and IP (intellectual property) to further enable flexible deep learning at the edge." The chipmaker does intend to continue developing PlaidML as an open source project, and will shortly transition it to the Apache 2.0 license from the existing AGPLv3 license. The priority for PlaidML will continue to be an engine that supports a variety of hardware, with an Intel nGraph backend.

"There's a large gap between the capabilities neural networks show in research and the practical challenges in actually getting them to run on the platforms where most applications run," Ng stated on Vertex.ai's launch in 2016. "Making these algorithms work in your app requires fast enough hardware paired with precisely tuned software compatible with your platform and language. Efficient plus compatible plus portable is a huge challenge - we can help."

Intel is among many giants in the tech industry making heavy investments in AI. Their AI chip business currently brings in $1B a year, while their PC/chip business makes $8.8B and their data-centric business $7.2B. "After 50 years, this is the biggest opportunity for the company," Navin Shenoy, executive vice president, said at Intel's 2018 Data Centric Innovation Summit. "We have 20 percent of this market today… Our strategy is to drive a new era of data center technology."

The official announcement is on the Vertex.ai website.

- Intel acquires eASIC, a custom chip (FPGA) maker for IoT, cloud and 5G environments
- Intel's Spectre variant 4 patch impacts CPU performance
- SingularityNET and Mindfire unite talents to explore artificial intelligence

.NET Framework API Porting Project concludes with .NET Core 3.0

Savia Lobo
17 Oct 2019
3 min read
Two days ago, Immo Landwerth, a program manager on the .NET team, announced that the community is concluding the .NET Framework API porting project. The team also plans to release more of the .NET Framework codebase under the MIT license on GitHub to allow the community to create OSS projects for technologies that will not be brought to .NET Core. For example, there already are community projects for CoreWF and CoreWCF.

https://twitter.com/shanselman/status/1184214679779864576

In May this year, Microsoft announced that the future of .NET will be based on .NET Core. At Build 2019, Scott Hunter stated that AppDomains, remoting, Web Forms, WCF server, and Windows Workflow won't be ported to .NET Core.

"With .NET Core 3.0, we're at the point where we've ported all technologies that are required for modern workloads, be it desktop apps, mobile apps, console apps, web sites, or cloud services," Landwerth writes.

.NET Core 3.0, released last month, added WPF and WinForms, which increased the number of .NET Framework APIs ported to .NET Core to over 120k - more than half of all .NET Framework APIs. .NET Core also has about 62k APIs that don't exist in the .NET Framework. Comparing the totals, .NET Core has about 80% of the API surface of .NET Framework.

"Since we generally no longer plan to bring existing technologies from .NET Framework to .NET Core we'll be closing all issues that are labeled with port-to-core," Landwerth further mentions.

Following the announcement on GitHub, many users enquired about the change. One user asked, "Is CSharpCodeProvider ever going to happen in Core? The CodeDom API arrived somewhere in the 2.x timeframe, but it's a runtime fail even under 3.0 - are we going to need to re-write for some kind of explicitly Roslyn scripting?" To this, Landwerth replied, "CSharpCodeProvider is supported on .NET Core, but only the portions that deal with CodeDOM creation. You can't compile because this would require a compiler and the .NET Core runtime doesn't include a compiler (i.e. Roslyn). However, we might be able to do some gymnastics, like reflection, where the app can pull that in. Still tracked under #12180."

https://twitter.com/terrajobst/status/1183832593197744129

To know more about this announcement in detail, read the official GitHub post.

- .NET 5 arriving in 2020!
- Docker announces collaboration with Microsoft's .NET at DockerCon 2019
- .NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and EF 6.3


Microsoft Ignite 2018: New Azure announcements you need to know

Melisha Dsouza
25 Sep 2018
4 min read
If you missed the Azure announcements made at Microsoft Ignite 2018, don't worry, we've got you covered. Here are some of the biggest changes and improvements the Microsoft Azure team has made to their cloud offering.

Infrastructure improvements

Azure's new capabilities to deliver the best infrastructure for every workload include:

1. GPU-enabled and high-performance VMs

Azure has announced the preview of GPU-enabled and high-performance computing virtual machines. The two new N-series virtual machines, the NVv2 VMs and the NDv2 VMs, have NVIDIA GPU capabilities. The two new H-series VMs, the HB VMs and the HC VMs, are optimized for performance and cost and are aimed at HPC workloads like fluid dynamics, structural mechanics, energy exploration, weather forecasting, risk analysis, and more.

2. Networking

Azure has announced the general availability of Azure Firewall and Virtual WAN, along with previews of Azure Front Door Service, ExpressRoute Global Reach, and ExpressRoute Direct. Azure Firewall has built-in high availability and cloud scalability. Virtual WAN provides a simple, unified, global connectivity and security platform for deploying large-scale branch connectivity.

3. Improved disk storage

Microsoft has expanded the portfolio of Azure Disk offerings to support deploying any app in Azure, including the most IO-intensive. The new previews include Ultra SSDs, Standard SSDs, and larger managed disk sizes to help deal with data-intensive workloads, with better availability, reliability, and latency compared to standard SSDs.

4. Hybrid

Microsoft has announced new hybrid capabilities to manage data, create even more consistency, and secure hybrid environments, introducing Azure Data Box Edge, Windows Server 2019, and Azure Stack. With AI-enabled edge computing capabilities and an OS that supports hybrid management and flexible application deployment, Azure is making waves in the developer community.

Built-in security and management

For improved security, Azure has announced new services in preview, like the Confidential Computing DC VM series, Secure Score, improved threat protection, and a network map. These expand Azure's security controls and services to protect networks, applications, data, and identities, enhanced by the unique intelligence that comes from the trillions of signals Microsoft collects in running first-party services like Office 365 and Xbox.

For better management, Azure has announced the preview of Azure Blueprints. Blueprints make it easy to deploy and update Azure environments in a repeatable manner, using composable artifacts such as policies, role-based access controls, and resource templates. Azure Cost Management in the Azure portal (preview) provides access to cost management from Power BI or directly from your own custom applications.

Migration

To make migration to the cloud less challenging, Azure has announced support for Hyper-V assessments in Azure Migrate, plus Azure SQL Database Managed Instance, which enables users to migrate SQL Servers to a fully managed Azure service. To improve the migration experience further, Microsoft announced that if you migrate Windows Server or SQL Server 2008/R2 to Azure, you will get three years of free extended security updates on those systems. This could save you some money when Windows Server and SQL Server 2008/R2 reach end of support (EOS).

Automated ML capability in Azure Machine Learning

The problem of finding the best machine learning pipeline for a given dataset scales faster than the time available for data science projects. Azure's automated machine learning enables developers to access an automated service that identifies the best machine learning pipelines for their labelled data. Data scientists are empowered with a powerful productivity tool that also takes uncertainty into account, incorporating a probabilistic model to determine the best pipeline to try next.

To follow more of the Azure buzz, head to Microsoft's official blog.

- Microsoft's Immutable storage for Azure Storage Blobs, now generally available
- Azure Functions 2.0 launches with better workload support for serverless
- Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace