Tech News

3711 Articles
The React Native team shares their open source roadmap, React Suite hits 3.4.0

Bhagyashree R
02 Nov 2018
3 min read
Yesterday, the React Native team shared further plans for React Native to provide better support to its users and collaborators outside of Facebook. The team is planning to open source some of its internal tools and improve the tools widely used in the open source community. To ensure that no breaking code is open sourced, they are also improving their testing infrastructure. The following are some of the focus areas the developers will be working on:

Cleaning up for leaner core

The developers are planning to reduce the surface area of React Native by removing non-core and unused components. Currently, React Native is a huge repo, so it makes sense to break it into smaller ones. This will come with many advantages, some of which are:

- Managing contributions to React Native will become easier.
- Older modules get a chance to be deprecated.
- The bundle size for projects that don't use the extractable components will be reduced, leading to faster startup times for apps.
- Pull requests can be reviewed and merged faster.

Open sourcing internals and improving popular tools

The team will be open sourcing some of the tools that Facebook uses internally and providing improved support for tools that are widely used by the open source community. Some of the projects they will be working on are:

- Open sourcing the JavaScript Interface (JSI), an interface that facilitates communication between JavaScript and the native language.
- Support for 64-bit libraries on Android.
- Debugging enabled in the new architecture.
- Improved support for CocoaPods, Gradle, Maven, and the new Xcode build system.

Improved testing infrastructure

Before code is published, it goes through several internal tests by the React Native engineers. But since there are a few differences in how React Native is used at Facebook and by the open source community, these updates sometimes introduce breaking changes to the React Native surface. To avoid such situations, the team will improve internal tests and ensure that new features are tested in an environment as similar to open source as possible. Along with these infrastructure improvements, Facebook will start consuming React Native via the public API, as the open source community does, which will reduce unintentional breaking changes.

According to the official announcement, all these changes, along with some more, will be delivered over the next year. Some goals have already been met; JSI, for example, has already landed in open source.

Releasing React Suite 3.4.0

Meanwhile, the React Suite developers announced the release of React Suite 3.4.0. React Suite, or RSUITE, consists of React component libraries for enterprise system products. This release comes with TypeScript support and a few minor bug fixes. The following are some of the updates introduced in React Suite 3.4.0:

- Support added for TypeScript.
- renderTooltip added for Slider.
- MultiCascader, a component for selecting data with a hierarchical relationship structure, has been added.
- Fixed: customizing options in <DatePicker> shortcuts was not working properly.
- Fixed: the scroll bar not resetting after the columns of the <Table> changed.

To read React Native's open source roadmap, check out the official announcement. You can also read React Suite's release notes to learn more about the updates in React Suite 3.4.0.
React Conf 2018 highlights: Hooks, Concurrent React, and more
React introduces Hooks, a JavaScript function to allow using React without classes
React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!

Serverless Computing 101

Guest Contributor
09 Feb 2019
5 min read
Serverless applications began gaining popularity when Amazon launched AWS Lambda back in 2014. Since then, serverless computing has grown exponentially in use and reference among the vendors entering the market with their own solutions. The hype around serverless computing stems from the fact that it requires no infrastructure management, a modern approach that lightens the enterprise workload.

What is Serverless Computing?

It is a special kind of software architecture that executes application logic in an environment without visible processes, operating systems, servers, or virtual machines. With serverless computing, provisioning and managing the infrastructure is handled entirely by the service provider. Serverless describes a cloud service that abstracts the details of the cloud-based processor from its user; this does not mean servers are no longer needed, but that they are not user-specified or controlled. Serverless computing refers to serverless architecture, covering applications that depend on third-party services (BaaS) and containers (FaaS).

Top serverless computing providers like Amazon, Microsoft, Google, and IBM offer serverless services like FaaS to companies such as Netflix, Coca-Cola, CodePen, and many more.

FaaS

Function as a Service is a mode of cloud computing architecture in which developers write business logic functions that are executed by the cloud provider. Developers can upload units of functionality to the cloud that can be independently executed; the cloud service provider manages everything from execution to automatic scaling. (A minimal function sketch appears at the end of this article.) Key components of FaaS:

- Events: something that triggers the execution of the function, for instance uploading a file or publishing a message.
- Functions: independent units of deployment, for instance processing a file or performing a scheduled task.
- Resources: components used by the function, for instance file system services or database services.

BaaS

Backend as a Service allows developers to write and maintain only the frontend of the application, relying on a backend service rather than building and maintaining it themselves. BaaS providers offer pre-built software components like user authentication, database management, remote updating, cloud storage, and much more. Developers do not have to manage servers or virtual machines to keep their applications running, which helps them build and launch applications more quickly.

Use-Cases of Serverless Computing

- Batch jobs and scheduled tasks: jobs that require intense parallel computation, IO, or network access.
- Business logic: orchestration of microservice workloads that execute a series of steps.
- Chatbots: scale automatically at peak demand times.
- Continuous Integration pipelines: remove the need for pre-provisioned hosts.
- Database change capture: auditing or ensuring modifications meet quality standards.
- HTTP REST APIs and web apps: traditional request/response workloads.
- Mobile backends: build on a REST API backend workload above BaaS APIs.
- Multimedia processing: execute a transformation in response to a file upload.
- IoT sensor input messages: receive signals and scale in response.
- Stream processing at scale: process data within a potentially infinite stream of messages.

Should you use Serverless Computing?

Merits:
- Fully managed services: you do not have to worry about the execution process.
- Event-triggered approach: sets priorities as per your requirements.
- Scalability: load balancing is handled automatically.
- Pay only for execution time: you pay just for what you use.
- Quick development and deployment: run tests without worrying about other components.
- Shorter time-to-market: you can look at a refined product hours after creating it.

Demerits:
- Third-party dependency: developers have to depend completely on cloud service providers.
- Lacking operational tools: you depend on providers for debugging and monitoring.
- High complexity: managing many functions takes more time and is difficult.
- Functions cannot run for long periods: only suitable for applications with short processes.
- Limited mapping to database indexes: configuring nodes and indexes is challenging.
- Stateless functions: resources do not persist after the function exits.

Serverless computing can be seen as the future of the next generation of cloud-native development, a new approach to writing and deploying applications that allows developers to focus only on code. This approach helps reduce time-to-market along with operational costs and system complexity. Third-party services like AWS Lambda have eliminated the requirement to set up and configure physical servers or virtual machines. It is always best to seek advice from experts with years of experience in software development with modern technologies.

Author Bio: Vikash Kumar works as a manager at the software outsourcing company Tatvasoft.com. He has a keen interest in blogging and likes to share useful articles on computing. Vikash has also published bylines in major publications like KDnuggets, Entrepreneur, SAP, and many more.

Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
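To make the FaaS model above concrete, here is a minimal sketch of an event-triggered function in the style of an AWS Lambda handler, written in Python. The event shape mirrors an S3-style file upload; the process_file helper and the handler's exact wiring are hypothetical, shown for illustration only.

import json

def process_file(bucket, key):
    # Placeholder for real business logic, e.g. generating a thumbnail
    # or parsing an uploaded log file.
    print(f"Processing {bucket}/{key}")
    return True

def handler(event, context):
    # 'event' carries the trigger payload (here, an S3-style upload record);
    # 'context' exposes runtime metadata such as remaining execution time.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    ok = process_file(bucket, key)

    # The provider tears the environment down after the return, so nothing
    # computed here persists between invocations (functions are stateless).
    return {"statusCode": 200, "body": json.dumps({"processed": key, "ok": ok})}

Note how the three FaaS components map onto the sketch: the upload is the event, the handler is the function, and the bucket it reads from is a resource.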

Introducing Luna, world’s first programming language with dual syntax representation, data flow modeling and much more!

Amrata Joshi
17 Jun 2019
3 min read
Luna, a data processing and visualization environment, provides a library of highly tailored, domain-specific components as well as a framework for building new ones. Luna focuses on domains related to data processing, such as IoT, bioinformatics, data science, graphic design, and architecture.

What's so interesting about Luna?

Data flow modeling
Luna is a data flow modeling whiteboard that allows users to draw components and the way data flows between them. Components in Luna are simply nested data flow graphs, and users can enter any component or its subcomponents to move from high to low levels of abstraction. It is also designed as a general-purpose programming language with two equivalent representations, visual and textual.

Data processing and visualizing
Luna components can visualize their results and use colors to indicate the type of data they exchange. Users can compare all the intermediate outcomes and understand the flow of data by looking at the graph. They can also adjust parameters and observe how each change affects every step of the computation in real time.

Debugging
Luna can assist in analyzing network service outages and data corruption. If errors occur, Luna tracks and displays their path through the graph so that users can easily follow them and understand where they come from. It also records and visualizes information about performance and memory consumption.

Luna Explorer, the search engine
Luna comes with Explorer, a context-aware fuzzy search engine that lets users query libraries for desired components as well as browse their documentation. Because Explorer is context-aware, it can understand the flow of data, predict users' intentions, and adjust search results accordingly.

Dual syntax representation
Luna is also the world's first programming language that features two equivalent syntax representations, visual and textual.

Automatic parallelism
Luna also features automatic parallelism that uses the state-of-the-art runtime system of Haskell's GHC, which can run thousands of threads in a fraction of a second. It automatically partitions a program and schedules its execution over available CPU cores.

Users seem happy with Luna. One user commented on Hacker News, "Luna looks great. I've been doing work in this area myself and hope to launch my own visual programming environment next month or so." Others like that Luna's text syntax supports building functional blocks. Another user commented, "I like that Luna has a text syntax. I also like that Luna supports building graph functional blocks that can be nested inside other graphs. That's a missing link in other tools of this type that limits the scale of what you can do with them."

To know more, check out the official Luna website.

Declarative UI programming faceoff: Apple's SwiftUI vs Google's Flutter
Polyglot programming allows developers to choose the right language to solve tough engineering problems
Researchers highlight impact of programming languages on code quality and reveal flaws in the original FSE study

OpenAI introduces MuseNet: A deep neural network for generating musical compositions

Bhagyashree R
26 Apr 2019
4 min read
OpenAI has built a new deep neural network called MuseNet for composing music, the details of which it shared in a blog post yesterday. The research organization has made a prototype of a MuseNet-powered co-composer available for users to try until May 12th.

https://twitter.com/OpenAI/status/1121457782312460288

What is MuseNet?

MuseNet uses the same general-purpose unsupervised technology as OpenAI's GPT-2 language model: the Sparse Transformer. This transformer allows MuseNet to predict the next note based on a given set of notes. To enable this behavior, the Sparse Transformer uses "sparse attention," where each output position computes weightings from a subset of input positions. (A toy illustration of this idea appears at the end of this article.)

For audio pieces, a 72-layer network with 24 attention heads is trained using the recompute and optimized kernels of the Sparse Transformer. This gives the model the long context it needs to remember long-term structure in a piece.

For training the model, the researchers collected data from various sources. The dataset includes MIDI files donated by ClassicalArchives and BitMidi, as well as online collections covering jazz, pop, African, Indian, and Arabic styles.

The model is capable of generating 4-minute musical compositions with 10 different instruments and is aware of different music styles from composers like Bach, Mozart, the Beatles, and more. It can also convincingly blend different styles to create a completely new piece.

The MuseNet prototype available for users to try comes with only a small subset of options. It supports two modes:

- In simple mode, users can listen to uncurated samples generated by OpenAI. To generate a piece yourself, you just choose a composer or style and, optionally, the start of a famous piece.
- In advanced mode, users can interact with the model directly. Generating music in this mode takes much longer but gives an entirely new piece.

What are its limitations?

The music generation tool is still a prototype, so it does have some limitations:

- To generate each note, MuseNet calculates the probabilities across all possible notes and instruments. Though the model gives more weight to your instrument choices, there is a possibility that it will choose something else.
- MuseNet finds it difficult to generate a piece for odd pairings of styles and instruments. The generated music sounds more natural if you pick instruments closest to the composer or band's usual style.

Many users have already started testing out the model. While some are impressed by the AI-generated music, others think it is quite evident that the music is machine generated and lacks the emotional factor. Here's an opinion a Redditor shared about the different music styles:

"My take on the classical parts of it, as a classical pianist. Overall: stylistic coherency on the scale of ~15 seconds. Better than anything I've heard so far. Seems to have an attachment to pedal notes. Mozart: I would say Mozart's distinguishing characteristic as a composer is that every measure 'sounds right'. Even without knowing the piece, you can usually tell when a performer has made a mistake and deviated from the score. The Mozart samples sound... wrong. There are parallel 5ths everywhere. Bach: (I heard a bach sample in the live concert) - It had roughly the right consistency in the melody, but zero counterpoint, which is Bach's defining feature. Conditioning maybe not strong enough? Rachmaninoff: Known for lush musical textures and hauntingly beautiful melodies. The samples got the texture approximately right, although I would describe them as murky more than lush. No melody to be heard."

Another user commented, "This may be academically interesting, but the music still sounds fake enough to be unpleasant (i.e. there's no way I'd spend any time listening to this voluntarily)."

Though this model is in its early stages, an important question that comes to mind is who will own the generated music. "When discussing this with my friends, an interesting question came up: Who owns the music this produces? Couldn't one generate music and upload that to Spotify and get paid based off the number of listens?" another user added.

To know more in detail, visit OpenAI's official website. Also, check out an experimental concert by MuseNet that was live-streamed on Twitch.

OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
OpenAI Five bots destroyed human Dota 2 players this weekend
OpenAI Five beats pro Dota 2 players; wins 2-1 against the gamers
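As a rough illustration of the "sparse attention" idea described above, the toy NumPy sketch below lets each output position attend only to a local causal window of input positions instead of all of them. This is a simplification for intuition only, not OpenAI's Sparse Transformer implementation; the window pattern, sizes, and scaling are illustrative assumptions.

import numpy as np

def sparse_attention(q, k, v, window=3):
    # Toy attention: position i only attends to positions max(0, i-window+1)..i.
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)      # full score matrix, masked below
    mask = np.full((n, n), -np.inf)
    for i in range(n):
        lo = max(0, i - window + 1)
        mask[i, lo:i + 1] = 0.0        # allow only a local, causal window
    scores = scores + mask
    # Row-wise softmax; masked positions contribute zero weight.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v                 # each output mixes only a few inputs

# Example: 8 "notes" with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
out = sparse_attention(x, x, x, window=3)
print(out.shape)  # (8, 4)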

Data Measured in Terms of Real Aggregate Value from DevOps.com

Matthew Emerick
16 Oct 2020
1 min read
The post Data Measured in Terms of Real Aggregate Value appeared first on DevOps.com.

December Developer Platform news: Personal Access Tokens update, auto-disabling Webhooks, and JupyterLab integration from What's New

Anonymous
18 Dec 2020
5 min read
By Geraldine Zanolli, Developer Evangelist | December 18, 2020

Every month is like Christmas for Developer Program members, because we strive to delight our members as we showcase the latest projects from our internal developer platform and tools engineers. For the last Sprint Demos, we featured some exciting updates: Personal Access Token impersonation, auto-disabling Webhooks, a new Webhooks payload for Slack, and JupyterLab integration for the Hyper API. Check out the gifts of increased communication, time, and security that these updates will bring.

Personal Access Token (PAT) impersonation

One of the use cases for the REST API is to query available content (e.g., projects, workbooks, data sources) for certain users. For embedding scenarios specifically, we often want to load up end-user-specific content within the application. The way to do this today is via impersonation, by which a server admin can impersonate a user, query as that user, and retrieve the content that user has access to based on permissions within Tableau. Today, server admins can already impersonate users by sending the user's unique userID as part of the sign-in request; however, to do this they need to hardcode their username and password in any scripts requiring impersonation.

Over a year ago, we released Personal Access Tokens (PATs), which are long-lived authentication tokens that allow users to run automation with the Tableau REST API without hard-coding credentials or requiring an interactive login. In the 2021.1 release, we are going to introduce user impersonation support for PATs, the last piece of functionality previously supported only by hard-coded credentials in REST API scripts. So, why not update all your scripts to use PATs today? (A rough sketch of a PAT-based sign-in request appears at the end of this article.)

Auto-disable Webhooks

Webhooks is a notification service that allows you to integrate Tableau with any external server. Any time an event happens on Tableau, Tableau sends an HTTP POST request to the external server; once the external server receives the request, it can respond to the event. But what happens when the Webhook fails? You might have created multiple Webhooks on your site for testing that are no longer set up properly, which means you'll want to manually disable or delete them.

Today, every time a Webhook is triggered, it attempts to connect to the external server up to four times; after four attempts, it counts as a failed delivery attempt. In our upcoming product releases, after four failed delivery attempts, the Webhook will be automatically disabled and an email will be sent to the Webhook owner. But don't worry: if you have a successful delivery attempt before reaching a fourth failed attempt, the counter resets to zero. As always, you can configure these options on Tableau Server.

Slack: New payload for Webhooks

Since the release of Webhooks, we noticed that one of the most popular use cases is Slack. Tableau users want to be notified on Slack when an event happens on Tableau. Today, this use case doesn't work out of the box: you need to set up middleware to send Webhooks from Tableau to Slack, because the payload that Tableau sends has a different format than the payload Slack expects. (It's like speaking French to someone who only speaks German: you need a translator in the middle.) In the upcoming 2021.1 release, you'll be able to create new Webhooks to Slack with no need for middleware! We're going to add an additional field to the payload.

Hyper API: JupyterLab integration

Hyper API is a powerful tool, but with the new command-line interface around Hyper API, will it be even more powerful? It will indeed! We added the command-line interface around Hyper API to hyper-api-samples in our open-source repository, so you can directly run SQL queries against Hyper. We integrated with an existing command-line interface infrastructure, the Jupyter infrastructure, giving you the ability to use Hyper API directly within JupyterLab. If you're not familiar with JupyterLab, it's a web-based IDE mostly used by data scientists.

With the JupyterLab integration, it has never been easier to prototype new functionality:
- You can run your SQL queries and check the results without having to write a complete program around Hyper API.
- Debugging becomes easier: you can isolate your queries to find the root cause of your issue.
- Don't forget about all the ad hoc, analytical queries that you can now run on data directly from your console.

Get started using JupyterLab in a few minutes.

Updates from the #DataDev Community

The #DataDev community continues to share its knowledge with others and drive innovation:
- Robert Crocker (Twitter @robcrock) published two tutorials on the JavaScript API.
- Elliott Stam (Twitter @elliottstam) launched a YouTube channel and published multiple videos on the Tableau REST APIs.
- Andre de Vries (Twitter @andre347_) also shared a YouTube video explaining Trusted Authentication.
- Anya Prosvetova (Twitter @Anyalitica), inspired by the Brain Dates at TC-ish, launched monthly DataDev Happy Hours to chat about APIs and developer tools.

Join the #DataDev community to get your invitation to our exclusive Sprint Demos and be the first to know about Developer Platform updates, directly from the engineering team. See you next year!
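For readers curious what PAT-based sign-in looks like in practice, here is a rough Python sketch of a Tableau REST API sign-in request using a Personal Access Token. The endpoint shape follows Tableau's documented auth/signin pattern, but the server URL, API version, and token names are placeholder assumptions; the impersonation element is shown only as a hypothetical illustration of the upcoming 2021.1 capability described above.

import requests

SERVER = "https://tableau.example.com"  # placeholder server URL
VERSION = "3.10"                        # placeholder API version

payload = {
    "credentials": {
        "personalAccessTokenName": "my-automation-token",  # placeholder
        "personalAccessTokenSecret": "<token-secret>",     # placeholder
        "site": {"contentUrl": ""},  # empty string = default site
        # Hypothetical: with PAT impersonation (2021.1), a server admin
        # could additionally scope the session to an end user, e.g.:
        # "user": {"id": "<end-user-id>"},
    }
}

resp = requests.post(
    f"{SERVER}/api/{VERSION}/auth/signin",
    json=payload,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
# The returned session token authenticates subsequent REST calls.
token = resp.json()["credentials"]["token"]
print("Signed in; session token starts with:", token[:8])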
Unite Berlin 2018 Keynote: Unity partners with Google, launches Ml-Agents ToolKit 0.4, Project MARS and more

Sugandha Lahoti
20 Jun 2018
5 min read
Unite Berlin 2018, Unity's annual developer conference, kicked off on June 19, 2018. This three-day extravaganza takes attendees through a thrilling ride of new announcements, sessions, and workshops from the creators of Unity. It's a place to develop, network, and participate with artists, developers, filmmakers, researchers, storytellers, and other creators. Day 1 opened with a promising keynote, presented by John Riccitiello, CEO of Unity Technologies. It featured previews of upcoming Unity technology, most prominently Unity's alliance with Google Cloud to help developers build connected games. Let's take a look at what was showcased.

Connected Games with Unity and Google Cloud

Unity and Google Cloud have collaborated to help developers create real-time multiplayer games. They are building a suite of managed services and tools to help developers build, test, and run connected experiences while offloading the hard work of quickly scaling game servers to Google Cloud. Games can be easily scaled to meet the needs of the players, and game developers can harness the massive power of Google Cloud without having to be cloud experts. Here's what Google Cloud with Unity has in store:

- Game-Server Hosting: streamlined resources to develop and scale hosted multiplayer games.
- Sample FPS: a production-quality sample project of a real-time multiplayer game.
- New ECS Networking Layer: fast, flexible networking code that delivers performant multiplayer by default.

Unity ML-Agents Toolkit v0.4

A new version of the Unity ML-Agents Toolkit was also announced at Unite Berlin. The v0.4 toolkit hosts multiple updates requested by the Unity community. Game developers now have the option to train environments directly from the Unity editor, rather than as built executables: developers simply launch the learn.py script and then press the "play" button from within the editor to perform training (a hedged example invocation appears at the end of this article). The team also launched two new challenging environments, Walker and Pyramids. Walker is a physics-based humanoid ragdoll, and Pyramids is a complex sparse-reward environment. There are also algorithmic improvements in reinforcement learning: agents can now learn to solve tasks that were previously learned only with great difficulty. Unity is also partnering with Udacity to launch a Deep Reinforcement Learning Nanodegree to help students and professionals gain a deeper understanding of reinforcement learning.

Augmented Reality with Project MARS

Unity also announced Project MARS, a Mixed and Augmented Reality studio that will be provided as a Unity extension. The studio will allow game developers to build AR and MR applications that intelligently interact with any real-world environment, with little to no custom coding. MARS will include abstraction layers for object recognition, location, and map data. It will have sample templates with simulated rooms for testing against different environments inside the editor. AR-specific gizmos will be provided to easily define spatial conditions like plane size, elevation, and proximity without requiring code or precise measurements. It will also include elements ranging from face masks to avatars to entire rooms of digital art. Project MARS will come to Unity as an experimental package later this year.

Unity has also unveiled a Facial AR Remote component. Powered by augmented reality, this component enables performance capture of animated characters, allowing filmmakers and CGI developers to shoot CG content with body movement, just as they would with live action.

Kinematica - Machine Learning powered Animation system

Unity also showcased its AI research by announcing Kinematica, an all-new ML-powered animation system. Kinematica goes beyond traditional animation systems, which generally require animators to explicitly define transitions. Kinematica has no superimposed structure, like graphs or blend trees; it generates smooth transitions and movements by applying machine learning to any data source. Game developers and animators no longer need to manually map out animation graphs. Kinematica decides in real time how to combine data clips from a single library into a sequence that matches the controller input, the environment content, and the gameplay requests. As with Project MARS, Kinematica will be available later this year as an experimental package.

New Prefab workflows

The entire Prefab system has been revamped with multiple improvements, and the improved Prefab workflow is now available as a preview build. New additions include Prefab Mode, Prefab variants, and nested Prefabs. Prefab Mode allows faster, more efficient, and safer editing of Prefabs in an isolated mode, without adding them to the actual scene. Developers can now edit the model Prefabs, and the changes are propagated to all Prefab variants. With nested Prefabs, teams can work on different parts of a Prefab and then come together for the final asset.

Predictive Personalized Placements

Personalized Placements bring the best of both worlds to players and the commercial business. With this new feature, game developers can create tailor-made game experiences for each player. The feature runs on an engine powered by predictive analytics, which determines what to show each player based on what will drive the highest engagement and lifetime value: an ad, an IAP promotion, a notification of a new feature, or a cross-promotion. And the algorithm will only get better with time.

These were only a select few of the announcements from the Unite Berlin keynote. You can watch the full video on YouTube; details on other sessions, seminars, and activities are available on the Unite website.

GitHub for Unity 1.0 is here with Git LFS and file locking support
Unity announces a new automotive division and two-day Unity AutoTech Summit
Put your game face on! Unity 2018.1 is now available
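For reference, launching editor-based training in the ML-Agents Toolkit v0.4 looked roughly like the line below, run from the root of the ml-agents repository. The flag names are recalled from that era's documentation and may differ between versions, so treat them as assumptions and check the toolkit's official docs.

# Train against the Unity editor: omit a built environment, run the
# script, then press Play in the editor when prompted.
python3 learn.py --run-id=walker-01 --train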

TypeScript 3.6 beta is now available!

Amrata Joshi
23 Jul 2019
2 min read
Last week, the team behind TypeScript announced the availability of TypeScript 3.6 Beta. The full release of TypeScript 3.6 is scheduled for the end of next month, with a Release Candidate coming a few weeks prior.

What's new in TypeScript 3.6?

Stricter checking
TypeScript 3.6 comes with stricter checking for iterators and generator functions. Earlier versions didn't let users of generators differentiate whether a value was yielded or returned from a generator. With TypeScript 3.6, users can narrow down values from iterators while dealing with them. (For readers more at home in Python, an analogy appears at the end of this article.)

Simpler emit
The emitted code for constructs like for/of loops and array spreads can be a bit heavy, so TypeScript opts for a simpler emit by default that supports only array types; iterating over other types requires the --downlevelIteration flag. With this flag, the emitted code is more accurate, but larger.

Semicolon-aware code edits
Older versions of TypeScript added semicolons to the end of every statement when applying edits, which many users didn't appreciate because it didn't match their style guidelines. TypeScript 3.6 now detects whether a file uses semicolons when applying edits; if a file lacks semicolons, TypeScript doesn't add them.

DOM updates
The following are a few of the declarations that have been removed or changed within lib.dom.d.ts:
- WindowOrWorkerGlobalScope is used instead of GlobalFetch.
- Non-standard properties on Navigator no longer exist.
- webgl or webgl2 is used instead of the experimental-webgl context.

To know more about this news, check out the official post.

Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more
Material-UI v4 releases with CSS specificity, Classes boilerplate, migration to Typescript and more
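The yielded-versus-returned distinction that 3.6's stricter generator checking models is not unique to TypeScript. For readers more at home in Python, the same distinction looks like this; the snippet below is an analogy only, not TypeScript behavior.

def gen():
    yield 1            # values handed out one at a time
    yield 2
    return "done"      # a final value, of a potentially different type

g = gen()
print(next(g))         # 1  (yielded)
print(next(g))         # 2  (yielded)
try:
    next(g)
except StopIteration as stop:
    # The returned value travels separately from the yielded ones;
    # TypeScript 3.6's generator types now track the two separately too.
    print(stop.value)  # done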

Nvidia unveils a new Turing architecture: “The world’s first ray tracing GPU”

Fatema Patrawala
14 Aug 2018
4 min read
The SIGGRAPH 2018 conference brought big announcements from Nvidia: a new Turing architecture and three new pro-oriented workstation graphics cards in its Quadro family. This is the greatest leap for Nvidia since the introduction of the CUDA GPU in 2006. The Turing architecture features new RT Cores to accelerate ray tracing and new Tensor Cores for AI inferencing, together enabling real-time ray tracing. The two engines, along with more powerful compute for simulation and enhanced rasterization, will usher in a new generation of hybrid rendering to address the $250 billion visual effects industry. Hybrid rendering enables cinematic-quality interactive experiences, new effects powered by neural networks, and fluid interactivity on highly complex models.

The company also unveiled its initial Turing-based products: the NVIDIA Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000 GPUs. They are expected to revolutionize the work of approximately 50 million designers and artists across multiple industries.

At the annual SIGGRAPH conference, Jensen Huang, founder and CEO of Nvidia, said: "Turing is NVIDIA's most important innovation in computer graphics in more than a decade. Hybrid rendering will change the industry, opening up amazing possibilities that enhance our lives with more beautiful designs, richer entertainment and more interactive experiences. The arrival of real-time ray tracing is the Holy Grail of our industry."

Here's the list of Turing architecture features in detail.

Real-Time Ray Tracing Accelerated by RT Cores
The Turing architecture is armed with dedicated ray-tracing processors called RT Cores. They accelerate the computation of how light and sound travel in 3D environments, at up to 10 GigaRays per second. Turing accelerates real-time ray tracing operations by up to 25x over the previous Pascal generation, and GPU nodes can be used for final-frame rendering of film effects at more than 30x the speed of CPU nodes.

AI Accelerated by powerful Tensor Cores
The Turing architecture also features Tensor Cores, processors that accelerate deep learning training and inferencing, providing up to 500 trillion tensor operations per second. They will power AI-enhanced features for creating applications with new capabilities, including DLAA (deep learning anti-aliasing), a breakthrough in high-quality motion image generation for denoising, resolution scaling, and video re-timing. These features are part of the NVIDIA NGX software development kit, a new deep learning-powered technology stack that enables developers to easily integrate accelerated, enhanced graphics, photo imaging, and video processing into applications with pre-trained networks.

Faster Simulation and Rasterization with New Turing Streaming Multiprocessor
The new Turing-based GPUs feature a new streaming multiprocessor architecture that adds an integer execution unit executing in parallel with the floating-point datapath, and a new unified cache architecture with double the bandwidth of the previous generation. Combined with new graphics technologies like variable-rate shading, the Turing SM achieves unprecedented levels of performance per core. With up to 4,608 CUDA cores, Turing supports up to 16 trillion floating-point operations in parallel with 16 trillion integer operations per second. Developers will be able to take advantage of NVIDIA's CUDA 10, FleX, and PhysX SDKs to create complex simulations, such as particles or fluid dynamics for scientific visualization, virtual environments, and special effects.

The new Turing architecture has already received support from companies like Adobe, Pixar, Siemens, Blackmagic, Weta Digital, Epic Games, and Autodesk. The new Quadro RTX is priced at $2,300 for the 16GB version and $6,300 for the 24GB version. Double the memory to 48GB, and Nvidia expects you to pay about $10,000 for the high-end card. For more information, visit the Nvidia official blog.

IoT project: Design a Multi-Robot Cooperation model with Swarm Intelligence [Tutorial]
Amazon Echo vs Google Home: Next-gen IoT war
5 DIY IoT projects you can build under $50

Google hints shutting down Google News over EU’s implementation of Article 11 or the “link tax”

Bhagyashree R
23 Nov 2018
3 min read
Last week, The Guardian reported that Google may shut down Google News in Europe if the "link tax" is implemented in a way that requires the company to pay news publishers. Under the "link tax," or Article 11, news publishers must receive fair and proportionate remuneration for the use of their publications by information society service providers.

The vice president of Google News, Richard Gingras, expressed his concern regarding the proposal and told The Guardian that whether the news service is discontinued in Europe will depend on the final text: "We can't make a decision until we see the final language."

The first draft of the "link tax", more formally the Directive on Copyright in the Digital Single Market, was issued in 2016. After several revisions and discussions, it was approved by the European Parliament on 12 September 2018. A formal trilogue discussion between the European Commission, the Council of the European Union, and the European Parliament has now been initiated to reach a final decision, with a conclusion expected in January 2019. Another part of the proposed directive, Article 13, is designed to ensure content creators are paid for material uploaded to sites such as the Google-owned YouTube. Article 11 and Article 13 have faced a lot of criticism since the directive was proposed.

Mr. Gingras further noted that when the Spanish government attempted to charge a link tax on Google in 2014, the company responded by shutting down Google News in the country and removing Spanish newspapers from the service internationally, which resulted in a tremendous fall in traffic to Spanish news websites. "We would not like to see that happen in Europe," Gingras added.

Julia Reda, an MEP, however, believes that this "link tax" will not be as extreme as the one implemented in Spain, where Google was required to pay publishers even if they didn't want to be paid. "What we think is more likely is that publishers will have the choice to ask for Google to pay or not," she told WIRED.

To know more in detail about Google's response to the link tax, read the full story on The Guardian.

YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
German OpenStreetMap protest against "Article 13" EU copyright reform making their map unusable
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
Summer 2020 Internship With the Angular Team from Angular Blog - Medium

Matthew Emerick
02 Sep 2020
3 min read
TL;DR: Our interns were phenomenal! Read on to find out why.

We've just wrapped up our latest intern cohort on the Angular team. Please believe me when I tell you that there are some outstanding folks out there, and we were lucky enough to work with a few of them. Because of the ongoing pandemic, Google internships were fully remote, and we're fortunate to have had some really special people in this cohort. Let's take a look at the great work developed during this cohort on the Angular team!

Better Paths for Learners

One of the things we're focused on with the team is making sure that the Angular learning journey works for experienced and new developers alike. Our wonderful intern on the DevRel team, Gloria Nduka, took this mission to heart. Gloria zeroed in on the friction in the learning path for new developers and used those insights not only to help the team but also to create an interactive tutorial focused on helping developers new to Angular. The tutorial puts the output, code, and next steps in the same window, allowing learners to reduce context switching. Gloria's second project focused on reaching out to and partnering with computer science programs to expose more students to the platform. She made some meaningful connections, and we are excited to continue this initiative.

Seeing Opportunities in the CDK

The Angular CDK is a wonderful resource aimed at abstracting common application behaviors and interactions. Andy Chrzaszcz saw an opportunity to add new functionality to menus and improve accessibility. He added some great new directives to the CDK that give developers an expressive way to build powerful menus for Angular apps! The foundation set by his contributions will give developers the ability to build all types of menus with advanced interactions, like sub-menus and intelligent menu closing, so they can craft menus that meet each application's needs.

A Combination of Great Things Popping Up

Presenting lists of data for users to select from is standard fare in web development. But how can the Angular CDK help with that? Niels Rasmussen gives us that answer with his project, which created directives that provide a foundation for implementing complex combobox UI components. Because the goal of the Angular CDK is to provide common behaviors, Niels's solution follows suit: the list box and combo box give developers the freedom to customize the presentation of the UI to their specific use cases. This brings much-welcomed flexibility into the fold.

Thanks for Sharing Your Gifts With Us

Finally, we want to send an enthusiastic thank you and good luck to all of our interns as they finish up their undergraduate programs. We've seen the future in you, and things are looking incredibly bright. If you'd like to be an intern here at Google, we invite you to apply. The world needs your gifts and talents. These projects will be released in future versions of the platform; stay tuned for updates.

Summer 2020 Internship With the Angular Team was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs

Natasha Mathur
03 Apr 2019
3 min read
The Facebook AI team yesterday announced the open-sourcing of PyTorch-BigGraph (PBG), a tool that enables faster and easier production of graph embeddings for large graphs.

With PyTorch-BigGraph, anyone can take a large graph and produce high-quality embeddings with the help of a single machine, or multiple machines in parallel. PBG is written in PyTorch, allowing researchers and engineers to easily swap in their own loss functions, models, and other components. PBG can also compute gradients and is automatically scalable.

The Facebook AI team states that standard graph embedding methods don't scale well and cannot operate on large graphs consisting of billions of nodes and edges. Many graphs also exceed the memory capacity of commodity servers, creating problems for embedding systems. PBG helps prevent that issue: it performs block partitioning of the graph to overcome the memory limitations of graph embeddings. Nodes are randomly divided into P partitions, sized so that any two partitions fit in memory together, and the edges are then divided into P^2 buckets based on the partitions of their source and destination nodes. After this partitioning, training can be performed on one bucket at a time. (A small sketch of this bucketing scheme appears at the end of this article.)

PBG offers two ways to train embeddings of partitioned graph data: single-machine and distributed training. In single-machine training, embeddings and edges are swapped out of memory when they are not being used. In distributed training, PBG uses PyTorch parallelization primitives, and embeddings are distributed across the memory of multiple machines.

The Facebook AI team also made several modifications to standard negative sampling, which are necessary for large graphs. "We took advantage of the linearity of the functional form to reuse a single batch of N random nodes to produce corrupted negative samples for N training edges... this allows us to train on many negative examples per true edge at little computational cost," says the Facebook AI team. To produce embeddings useful in different downstream tasks, the team found an effective approach that involves corrupting edges with a mix of 50 percent nodes sampled uniformly from all nodes and 50 percent nodes sampled based on their number of edges.

To analyze PBG's performance, Facebook AI used the publicly available Freebase knowledge graph, comprising more than 120 million nodes and 2.7 billion edges, along with a smaller subset of the Freebase graph known as FB15k. PBG performed comparably to other state-of-the-art embedding methods on the FB15k dataset. PBG was also used to train embeddings for the full Freebase graph, where its partitioning scheme reduced both memory usage and training time. PBG embeddings were also evaluated on several publicly available social graph datasets, and PBG outperformed all the competing methods.

"We hope that PBG will be a useful tool for smaller companies and organizations that may have large graph data sets but not the tools to apply this data to their ML applications. We hope that this encourages practitioners to release and experiment with even larger data sets," states the Facebook AI team.

For more information, check out the official Facebook AI blog.

PyTorch 1.0 is here with JIT, C++ API, and new distributed packages
PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
PyTorch-based HyperLearn Statsmodels aims to implement a faster and leaner GPU Sklearn
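To build intuition for the block-partitioning scheme described above, here is a small, self-contained Python sketch. It is not PBG's actual code, and the modulo-based assignment below stands in for PBG's random node partitioning.

from collections import defaultdict

P = 4  # number of node partitions, sized so any two fit in memory together

def partition(node_id: int) -> int:
    # Stand-in for PBG's random assignment of nodes to P partitions.
    return node_id % P

def bucket_edges(edges):
    # Group (source, destination) edges into P*P buckets keyed by the
    # partitions of their endpoints. Training then visits one bucket at
    # a time, needing only two partitions' embeddings in memory.
    buckets = defaultdict(list)
    for src, dst in edges:
        buckets[(partition(src), partition(dst))].append((src, dst))
    return buckets

edges = [(0, 5), (1, 9), (2, 3), (7, 8), (4, 0)]
for (sp, dp), bucket in sorted(bucket_edges(edges).items()):
    print(f"bucket ({sp}, {dp}): {bucket}")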

Daily Coping 30 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
30 Dec 2020
2 min read
I started adding a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to bring joy to others: share something which made you laugh.

I love comedy, and that is one outing that my wife and I miss. We've watched some specials and comedians online, but it's not the same. I look forward to being able to go back to a live comedy show.

One thing that I love about the Internet is the incredible creativity of so many people. While I get that a lot of what is posted may not be to everyone's taste, and it's easy to waste lots of time, it's also nice to take a brief break and be entertained by something. There is plenty to be offended by, but one of the cleaner, more entertaining things was brought to me by my daughter. She showed me You Suck at Cooking one night while I was cooking. I finished and then ended up spending about 20 minutes watching with her while we ate. Two I enjoyed: kale chips and potato latkes.

The post Daily Coping 30 Dec 2020 appeared first on SQLServerCentral.
Daily Coping 29 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
29 Dec 2020
2 min read
I started adding a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag.

Today's tip is to congratulate someone for an achievement that may go unnoticed.

This is a double congrats for someone that I know and don't get to see often enough. Some of you may also know Glenn Berry, hardware expert and author of the DMV diagnostic queries that lots of people use. Glenn left his job and struck out on his own this past year, and he's been busy working on a number of things where he's had some success worth noting. I have probably left some positive notes on posts, but I'm going to use today as a way to congratulate him on a few things.

First, his YouTube channel has over 500 subscribers, which is a nice accomplishment in a relatively short time. Second, he's been winning awards for his beer. Congrats to him, and hopefully I'll get the chance to try some of these. I was over at his house last summer, watching him brew beer one day, and got to try a few of his creations. I'm looking forward to doing it again.

The post Daily Coping 29 Dec 2020 appeared first on SQLServerCentral.

New Dataproc optional components support Apache Flink and Docker from Cloud Blog

Matthew Emerick
15 Oct 2020
5 min read
Google Cloud's Dataproc lets you run native Apache Spark and Hadoop clusters on Google Cloud in a simpler, more cost-effective way. In this blog, we will talk about our newest optional components available in Dataproc's Component Exchange: Docker and Apache Flink.

Docker container on Dataproc

Docker is a widely used container technology. Since it's now a Dataproc optional component, Docker daemons can be installed on every node of the Dataproc cluster, giving you the ability to install containerized applications and interact with Hadoop clusters easily. In addition, Docker is critical to supporting these features:

- Running containers with YARN: this allows you to manage dependencies of your YARN application separately and to create containerized services on YARN. Get more details here.
- Portable Apache Beam jobs: portable Apache Beam packages jobs into Docker containers and submits them to the Flink cluster. Find more detail about Beam portability.

The Docker optional component is also configured to use Google Container Registry in addition to the default Docker registry, which lets you use container images managed by your organization. Here is how to create a Dataproc cluster with the Docker optional component:

gcloud beta dataproc clusters create <cluster-name> --optional-components=DOCKER --image-version=1.5

When you run a Docker application, its logs are streamed to Cloud Logging using the gcplogs driver. If your application does not depend on any Hadoop services, check out Kubernetes and Google Kubernetes Engine to run containers natively. For more on using Dataproc, check out our documentation.

Apache Flink on Dataproc

Among streaming analytics technologies, Apache Beam and Apache Flink stand out. Apache Flink is a distributed processing engine using stateful computation, and Apache Beam is a unified model for defining batch and streaming processing pipelines. Using Apache Flink as an execution engine, you can run Apache Beam jobs on Dataproc in addition to Google's Cloud Dataflow service. Flink, and Beam running on Flink, are suitable for large-scale, continuous jobs, and provide:

- A streaming-first runtime that supports both batch processing and data streaming programs
- A runtime that supports very high throughput and low event latency at the same time
- Fault tolerance with exactly-once processing guarantees
- Natural back-pressure in streaming programs
- Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
- Integration with YARN and other components of the Apache Hadoop ecosystem

Our Dataproc team here at Google Cloud recently announced that the Flink Operator on Kubernetes is now available. It allows you to run Apache Flink jobs in Kubernetes, bringing the benefits of reduced platform dependency and better hardware efficiency.

Basic Flink Concepts

A Flink cluster consists of a Flink JobManager and a set of Flink TaskManagers. Like similar roles in other distributed systems such as YARN, the JobManager has responsibilities such as accepting jobs, managing resources, and supervising jobs, while the TaskManagers are responsible for running the actual tasks. When running Flink on Dataproc, YARN acts as Flink's resource manager. You can run Flink jobs in two modes: job cluster and session cluster. In job cluster mode, YARN creates a JobManager and TaskManagers for the job and destroys the cluster once the job is finished. In session cluster mode, YARN creates a JobManager and a few TaskManagers, and the cluster can serve multiple jobs until it is shut down by the user.

How to create a cluster with Flink

Use this command to get started:

gcloud beta dataproc clusters create <cluster-name> --optional-components=FLINK --image-version=1.5

How to run a Flink job

After a Dataproc cluster with Flink starts, you can submit your Flink jobs to YARN directly using the Flink job cluster mode. After accepting the job, Flink starts a JobManager and slots for the job in YARN; the Flink job runs in the YARN cluster until it finishes, and the JobManager is then shut down. Job logs are available in the regular YARN logs. The bundled word-count example is a good way to try this out.

The Dataproc cluster does not start a Flink session cluster by default. Instead, Dataproc creates the script /usr/bin/flink-yarn-daemon, which starts a Flink session. If you want a Flink session started when the cluster is created, you can allow it via a cluster metadata key; if you want to start the Flink session after the cluster is created, run the daemon script on the master node. To submit jobs to that session cluster, you'll need the Flink JobManager URL.

How to run a Java Beam job

It is very easy to run an Apache Beam job written in Java, and no extra configuration is needed: as long as you package your Beam job into a JAR file, nothing else has to be configured to run Beam on Flink.

How to run a Python Beam job

Beam jobs written in Python use a different execution model. To run them in Flink on Dataproc, you also need to enable the Docker optional component when creating the cluster. You will also need to install the Python libraries Beam needs, such as apache_beam and apache_beam[gcp]. You can pass in a Flink master URL to run the job in a session cluster; if you leave the URL out, you need to use job cluster mode. After you've written your Python job, simply run it to submit it. (A hedged sketch of such a pipeline appears at the end of this article.)

Learn more about Dataproc.
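As a companion to the Python Beam section above, here is a hedged sketch of a minimal word-count pipeline submitted to Flink through Beam's FlinkRunner. The runner and environment flags follow Beam's documented portability options, but the Flink master address and the Cloud Storage paths are placeholder assumptions; consult the Dataproc and Beam documentation for the values that match your cluster.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_master=localhost:8081",  # placeholder: your session cluster's JobManager
    "--environment_type=DOCKER",      # Python workers run in Docker (assumes the Docker component)
])

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input.txt")  # placeholder path
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, n: f"{word}: {n}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/counts")     # placeholder path
    )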