
Tech News - Programming

573 Articles

Google Cloud announces new Go 1.11 runtime for App Engine

Bhagyashree R
17 Oct 2018
2 min read
Yesterday, Google Cloud announced a new Go 1.11 runtime for the App Engine standard environment. It provides all the benefits of App Engine, such as paying only for what you use, automatic scaling, and managed infrastructure. Starting with Go 1.11, which was launched in August this year, Go on App Engine has no limits on application structure, supported packages, context.Context values, or HTTP clients.

What are the changes in the Go 1.11 runtime as compared to Go 1.9?

1. You can now specify the Go 1.11 runtime in your app.yaml file by adding the following line: runtime: go111
2. Each of your services must include a package main statement in at least one source file (a minimal service sketch follows below).
3. The appengine build tag is now deprecated and will no longer be used when building an app for deployment.
4. The way you import dependencies has changed. You can specify dependencies in this runtime in one of two ways: by putting your application and related code in your GOPATH, or by creating a go.mod file to define your module.
5. Google App Engine no longer modifies the Go toolchain to include the appengine package. Using the Google Cloud client library or third-party libraries instead of the App Engine-specific APIs is recommended.
6. You can deploy services that use the Go 1.11 runtime with the gcloud app deploy command. You can still use the appcfg.py commands with the Go 1.9 runtime, but the gcloud command-line tool is preferred.

This release of the Go 1.11 runtime on App Engine uses the latest stable release of Go 1.11 and will automatically update to new minor versions upon deployment, but not to new major versions. It is also currently in beta and might change in backward-incompatible ways in the future. You can read more about the Go 1.11 runtime on The Go Blog and in the documentation published by Google.

Golang plans to add a core implementation of an internal language server protocol
Why Golang is the fastest growing language on GitHub
Golang 1.11 is here with modules and experimental WebAssembly port among other updates
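To make changes 1, 2, and 6 concrete, here is a minimal sketch of what a Go 1.11 App Engine service might look like. This is an illustrative example rather than code from the announcement: the handler is hypothetical, and it assumes the standard environment's convention of supplying the listening port through the PORT environment variable.

```go
// main.go — a hypothetical minimal service for the Go 1.11 runtime,
// deployed alongside an app.yaml whose key line is: runtime: go111
package main // each service must have a package main (change #2 above)

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from the Go 1.11 runtime!")
	})

	// App Engine is assumed to supply the port via the PORT env variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for running locally
	}
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Deploying such a service would then be a matter of running gcloud app deploy (change #6 above).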


Facebook open sources Magma, a software platform for deploying mobile networks

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the team at Facebook open-sourced Magma, a software platform that helps operators deploy mobile networks more easily. The platform comes with a software-centric distributed mobile packet core and tools for automating network management.

Magma extends existing network topologies to the edge for rural deployments, private LTE (Long Term Evolution) networks, or wireless enterprise deployments, instead of replacing existing EPC deployments for large networks. Magma enables new types of network archetypes where there is a need for continuous integration of software components and incremental upgrade cycles. It also allows authentication and integration with the help of LTE EPC (Evolved Packet Core). In addition, it reduces the complexity of operating mobile networks by enabling automation of network operations such as software updates, element configuration, and device provisioning.

Magma's centralized cloud-based controller can be used in a public or private cloud environment. Its automated provisioning infrastructure makes deploying LTE as easy as deploying a WiFi access point. The platform currently works with existing LTE base stations and can associate with traditional mobile cores to extend services to new areas.

According to a few users, "Facebook internally considers the social network to be its major asset and not their technology." Any investment in open technologies or internal technology which makes the network effect stronger is considered important. A few users discussed Facebook's revenue strategies in the HackerNews thread. A comment on HackerNews reads, "I noticed that FB and mobile phone companies offering "free Facebook" are all in a borderline antagonistic relationship because messenger kills their revenue, and they want to bill FB an arm and a leg for that."

To know more about this news in detail, check out Facebook's blog post.

Facebook open sources SPARTA to simplify abstract interpretation
Facebook open sources the ELF OpenGo project and retrains the model using reinforcement learning
Facebook's AI Chief at ISSCC talks about the future of deep learning hardware


Elixir 1.7, the programming language for the Erlang virtual machine, releases

Sugandha Lahoti
27 Jul 2018
3 min read
Elixir 1.7 has been released. Elixir builds on top of Erlang and is designed for building scalable and maintainable applications. This release focuses on improving error handling, logger reporting, and documentation. It also brings improvements to ExUnit, Elixir's testing library.

ExUnit improvements

ExUnit is Elixir's unit testing library. ExUnit uses Elixir macros to provide error reports when a failure happens using the assert macro. The assert macro can look at the code, extract the current line, extract the operands, and show a difference between the data structures alongside the stacktrace when the assertion fails. However, for certain "bare" assertions, users typically had to re-run the tests, debugging or printing the values by hand. In Elixir 1.7, whenever a "bare" assertion fails, it prints the value of each argument individually. For a simple example such as assert some_vars(1 + 2, 3 + 4), users will get a report showing the value of each argument.

The build tool Mix has also received new updates. There is a new --failed flag that runs all tests that failed the last time they ran. The coverage reports generated with mix test --cover now include a summary out of the box.

Updates to the ExDoc tool

ExDoc is a tool to generate documentation for Elixir projects. It leverages metadata to provide better documentation for developers. These are the updates to ExDoc:

- Deprecated modules, functions, callbacks, and types now have a warning automatically attached to them.
- Functions, macros, callbacks, and types now include the version in which they were added.
- Future Elixir versions will include their own section for guards in the documentation and in the sidebar. The team is currently exploring ways to generalize this feature in ExDoc itself.

Erlang/OTP logger integration improvements

Elixir 1.7 fully integrates with the new :logger module available in Erlang/OTP 21. The Logger.Translator mechanism has also been improved to export metadata, allowing custom Logger backends to leverage information such as:

- :crash_reason, a two-element tuple with the throw/error/exit reason as the first element and the stacktrace as the second.
- :initial_call, the initial call that started the process.
- :registered_name, the process' registered name as an atom.

Updates to the Logger configuration system

From Elixir 1.7, Logger macros such as debug and info will evaluate their arguments only when the message is logged. The Logger configuration system also accepts a new option, compile_time_purge_matching, that allows users to remove log calls with specific compile-time metadata.

There have also been developments in areas not directly related to the Elixir codebase. A new Development section has been added to the website, which outlines the Elixir team structure and goals. Elixir also now has its own mini-documentary.

Read the Elixir-lang blog for the full list of Elixir 1.7 updates. You can also check the Install section to get Elixir installed and read the Getting Started guide to learn more.

Elixir Basics – Foundational Steps toward Functional Programming
5 Reasons to learn programming


GitHub launches draft pull requests

Amrata Joshi
15 Feb 2019
3 min read
Yesterday, GitHub launched a new feature named draft pull requests, which allows users to start a pull request before they are done implementing all the code changes, and to start a conversation with their collaborators before the code is ready. Even if a user ends up closing the pull request for some reason, or refactoring the code entirely, the pull request still serves as a place for collaboration. If a user wants a pull request to be the start of a conversation even though the code isn't ready, they can still let people check it out locally and give feedback.

The draft pull requests feature lets users signal that they are still working on a PR and notify the team once it's ready. This feature will also help with pull requests that are prematurely closed, or for times when users start working on a new feature and forget to send a PR.

When a user opens a pull request, a drop-down arrow appears next to the 'Create pull request' button. Users can toggle the drop-down arrow to create a draft. A draft pull request is styled differently to indicate that it is in a draft state. Users can change the status to 'Ready for review' near the bottom of the pull request to remove the draft state and allow merging according to the project's settings. If a user has a CODEOWNERS file in their repository, a draft pull request will suppress notifications to those reviewers until it is marked as ready for review.

Users have given mixed reviews to this news. According to a few users, this new feature will save a lot of time. One of the users said, "It saves a lot of wasted effort by exploring the problem domain collaboratively before development begins." Others find the idea less useful. Another comment reads, "Someone suggested this on my team. I personally don't like the idea because these policies often times lead to bureaucracy and then nothing getting released. It is not that I am against thinking ahead but if I have to in details explain everything I do, then more time is spent documenting than actually creating which is the part I enjoy."

To know more about this news, check out GitHub's official post.

Western Digital RISC-V SweRV Core is now on GitHub
GitHub Octoverse: top machine learning packages, languages, and projects of 2018
Github wants to improve Open Source sustainability; invites maintainers to talk about their OSS challenges


Splinter 0.9.0, the popular web app testing tool, released!

Melisha Dsouza
28 Aug 2018
2 min read
Splinter, the open source tool for testing web applications using Python, has now leveled up to version 0.9.0. Browser actions such as visiting URLs and interacting with page items can be automated. Apart from providing a simple API, Splinter supports multiple webdrivers, including the Chrome webdriver, Firefox webdriver, PhantomJS webdriver, zope.testbrowser, and the remote webdriver. It provides support for iframes and alerts and can execute JavaScript, working with both Ajax and async JavaScript.

Installing Splinter 0.9.0

Step 1: Install Python. In order to install Splinter, you need to make sure that Python 2.7+ is installed.

Step 2: Install Splinter. There are two ways to do this:

Install a stable release. For an official and almost bug-free version, use the terminal:

$ [sudo] pip install splinter

Install the under-development source code. If you want Splinter's latest-and-greatest features and aren't afraid of running under-development code, run:

$ git clone git://github.com/cobrateam/splinter.git
$ cd splinter
$ [sudo] python setup.py install

Head over to the install guide for additional notes.

Upgraded features in Splinter 0.9.0

- Support for PhantomJS is removed. With Chrome and Firefox headless, PhantomJS is no longer needed.
- Users can now add custom options to the Chrome browser.
- The bug related to element.find_by_text stands resolved. When trying to do a contextual search for text, the result would include all matching text for the whole DOM instead of just those nodes that are children of the contextual node.
- Support was added for zope.testbrowser 5+, Flask 1+, and Selenium 3.14.0.
- Splinter can now handle the webdriver StaleElementReferenceException.
- lxml and cssselect have been updated to 4.2.4 and 1.0.3, respectively.

For a detailed explanation of the features, visit its GitHub page.

Visual Studio Code July 2018 release, version 1.26 is out!
OpenSSH 7.8 released!
JDK 11 First Release Candidate (RC) is out with ZGC, Epsilon and more!


Rails 6 will be shipping source maps by default in production

Amrata Joshi
30 Jan 2019
3 min read
The developer community surely owes respect to the innovation of 'View Source', as it has made things much easier for coders. Now David Heinemeier Hansson, the creator of Ruby on Rails, has made a move to make programmers' lives easier by announcing that Rails 6 will be shipping source maps by default in production.

Source maps let developers view code as it was written by its author, with comments, understandable variable names, and all the other help that makes it possible for programmers to understand the code. They are sent to users over the wire only when users have the dev tools open in their browser. Source maps, so far, have been seen merely as a local development tool and not something that would be shipped to production. Live debugging would make things easier for developers.

According to the post by David Heinemeier Hansson, all the JavaScript that runs Basecamp 3 under Webpack now has source maps. He said, "We're still looking into what it'll take to get source maps for the parts that were written for the asset pipeline using Sprockets, but all our Stimulus controllers are compiled and bundled using Webpack, and now they're easy to read and learn from."

David Heinemeier Hansson is also a partner at the web-based software development firm Basecamp. He said that 90% of all the code that runs Basecamp is open source, in the form of Ruby on Rails, Turbolinks, and Stimulus. He further added, "I like to think of Basecamp as a teaching hospital. The care of our users is our first priority, but it's not the only one. We also take care of the staff running the place, and we try to teach and spread everything we learn. Pledging to protect View Source fits right in with that."

Sam Saffron, the co-founder of Discourse, said, "I just wanted to voice my support for bringing this back by @dhh. We have been using source maps at Discourse now for 4 or so years, including maps for both JS and SCSS in production, default on." According to him, one of the important reasons to enable source maps in production is that JS frameworks often have "production" and "development" modes. Sam Saffron said, "I have seen many cases over the years where a particular issue only happens in production and does not happen in development. Being able to debug properly in production is a huge life saver. Source maps are not the panacea as they still have some limitations around local var unmangling and other edge cases, but they are 100 times better than working through obfuscated minified code with magic formatting enabled."

According to Sam, one performance concern is the cost of precompilation. The cost was minimal at Discourse, but the cost for a large number of source maps is unpredictable. Users had discussed this issue on a GitHub thread two years ago; according to most of them, precompile build times would be reduced. A user commented on GitHub, "well-generated source maps can actually make it very easy to rip off someone else's source." Another comment reads, "Source maps are super useful for error reporting, as well as for analyzing bundle size from dependencies. Whether one chooses to deploy them or not is their choice, but producing them is useful."

Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
GitHub addresses technical debt, now runs on Rails 5.2.1
Introducing Web Application Development in Rails

Dgraph releases Ristretto, a fast, concurrent and memory-bound Go cache library

Amrata Joshi
24 Sep 2019
4 min read
Last week, the team at Dgraph released Ristretto, a fast, fixed-size, concurrent, memory-bound Go cache library. It is contention-proof and focuses on throughput as well as hit ratio performance.

There was a need for a memory-bound and concurrent Go cache in Dgraph, so the team used a sharded map with shard eviction to release memory, but that led to memory issues. The team then repurposed Groupcache's LRU with the help of mutex locks for thread safety. Later, however, the team realised that this cache suffered from severe contention; removing it improved their query latency by 5-10x, as the cache was slowing down the process. The team concluded that the concurrent cache story in Go is broken and needs to be fixed. The official page reads, "In March, we wrote about the State of Caching in Go, mentioning the problem of databases and systems requiring a smart memory-bound cache which can scale to the multi-threaded environment Go programs find themselves in."

Ristretto is built on three key principles

Ristretto is built on three key principles: fast accesses, high concurrency and contention resistance, and memory bounding. Let's discuss the principles and how the team achieved them.

Fast hash with runtime.memhash

The team experimented with the store interface within Ristretto and found that sync.Map performs well for read-heavy workloads but deteriorates for write workloads. As there was no thread-local storage, the team worked with sharded, mutex-wrapped Go maps, which gave good performance results. The team used 256 shards to ensure that performance doesn't suffer even on a 64-core server. With a shard-based approach, the team also needed a quick way to determine which shard a key should go in. Long keys consumed too much memory, so the team used uint64 for keys instead of storing the entire key. The hash of the key was needed in multiple places, and to generate a fast hash the team borrowed runtime.memhash from the Go runtime. The runtime.memhash function uses assembly code to generate a hash quickly.

Handling concurrency and contention resistance with batching

The team wanted to achieve high hit ratios, but that requires managing metadata about the information that is currently present in the cache and the information that will be needed in it. They took inspiration from the paper BP-Wrapper, which explains two ways to mitigate contention: prefetching and batching. The team used only batching to lower contention, instead of acquiring a mutex lock for every metadata mutation. Ristretto performs well under heavy concurrent load, though it can lose some metadata while delivering better throughput performance. The page reads, "Interestingly, that information loss doesn't hurt our hit ratio performance because of the nature of key access distributions. If we do lose metadata, it is generally lost uniformly while the key access distribution remains non-uniform. Therefore, we still achieve high hit ratios and the hit ratio degradation is small as shown by the following graph."

Key cost

Workloads usually have variable-sized values, where one value can cost a few bytes, another a few kilobytes, and yet another a few megabytes. In this case it is not possible to treat all of them as having the same memory cost. In Ristretto, a cost is attached to every key-value pair, and users can specify what that cost is when calling the Set function (see the sketch below). This cost is calculated against the MaxCost of the cache. The page reads, "When the cache is operating at capacity, a heavy item could displace many lightweight items. This mechanism is nice in that it works well for all different workloads, including the naive approach where each key-value costs 1."

To know more about Ristretto and its key principles in detail, check out the official post.

Other interesting news in programming

How Quarkus brings Java into the modern world of enterprise tech
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
Twitter announces to test 'Hide Replies' feature in the US and Japan, after testing it in Canada
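To illustrate the cost-based API described under "Key cost", here is a short usage sketch modeled on the project's README at the time of release; the exact Config fields and values shown are assumptions and may differ across Ristretto versions.

```go
package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto"
)

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,     // number of keys to track frequency of
		MaxCost:     1 << 30, // maximum total cost of the cache (~1 GB)
		BufferItems: 64,      // number of keys per Get buffer
	})
	if err != nil {
		panic(err)
	}

	// Every key-value pair declares its own cost, counted against MaxCost;
	// the naive approach gives each entry a cost of 1.
	cache.Set("key", "value", 1)

	// Sets are buffered and applied asynchronously, so give the value
	// a moment to pass through the buffers before reading it back.
	time.Sleep(10 * time.Millisecond)

	if value, found := cache.Get("key"); found {
		fmt.Println(value) // prints "value"
	}
}
```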


Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting up dynamic pages and meta tags, we need to deal with multiple input elements and value types; limitations here could seriously hinder our work in terms of data flow control, data validation, or user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could add some checks such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, it would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

- Template-Driven Forms
- Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them.

Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

- Building the form in the .html template file
- Binding data to the various input fields using ngModel
- Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

<form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
         placeholder="Insert the city name..."
         [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check for their current states to create our own validation workflow. These states are provided by the framework and will change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed, and so on.
We used both touched and dirty in the preceding example because we want our validation message to only be shown if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or not setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach. Here are the main advantages of Template-Driven Forms:

- Template-Driven Forms are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
- They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element will have a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases that have been hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

- Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
- For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
- Their readability will quickly drop as we add more and more validators and input tags. Keeping all their logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit more from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend that you read the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they will eventually produce, and the scaling difficulties will eventually lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms, which are the exact same thing. The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way.
The outcome of doing this is as follows:

<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

As we can see, the amount of required code is much lower. Here's the underlying form model that we will define in the component class file:

import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}

Let's try to understand what's happening here:

- The form property is an instance of FormGroup and represents the form itself.
- FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
- Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
- Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
- Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.

Also, note that we have no way of accessing the FormControls directly like we were doing in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.

At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button; on top of that, checking the state of the input elements takes more source code, since they have no aliases we can use. What's the real deal, then? To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

Fig 1: Template-Driven Forms schematic

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table. That DataModel will be updated as soon as the user changes something, that is, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself using the information in the data bindings defined within our template. This is what Template-Driven actually means: the template is calling the shots.
Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

Fig 2: Model-Driven/Reactive Forms schematic

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that are presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then we made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He's the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.


Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Natasha Mathur
06 Feb 2019
2 min read
Red Hat has come out with a new integrated development environment (IDE) called CodeReady Workspaces. The new IDE, announced yesterday, is a Kubernetes-native, browser-based environment that enables easier collaboration among development teams.

CodeReady Workspaces is based on the open source Eclipse Che IDE project and has been optimized for Red Hat OpenShift and Red Hat Enterprise Linux. It offers enterprise development teams a shareable developer environment that comprises the tools and dependencies required to code, build, test, run, and debug container-based applications. It is also the first Kubernetes-native IDE on the market: it runs inside a Kubernetes cluster and is capable of managing the developer's code and dependencies inside OpenShift pods and containers. Developers do not need to be Kubernetes or OpenShift experts to use the IDE.

Additionally, Red Hat CodeReady Workspaces comes with a sharing feature called Factories, a template containing the source code location, runtime, tooling configuration, and commands needed for a project. It enables development teams to spin up the new Kubernetes-native developer environment in a few minutes. Team members can access their own or shared workspaces on any device with a browser, and with any IDE and operating system (OS).

According to the Red Hat team, CodeReady Workspaces is an ideal platform for DevOps-based organizations, allowing IT or development teams to manage workspaces at scale and effectively control system performance, security features, and functionality. CodeReady Workspaces allows developers to:

- Integrate their preferred version control (public and private repositories)
- Control workspace permissions and resourcing
- Use Lightweight Directory Access Protocol (LDAP) or Active Directory (AD) authentication for single sign-on

"Red Hat CodeReady Workspaces offers enterprise development teams a collaborative and scalable platform that can enable developers to more efficiently and effectively deliver new applications for Kubernetes and collaborate on container-native applications", said Brad Micklea, senior director, Developer Experience and Programs, Red Hat.

Red Hat CodeReady Workspaces is free with an OpenShift subscription. You can download it by joining the Red Hat Developer Program. For more information, check out the official Red Hat CodeReady Workspaces blog.

Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
Red Hat acquires Israeli multi-cloud storage software company, NooBaa
Red Hat announces full support for Clang/LLVM, Go, and Rust


GitHub has added security alerts for Python

Richard Gall
16 Jul 2018
2 min read
At the end of 2017, GitHub announced the launch of its 'security alerts' feature for vulnerable Ruby and JavaScript packages. With the feature proving a huge success, the platform has now rolled it out for Python. The GitHub team had promised that Python would be the next language to receive the security alert feature – and with fears of a possible mass migration to GitLab following Microsoft's acquisition of the platform, the news couldn't come at a better time.

How GitHub's security alerts work

GitHub's security alerts work using its dependency graph. The dependency graph allows developers to visualize the range of projects on which their code depends. Security alerts followed the release of the dependency graph for Ruby and JavaScript. With the dependency graph in place, the security alerts "track when dependencies are associated with public security vulnerabilities." When you enable the dependency graph, GitHub will notify you if there is a possible vulnerability in one of your dependencies. It will also suggest some possible fixes.

Rolling security alerts out to Python projects

The Python rollout was announced on the GitHub blog by Robert Schultheis on July 12. He writes, "We've chosen to launch the new platform offering with a few recent vulnerabilities. Over the coming weeks, we will be adding more historical Python vulnerabilities to our database. Going forward, we will continue to monitor the NVD feed and other sources, and will send alerts on any newly disclosed vulnerabilities in Python packages."

He isn't specific about the Python vulnerabilities. However, as noted earlier, launching support for Python has always been part of GitHub's plan since 2017. As The Register notes, there have only been four Python entries on the CVE database in 2018 so far, "and one of those is disputed." According to Schultheis, GitHub "will be adding more Python vulnerabilities to our database."

Go 1.12 Release Candidate 1 is here with improved runtime, assembler, ports and more

Amrata Joshi
13 Feb 2019
3 min read
Yesterday, the Go team released Go 1.12rc1, a release candidate version of Go 1.12. This release comes with an improved runtime, updated libraries, new ports, and more.

What's new in Go 1.12rc1

Trace: The trace tool supports plotting mutator utilization curves, including cross-references to the execution trace. These are used to analyze the impact of the garbage collector on application latency and throughput.

Assembler: On arm64, the platform register was renamed from R18 to R18_PLATFORM to prevent accidental use, as the OS could choose to reserve this register.

Runtime: This release improves the performance of sweeping when a large fraction of the heap remains live, which reduces allocation latency following a garbage collection. The Go runtime now releases memory back to the operating system more aggressively, particularly in response to large allocations that can't reuse existing heap space. The runtime's timer and deadline code is also faster and scales better with higher numbers of CPUs, which improves the performance of manipulating network connection deadlines.

Ports: With this release, the race detector is now supported on linux/arm64. Go 1.12 is the last release that is supported on FreeBSD 10.x.

Windows: The new windows/arm port supports Go on Windows 10 IoT Core on 32-bit ARM chips such as the Raspberry Pi 3.

AIX: This release supports AIX 7.2 and later on POWER8 architectures (aix/ppc64), though external linking, pprof, cgo, and the race detector aren't yet supported.

Darwin: This is the last release that will run on macOS 10.10 Yosemite, as Go 1.13 will need macOS 10.11 El Capitan or later. libSystem is now used when making syscalls on Darwin, which ensures forward compatibility with future versions of macOS and iOS. The switch to libSystem has triggered additional App Store checks for private API usage.

Tools: The go tool vet command is no longer supported. The go vet command has been rewritten to serve as the base for a range of different source code analysis tools, and external tools that use go tool vet must be changed to use go vet. Using go vet instead of go tool vet will work with all supported versions of Go. The experimental -shadow option is no longer available with go vet.

Build cache requirement: The build cache is now required, as a step toward eliminating $GOPATH/pkg. With Go 1.12rc1, setting the environment variable GOCACHE=off will cause go commands to fail.

Binary-only packages: This is the last release that will support binary-only packages.

Cgo: This release translates the C type EGLDisplay to the Go type uintptr. Mangled C names are no longer accepted by packages that use Cgo; the Cgo names must be used instead.

Minor changes to the library:

- bufio: The Reader's UnreadRune and UnreadByte methods will now return an error if they are called after Peek.
- bytes: This release adds a new function, ReplaceAll, that returns a copy of a byte slice with all non-overlapping instances of a value replaced by another (see the sketch below).

To know more about this news, check out the official post.

Introduction to Creational Patterns using Go Programming
Go Programming Control Flow
Essential Tools for Go Programming
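As a quick illustration of the new bytes.ReplaceAll function mentioned in the library changes above, here is a minimal sketch; the input values are made up for demonstration.

```go
package main

import (
	"bytes"
	"fmt"
)

func main() {
	// bytes.ReplaceAll (new in Go 1.12) returns a copy of the slice with
	// all non-overlapping instances of old replaced by new.
	src := []byte("go 1.11 is fast; go 1.11 is fun")
	out := bytes.ReplaceAll(src, []byte("1.11"), []byte("1.12"))
	fmt.Println(string(out)) // go 1.12 is fast; go 1.12 is fun
}
```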


ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck

Prasad Ramesh
23 Aug 2018
3 min read
JetBrains has released ReSharper Ultimate 2018.2 with performance improvements, C# 7.3 support, and an integrated spell checker. It also features JSLint, ESLint, and TSLint support, along with navigation improvements.

Performance improvements

Around 30 performance fixes have been made in different areas of ReSharper. They range from speeding up EditorConfig support to decreasing solution loading times. Visit the page dedicated to performance improvements for more details.

C# 7.3 support

ReSharper now fully supports C# 7.3, including all features from the latest language version. New inspections and appropriate quick-fixes are included to make code compatible with C# 7.3. The features include tuple equality, the pattern-based fixed statement, indexing movable fixed buffers, and others.

JSLint, ESLint, and TSLint support

These three static analysis tools have been integrated into JavaScript/TypeScript code analysis. This will provide additional inspections and appropriate quick-fixes. These linters help ensure readability in JavaScript and TypeScript code.

Integrated spell checking with ReSpeller

There is spell-checking functionality out of the box, enabled for most of the supported languages. By default, the spell checker comes with a built-in dictionary for English (US), but more languages can be downloaded.

Blazor support added

Blazor support is experimental as of now, but initial support has been added in ReSharper. For example, code completion includes all possible directives, such as page (routing), inject (service injection), and function (component members).

Navigation improvements

A long-awaited feature has been introduced in the Search & Navigation options: ignored files can be specified by using a mask under Environment | Search & Navigation. Files can be excluded from all search and navigation features based on a file extension or by folder. Some ReSharper features now take local functions into account; they include File Structure, Containing Declaration, Next/Previous Members, and others.

Formatter engine updated

Comments that override formatter settings can now be generated automatically. Improvements have been made to the presentation of formatting rules that come from a StyleCop config file.

Refactorings UI update

Many ReSharper refactorings have been moved to the new presentation framework. This will yield many benefits in the near future thanks to a unified control behavior for ReSharper and Rider. Visible UI changes are code completion under Change Signature and a better presentation for Extract Method.

Other features

Fix-in-scope quick-fixes now have more granular fixing scopes. The code style for built-in types has been improved. There's a new option to execute BeforeBuild and AfterBuild targets for skipped projects in ReSharper Build. A new inspection was also added to highlight misplaced text in XAML.

For more details, visit the JetBrains page.

Visual Studio Code July 2018 release, version 1.26 is out!
Microsoft releases the Python Language Server in Visual Studio
Visual Studio 2019: New features you should expect to see


Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Amrata Joshi
03 Jun 2019
2 min read
Last week, the team at WebKit announced that Safari Technology Preview release 83 is now available for macOS Mojave and macOS High Sierra. Safari Technology Preview is a version of Safari for macOS that includes an in-development version of the WebKit browser engine.

What's new in Safari Technology Preview release 83?

Web authentication

This release comes with web authentication enabled by default on macOS. Web authentication has been changed to cancel a pending request when a new request is made, and to return InvalidStateError to sites whenever authenticators return such an error.

Pointer events

The issue with the isPrimary property of pointercancel events has been fixed, as has the issue with calling preventDefault() on pointerdown.

Rendering

The team has implemented backing-sharing in compositing layers, allowing overlap layers to paint into the backing store of another layer. The team has also fixed the rendering of backing-sharing layers with transforms, and the issue with layer-related flashing with composited overflow: scroll has been fixed.

CSS

In this release, "clearfix" with display: flow-root has been implemented, as have page-break-* and -webkit-column-break-*. The issue with font-optical-sizing applying the wrong variation value has been fixed. CSS grid support has also been updated.

WebRTC

This release now allows sequential playback of media files. The issue with video streams freezing has also been fixed.

Major bug fixes

In this release, the CPU timeline and memory timeline bars have been fixed, as have the colors in the network table waterfall container. The issue with context menu items in the DOM tree has been fixed.

To know more about this news, check out the release notes.

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled
Safari Technology Preview 71 releases with improvements in Dark Mode, Web Inspector, WebRTC, and more!
Is it time to ditch Chrome? Ad blocking extensions will now only be for enterprise users

GNU Bison 3.3 released with major bug fixes, yyrhs and yyphrs tables, token constructors and more

Amrata Joshi
28 Jan 2019
4 min read
On Saturday, the team at Bison announced the stable release of Bison 3.3, a general-purpose parser generator. Bison 3.3 comes with yyrhs and yyphrs tables, major bug fixes, improved parsers, and much more.

What's new in Bison 3.3

DJGPP support

This release comes with support for DJGPP (DJ's GNU Programming Platform) in Bison, which had been unmaintained and untested for a few years.

Generation of fix-its for IDEs/editors

Bison 3.3 features a new option, -ffixit, which makes Bison generate machine-readable editing instructions to fix issues. It helps in updating deprecated directives and removing duplicates.

Symbol declaration

The symbol declaration syntax was overhauled in previous releases. In Bison 3.3, the %nterm directive is now an officially supported feature.

Bison is now relocatable

Users can now make Bison relocatable by passing '--enable-relocatable' to 'configure'. Users can move or copy the relocatable program to a different location on the file system, and it can also be used through mount points for network sharing. With this release, it is now possible to make symbolic links to the installed and moved programs and invoke them through the symbolic link.

Renamed variables

In Bison 3.3, a few variables, mostly related to parsers in Java, have been renamed for consistency:

- abstract -> api.parser.abstract
- annotations -> api.parser.annotations
- extends -> api.parser.extends
- final -> api.parser.final
- implements -> api.parser.implements
- parser_class_name -> api.parser.class
- public -> api.parser.public
- strictfp -> api.parser.strictfp

%expect and %expect-rr modifiers on individual rules

Users can now document and check which rules participate in shift/reduce and reduce/reduce conflicts. Users can use %expect-rr in a rule for reduce/reduce conflicts in GLR parsers.

C++ parsers

This release comes with C++ parsers that feature symbol constructors and use noexcept/constexpr.

C++ token constructors

Variants and token constructors are enabled in this release. In addition to the type-safe named token constructors (make_ID, make_INT, etc.), this release features constructors for symbol_type.

C++: syntax error exceptions in GLR

In this version of Bison, the glr.cc skeleton now supports syntax_error exceptions thrown from user actions or from the scanner.

More POSIX Yacc compatibility warnings

With this release, non-Yacc directives are now reported with -y or -Wyacc.

yyrhs and yyphrs tables

Since none of the Bison skeletons used the yyrhs and yyphrs tables, they were removed in 2008. These tables are now back, as some users expressed interest in being able to use them in their own skeletons.

Deprecated directives

The %error-verbose directive is deprecated in favor of '%define parse.error verbose', with warnings issued. The '%name-prefix "xx"' directive is deprecated in favor of '%define api.prefix {xx}', with warnings issued.

Deprecated features

The new release replaces deprecated features with their modern spelling, and the grammar files have been updated accordingly. The option -u/--update results in a cleaner grammar file.

Major bug fixes

Previous versions of Bison used to report a single RR conflict instead of two. This was the oldest bug in Bison – at least 31 years old – but it has now been fixed. Earlier, passing invalid arguments to %nterm, for instance character literals, used to result in unclear error messages; this release provides clear error messages.

Users are skeptical that a bug can live on for so long before being addressed. One user commented on HackerNews, "In a thousand years time will archeologists study us through the bugs left behind in Linux 1300.05 and windows (30)95?" Some users don't seem to be happy with the UX of Bison. A comment on HackerNews reads, "A big part of why tools move away from Bison and ANTLR isn't performance, but UX (especially error reporting)." Others are happy with this news and think that Bison makes parsing easy. One of the comments reads, "Congrats though! I love it when these tried-and-true tools continue to perform and improve!"

To know more about Bison 3.3, check out the release notes.

GNU Bison 3.2 got rolled out
GNU ed 1.15 released!
Bash 5.0 is here with new features and improvements


JavaFX 11 to release soon, announces the Gluon team

Natasha Mathur
24 Aug 2018
3 min read
Earlier this week, the Gluon team announced that JavaFX 11 GA will be released in the second half of September, close to the release of Java 11. In the meantime, development has begun on JavaFX 12.

JavaFX is the software platform that allows development of desktop Java apps. It comprises a single codebase for building rich, interactive UIs on many platforms. Users access information from multiple devices, so having a single codebase is cost-effective. Single codebases are also easy to maintain and interact well with enterprise and cloud services.

It was announced back in March that the JavaFX framework would be offered as a separate component and would no longer be a part of the Java SDK. Ever since then, JavaFX has been under development by the community as a stand-alone project called OpenJFX, with multiple new developers joining in. As mentioned in the official Gluon blog post, the reason new developers are contributing to JavaFX 11 is that GitHub has made it easier for them to get started: all they have to do now is "sign the contributor agreement, commit the code -- pushed upstream to the official OpenJFX repository on the OpenJDK infrastructure".

JavaFX 11 is the first release under the umbrella of the OpenJFX open project. Johan Vos, Co-CTO of Gluon, is also co-lead of the OpenJFX project and one of the driving forces behind the advancement of JavaFX. A JavaFX 11 stabilization repository has been created, which will only be used for fixing blocking issues. Gluon will be handling the release of JavaFX 11. In addition, the Gluon team has increased its investment in OpenJFX and is constantly working on its code.

Development on JavaFX 12 is currently ongoing, and the Gluon community is keen on following the same core principles: release often, and include only the features that are ready. If a feature is not ready for a particular release, it can be made available in the next release cycle, six months away. Keeping in mind that not all developers are interested in changing versions every six months, Gluon offers JavaFX Enterprise Support, under which a Long Term Support version of JavaFX 11 is maintained. Subscribers have access to builds with fixes backported to JavaFX 11. This is an attempt to make sure that developers are always using "the latest, feature-rich, stable, well-tested code in their projects. They don't have to wait years for a feature or bug fix to be in a released version. It also allows the OpenJFX developers to work on future versions, and to include new technologies and ideas into the JavaFX code", says the Gluon team.

For more information, check out the official blog post.

State of OpenJDK: Past, Present and Future with Oracle
NVIDIA open sources its material definition language, MDL SDK
Unit testing with Java frameworks: JUnit and TestNG [Tutorial]