
Tech News - Application Development

279 Articles

JUnit 5.4 released with an aggregate artifact for reducing your Maven and Gradle files

Bhagyashree R
08 Mar 2019
2 min read
Last month, the team behind the JUnit framework announced the release of JUnit 5.4. This release allows ordering extensions and test case execution, provides an aggregate artifact for slimming your Maven and Gradle files, and more.

Some new features in JUnit 5.4

Ordering test case execution

JUnit 5.4 allows you to explicitly define a test execution order. To enable test ordering, you annotate the test class with '@TestMethodOrder' and specify an ordering type: Alphanumeric, OrderAnnotation, or Random. Alphanumeric orders test execution based on the method names of the test cases. For a custom-defined execution order, you can use the OrderAnnotation type. To order test cases pseudo-randomly, you can use the Random type.

Extension ordering

With this release, you can order not only test case execution but also how programmatically registered extensions are executed. These extensions are registered with @RegisterExtension. You can use this feature in cases where the setup/teardown behavior of a test is complex and spans separate domains, for instance, when you are testing how a cache and a database are used together.

Aggregate artifact

Previously, a large number of dependencies were required when using JUnit 5. With this release, the team has changed this by providing the 'junit-jupiter' aggregate artifact, which includes 'junit-jupiter-api' and 'junit-jupiter-params'. This artifact collectively covers most of the dependencies you will need when using JUnit 5, and it helps reduce the size of the Maven and Gradle files of projects that use it.

TempDir

In JUnit 5.4, the team has added @TempDir, originally part of the JUnit-Pioneer third-party library, as a native feature of the JUnit framework. You can use the @TempDir extension to handle the creation and cleanup of temporary files.

TestKit

With TestKit, you can perform a meta-analysis on a test suite. It allows you to check the number of executed, passed, failed, and skipped tests, as well as a few other behaviors.

To read the full list of updates in JUnit 5.4, check out the official announcement.

Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!
JUnit 5.3 brings console output capture, assertThrow enhancements and parallel test execution
Unit testing with Java frameworks: JUnit and TestNG [Tutorial]

Telegram introduces new features: Slow mode switch, custom titles, comments widget and much more!

Amrata Joshi
12 Aug 2019
3 min read
Last week, the team at Telegram, the messaging app, introduced new features for group admins and users. These features include a Slow Mode switch, custom titles, new features for videos, and much more.

What's new in Telegram?

Admins get more authority to manage the group

Slow Mode switch

The Slow Mode feature allows a group admin to control how often members can send messages in the group. Once the admin enables Slow Mode, users will be able to send one message per the interval the admin chooses, and a timer will show them how long they need to wait before sending their next message. This feature is intended to make group conversations more orderly and to raise the value of each individual message. The official post suggests that admins "Keep it (Slow Mode feature) on permanently, or toggle as necessary to throttle rush hour traffic."

Custom titles

Group owners can now set custom titles for admins, such as 'Meme Queen', 'Spam Hammer' or 'El Duderino'. These custom titles are shown in place of the default admin labels. To add a custom title, edit an admin's rights in Group Settings.

Silent messages

Telegram now plans to bring more peace of mind to its users with a feature that lets them message friends without any sound: users just have to hold the send button to have any message or media delivered silently.

New features for videos

Videos shared on Telegram now show thumbnail previews as users scroll through them, to help them find the moment they were looking for. If users add a timestamp like 0:45 to a video caption, it is automatically highlighted as a link, and tapping the timestamp plays the video from that spot.

Comments widget

The team has also come up with a new tool called Comments.App for commenting on channel posts. With the help of the comments widget, users can log in with just two taps and comment with text and photos, as well as like, dislike, and reply to comments from others.

Some users are excited about this news and prefer Telegram over WhatsApp, though they would like end-to-end encryption enabled by default. A user commented on Hacker News, "I really like Telegram. Only end-to-end encryption by default and in group chats would make it perfect."

To know more about this news, check out the official post by Telegram.

Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests
Hacker destroys Iranian cyber-espionage data; leaks source code of APT34's hacking tools on Telegram
Trick or a treat: Telegram announces its new 'delete feature' that deletes messages on both the ends

Project management platform Clubhouse announces ‘Free Plan’ for up to 10 users and a new documentation tool

Sugandha Lahoti
12 Sep 2019
3 min read
Clubhouse, a popular project management platform, has announced a free plan for smaller teams and a new collaborative documentation tool called 'Clubhouse Write'. What is interesting is that although there are a number of competitors in the project management space, including the popular Atlassian Jira, few if any offer their product for free.

Clubhouse provides a 'Free Plan' for smaller teams of up to 10 users

This no-cost option gives teams of up to 10 users unlimited access to Clubhouse's core features, such as Stories, Epics, and Milestones. These features show how the everyday tasks of a team contribute to a larger company goal. Additional support and security features are available in the Standard and Enterprise Plans for larger teams.

All current Small Plan customers with 10 users or fewer will be automatically transitioned to the Free Plan. Organizations that previously paid an annual fee and have 10 or fewer users will be refunded the difference in price. Once a team adds an 11th user, it will transition to the current Standard Plan. Although the Free Plan does not support Observers, teams that have Observers on a current Small Plan will be allowed to keep them.

Users were quite excited about this new Free Plan, commenting about it on social media platforms. "You guys rock! One less expense to worry about it until I hit my stride. I'll gladly be paying for 11+ members when I can reach my goals," reads one comment. Another says, "Thanks! I LOVE CLUBHOUSE! I would still gladly pay $10/mth maybe you should have made free for teams up to 5, but then kept small for 5-10 :)"

Clubhouse Write, a collaborative documentation tool

Along with the Free Plan announcement, Clubhouse has introduced Write, a real-time collaborative documentation tool. The product is currently in beta and will "make it easier for your software team to document, collaborate, and ideate together." Software development teams will be able to collaborate on, organize, and comment on project documentation in real time, for inter-team communication. Development teams can organize their Docs into multiple Collections. They can also choose to keep a Doc private or publish it to the whole Workspace, and they will be notified when there are new comments on followed Docs.

In an interview with TechCrunch, Clubhouse discussed how the offerings provide key competitive positioning against Atlassian's project management tool Jira; Clubhouse Write will compete head-on with Atlassian's team collaboration product Confluence.

Twitterati were also quite excited about this new development.
https://twitter.com/kkukshtel/status/1171829400951824384
https://twitter.com/kieranmoolchan/status/1171450725877997568

Other interesting news in Tech

The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE.
The Continuous Intelligence report by Sumo Logic highlights the rise of multi-cloud adoption and open source technologies like Kubernetes.
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+, iPad, and more.

Yarn releases a roadmap for Yarn v2 and beyond; moves from Flow to TypeScript

Sugandha Lahoti
25 Jan 2019
3 min read
Yesterday, Maël Nison, a maintainer of Yarn, opened a GitHub thread on the Yarn repository describing the roadmap for the next major Yarn release, codenamed Berry. The roadmap contains significant changes planned for Yarn's design. Nison reiterates that Yarn's philosophy will continue to be based on three important principles:

Developing JavaScript projects in Yarn should have totally predictable and reproducible behavior.
Developing JavaScript projects should be easy.
Contributing to the Yarn ecosystem should be simple.

Long-term and short-term goals for Yarn in 2019

Yarn will be rewritten in TypeScript: the entire codebase will be ported from Flow to TypeScript. This is intended to make it easier for third-party contributors to help maintain Yarn.
Yarn will become a development-first tool, meaning package managers will no longer be treated as tools that run on your production servers.
Yarn will become API first and CLI second; its internal components will be split into modular entities.
Yarn will be designed so that each component of its pipeline can be switched out to adapt to different install targets. Yarn will now be a package manager platform as much as a package manager.
Overall compatibility will be preserved when possible. The caveat: installs will now use Plug'n'Play by default.

Major changes and new features

The lockfile format will become a strict subset of YAML.
Yarn will drop support for both Node 4 and Node 6.
The log system will be renovated, with diagnostic error codes implemented along the lines of TypeScript's.
Some features currently in the core will be moved into contrib plugins.
The cache file format will switch from Tar to Zip, which offers better characteristics for random access.
Nested workspaces will be supported out of the box.
Running yarn ./packages/my-package add foo will run yarn add foo inside my-package.
There will be a new resolution protocol, workspaces:, which allows developers to force the package manager to link a package against a workspace.
yarn constraints is a new command that will allow developers to enforce constraints across workspaces.
The yarn link command will now persist its changes into the package.json files.
Berry will ship with a portable POSIX-like light shell that will be used by default. Scripts will be able to put their arguments anywhere in the command line (and repeat them if needed) using $@.
The cache will become fully atomic, so multiple Yarn instances can run concurrently on the same cache without risking data corruption.

Developers are generally positive about this release, especially pointing out the move from Flow to TypeScript. A Hacker News user states, "Finally one of the biggest news is the switch from Flow to Typescript. I think it's now clear that Facebook is admitting defeat with Flow; it brought a lot of good in the scene but Typescript is a lot more popular and gets overall much better support. Uniting the JS ecosystem around Typescript will be such a big deal." Another comment reads, "The codebase will be ported from Flow to TypeScript. We hope this will help our community ramp up on Yarn, and will help you build awesome new features on top of it. Another major project moving from flow to typescript."

A new repository will be set up in early February, after which Berry will be accessible by running yarn policies set-version berry within a project.
Hadoop 3.2.0 released with support for node attributes in YARN, Hadoop submarine and more
Starting with YARN Basics
Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript

Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more

Amrata Joshi
04 Apr 2019
2 min read
Just two days ago, the team at Fedora announced the release of Fedora 30 Beta, available for testing in its six variants: Workstation, Server, Silverblue, Spins, Labs, and ARM. This release comes with GNOME 3.32, improved performance, and much more.
https://twitter.com/mattdm/status/1113079013696856065

What's new in Fedora 30 Beta?

Desktop environment options

This release features two new options for the desktop environment: DeepinDE, a user-friendly desktop developed by Deepin Technology Co., and Pantheon Desktop, best known from elementary OS and deliberately light on customization.

Improved DNF performance

This release features zchunk, a new compression format designed for highly efficient deltas. All DNF (Dandified YUM) repository metadata is now compressed with zchunk in addition to xz or gzip. When Fedora's metadata is compressed using zchunk, DNF downloads only the differences between earlier copies of the metadata and the current version.

GNOME 3.32

This release ships GNOME 3.32, the latest version of GNOME 3. It features an updated visual style, an improved user interface, new icons, and much more.

Testing needed

Since this is a beta release, users might encounter bugs or find that some features are missing. Users can report issues encountered during testing by contacting the Fedora QA team via the mailing list or in #fedora-qa on Freenode.

Updated packages

This release includes updated versions of many popular packages, including Golang, the GNU C Library, the Bash shell, Python, and Perl.

Major changes

Binary support for deprecated and unsafe functions has been removed from libcrypt.
The Python 2 package has been removed from this release.
Language support groups in the Comps file have been replaced by rich dependencies in the langpacks package.
Obsolete scriptlets have been removed from this release.

Some users are excited about this release, while others are still facing bugs and dependency issues, as can be expected from a beta.
https://twitter.com/YanivKaul/status/1113132353096953857

To know more about this news, check out the official post by Fedora Magazine.

GNOME 3.32 released with fractional scaling, improvements to desktop, web and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support
Fedora 29 released with Modularity, Silverblue, and more

Haiku beta released with package management, a new preflet, WebKit and more

Amrata Joshi
28 Dec 2018
4 min read
On Tuesday, the team at Haiku released the Haiku beta. Haiku is an open-source operating system that specifically targets personal computing; inspired by BeOS, it is fast, simple to use, and easy to learn.

What's new in Haiku?

Package management

This release comes with a complete package management system. Haiku's packages are a special type of compressed filesystem image that are mounted upon installation (and thereafter on each boot) by the packagefs, a kernel component. The /system/ hierarchy in the Haiku beta is now read-only, since it is merely a combination of the currently installed packages at the system level; this ensures that the system files themselves are incorruptible. With this release, it is possible to boot into a previous package state or even blacklist individual files. Since the disk transactions for managing packages are limited, installations and uninstallations are instant. It is possible to manage the installed package set on a non-running Haiku system by mounting its boot disk and manipulating the /system/packages directory and associated configuration files. It is now also possible to switch system repositories from master to r1beta1.

WebPositive upgrades

The system web browser is more stable than before, with YouTube now functioning properly, among other under-the-hood changes. The WebKit work also fixed a large number of bugs in Haiku itself, such as broken stack alignment, various kernel panics in the network stack, bad edge-case handling in app_server's rendering core, missing support for extended transforms and gradients, broken picture-clipping support, and missing POSIX functionality. Haiku WebKit now also uses Haiku's network protocol layer and supports Gopher.

Completely rewritten network preflet

The old network preflet has been replaced with a completely new one, designed from the ground up for ease of use and longevity. The preflet can now manage the network services on the machine, such as OpenSSH and ftpd. It also uses a plugin-based API, so third-party network services (VPNs, web servers, etc.) can integrate with it.

User interface cleanup and live color updates

A lot of miscellaneous cleanups to various parts of the user interface have been made since the last release. Mail and Tracker have both received a significant internal cleanup of their UI code. This release features Haiku-style toolbars and font-size awareness.

Major improvements in Haiku

Media subsystem improvements

The media subsystem features a large number of cleanups to the Media Kit to improve fault tolerance, latency correction, and performance. HTTP and RTSP streaming support is now integrated into the I/O layer of the Media Kit, so live streams can be played in WebPositive via HTML5 audio/video support, or in the native MediaPlayer.

FFmpeg decoder plugin improvements

FFmpeg 4.0 is now used even on GCC2 builds. This release adds support for more audio and video formats, as well as significant performance improvements.

HDA driver improvements

The driver for HDA (High-Definition Audio) chipsets now supports the audio chipsets found in modern x86-based hardware.

RemoteDesktop

Haiku's native RemoteDesktop application has been improved and added to the builds. RemoteDesktop forwards drawing commands from the host system to the client system and does not require any special server; it can easily connect to and run applications on any Haiku system.

SerialConnect

This release comes with SerialConnect, a simple and straightforward graphical interface to serial ports. It supports arbitrary baud rates and certain extended features such as XMODEM file transfers.

Built-in Debugger is now the default

Haiku's built-in Debugger has replaced GDB as the default debugger. It also features a command-line interface for those who prefer one, and it services the system-wide crash dialogs.

launch_daemon

The launch_daemon now includes support for service dependency tracking, lazy daemon startup, and automatic restart of daemons upon crashes.

Updated filesystem drivers

The NFSv4 client, a GSoC project, is now included by default. Haiku's userlandfs, which supports running filesystem drivers in userland, is now shipped along with Haiku itself; it supports running BeOS filesystem drivers, which are not supported in kernel mode.

To know more about this release, check out Haiku's release notes.

The Haiku operating system has released R1/beta1
Haiku, the open source BeOS clone, to release in beta after 17 years of development
KDevelop 5.3 released with new analyzer plugin and improved language support

Microsoft Bling introduces Fire: a Finite state machine and regular expression manipulation library

Natasha Mathur
18 Apr 2019
2 min read
A Microsoft team named Bling (Beyond Language Understanding) yesterday announced Fire, a finite state machine and regular expression manipulation library. Fire was developed for use in various linguistic operations inside Bing, including tokenization, multi-word expression matching, unknown-word guessing, and stemming/lemmatization, among others.

Fire includes a tokenizer designed for fast, high-quality tokenization of natural language text. Fire's tokenization follows the tokenization logic of NLTK (Natural Language Toolkit), except that hyphenated words are split and a few errors are fixed. When compared with other popular NLP libraries, Bling Fire achieves 10x faster speed on tokenization tasks.

The latest release of the Bling Fire model supports most languages, including East Asian ones (Chinese Simplified and Traditional, Japanese, Korean, Thai). The tokenizer's high-level API is friendly to use from languages such as Python, Perl, C#, and Java. The tokenizer has also been designed so that it requires zero configuration, no initialization, and no additional files. The main reason the tokenizer is so fast is that it uses deterministic finite state machines underneath.

To use the Bling Fire library and the finite state machine manipulation tools, the project can be built on Windows/Linux using CMake, which allows you to create your own tokenization/segmentation, stemming, and so on. To use the Bling Fire library in Python, users can install the release with:

pip install blingfire

For more information, check out Bling Fire on GitHub.

Microsoft reveals certain Outlook.com user accounts were hacked for months
Microsoft makes the first preview builds of Chromium-based Edge available for testing
Microsoft announces the general availability of Live Share and brings it to Visual Studio 2019

GitLab 11.4 is here with merge request reviews and many more features

Prasad Ramesh
23 Oct 2018
3 min read
GitLab 11.4 was released yesterday with new features like merge request reviews, feature flags, and many more.

Merge request reviews in GitLab 11.4

This feature allows a reviewer to draft as many comments in a merge request as they prefer, ensure consistency, and then submit them all as a single action. A reviewer can spread their work over many sessions, as the drafts are saved to GitLab. The draft comments appear as normal individual comments once they are submitted. This gives individual team members flexibility: they can review code the way they want while remaining compatible with the entire team.

Create and toggle feature flags for applications

This alpha feature gives users the ability to create and manage feature flags for their software directly in the product. It is as simple as creating a new feature flag and validating it using simple API instructions. You then have the ability to control the behavior of the software in the field via the feature flag within GitLab. Feature flags offer a feature toggle system for applications (a brief client-side sketch follows at the end of this article).

File tree for browsing merge request diffs

The file tree summarizes both the structure and the size of the change. It is similar to diff-stats, providing an overview of the change and thereby improving navigation between diffs. Search allows reviewers to limit code review to a subset of files, which simplifies reviews by specialists.

Suggest code owners as merge request approvers

It is not always obvious who is the best person to review changes. Code owners are now shown as suggested approvers when a merge request is created or edited, making it easy to assign the right person.

New user profile page overview

GitLab 11.4 introduces a redesigned profile page overview. It shows your activity via the familiar but shortened contribution graph, and it displays your latest activities and most relevant personal GitLab projects.

Set and show user status message within the user menu

Setting your status is even simpler with GitLab 11.4. A new "Set status" item in the user menu provides a fresh modal allowing users to set and clear their status right in context. In addition, the status you set is shown in your user menu, on top of your full name and username.

There are some more features:

Move the ability to use includes in .gitlab-ci.yml from Starter to Core
Run all jobs only/except for modifications on a path/file
Add timed incremental rollouts to Auto DevOps
Support Kubernetes RBAC for GitLab managed apps
Auto DevOps support for RBAC
Support PostgreSQL DB operations for Auto DevOps
Other improvements for searching projects, UX improvements, and Geo improvements

For a complete list of features, visit the GitLab website.
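GitLab's feature flags are designed to be read with Unleash-compatible clients, so an application can consume them with a standard Unleash SDK. Here is a minimal, hypothetical Node/TypeScript sketch using the 'unleash-client' npm package — the URL, instance ID, and flag name below are placeholders you would copy from your project's feature flags configuration page, not GitLab's documented values:

// feature-flags.ts – a sketch of reading a GitLab-managed feature flag
import { initialize } from 'unleash-client';

const unleash = initialize({
  url: 'https://gitlab.example.com/api/v4/feature_flags/unleash/42', // placeholder project endpoint
  appName: 'production',                 // the environment this client reports as
  instanceId: 'placeholder-instance-id', // placeholder instance ID
});

// Wait until the client has fetched the current flag definitions.
unleash.on('ready', () => {
  // Toggle a code path in the field without redeploying the application.
  if (unleash.isEnabled('new-search-ui')) { // placeholder flag name
    console.log('New search UI enabled');
  } else {
    console.log('Falling back to the old search UI');
  }
});

The client polls the endpoint in the background, so toggling the flag in GitLab's UI changes the application's behavior on the next refresh without a restart.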
GitLab 11.3 released with support for Maven repositories, protected environments and more
GitLab raises $100 million, Alphabet backs it to surpass Microsoft’s GitHub
GitLab is moving from Azure to Google Cloud in July


ML.NET 1.0 RC releases with support for TensorFlow models and much more!

Amrata Joshi
08 Apr 2019
2 min read
Last week, the team behind ML.NET announced the release of ML.NET 1.0 RC (Release Candidate), an open-source and cross-platform machine learning framework for .NET developers. ML.NET 1.0 RC is the last preview release before the final ML.NET 1.0 RTM (Release to Manufacturing) ships this year. Developers can use ML.NET for sentiment analysis, product recommendation, spam detection, image classification, and much more.

What's new in ML.NET 1.0 RC?

Preview packages

According to the Microsoft blog, heading into ML.NET 1.0, most of the functionality in ML.NET (around 95%) is going to be released as stable (version 1.0). The packages that will remain in a preview state are the TensorFlow, ONNX, TimeSeries, and recommendation components.

IDataView moved to Microsoft.ML namespace

In this release, IDataView has been moved back into the Microsoft.ML namespace, based on feedback the team received.

Support for TensorFlow models

This release comes with added support for TensorFlow models; TensorFlow is an open-source machine learning framework used for deep learning projects. The issues in ML.NET version 0.11 related to TensorFlow models have been fixed in this release.

Major changes in ML.NET 1.0 RC

The 'Data' namespace has been removed in this release; code should reference Microsoft.Data.DataView instead (via a using directive).
A NuGet package has been added for Microsoft.ML.FastTree.
PoissonRegression has been renamed to LbfgsPoissonRegression.

To know more about this release, check out the official announcement.

.NET team announces ML.NET 0.6
Qml.Net: A new C# library for cross-platform .NET GUI development
ML.NET 0.4 is here with support for SymSGD, F#, and word embeddings transform!

GitLab 11.10 releases with enhanced operations dashboard, pipelines for merged results and much more!

Amrata Joshi
26 Apr 2019
3 min read
Yesterday, the team at GitLab released GitLab 11.10, a web-based DevOps lifecycle tool. This release comes with new features including pipelines on the operations dashboard, pipelines for merged results, and much more.

What's new in GitLab 11.10?

Enhanced operations dashboard

GitLab 11.10 enhances the operations dashboard with an overview of pipeline status. This is useful when looking at a single project's pipeline as well as for multi-project pipelines. Users now get instant, at-a-glance visibility into the health of all of their pipelines from the operations dashboard.

Run pipelines against merged results

Users can now run pipelines against the merged result prior to merging. This lets users catch errors quickly, for much faster resolution of pipeline failures and more efficient usage of GitLab Runners. The merge request pipeline automatically creates a new ref containing the combined merge result of the source and target branches, so the pipeline validates that combined result.

Scoped labels

Scoped labels allow teams to apply custom workflow states as labels on issues, merge requests, and epics. These labels are configured using a special double colon syntax in the label title (for example, workflow::in-review).

Some users think the new updates won't be as successful as the team expects, and that there is still a long way to go for GitLab. A user commented on Hacker News, "Can't help but feel that their focus on moving all operation insights into gitlab itself will not be as successful as they want it to be (as far as I read, their goal was to replace their operations and monitoring tools with gitlab itself[1]). I've worked with the ultimate edition for a year and the kubernetes integration is nowhere close to the insight you would get from google, amazon or azure in terms of insight and integration with ops-land. I wish all these hours were spent on improving the developer lifecycle instead."

Others are happy with this news and think GitLab has progressed well. A comment reads, "GitLab has really come a long way in the past few years. The days of being a github-alike are long gone. Very happy to see them continue to find success."

To know more about this news, check out GitLab's post.

GitLab considers moving to a single Rails codebase by combining the two existing repositories
Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more

Microsoft Connect(); 2018: .NET foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows forms open sourced

Prasad Ramesh
05 Dec 2018
4 min read
Yesterday Microsoft made a handful of announcements at Connect(); 2018: membership to the .NET Foundation is now open, .NET Core 2.2 is released, .NET Core 3 Preview 1 is released, and WPF, Windows Forms, and WinUI are now open source.

Membership is now open to the .NET Foundation

Founded in 2014, the .NET Foundation was formed to foster .NET open source development and collaboration. Microsoft has now opened membership to the community. It is also expanding the number of board members from three to seven; only one of the seats will be occupied by a Microsoft employee, with the remainder elected from the open source community. The board elections will commence in January 2019, and any individual who has contributed to a .NET Foundation open source project is eligible to stand. The same criterion applies to becoming a member, and the election will be held every year. You can apply for a membership on their website. To know more about membership and eligibility, head to the Microsoft Blog.

New features in .NET Core 2.2

.NET Core 2.2 comes with diagnostic improvements to the runtime, ARM32 support for Windows, and Azure Active Directory support for SqlClient.

Tiered compilation

Tiered compilation enables the runtime to use the Just-In-Time (JIT) compiler more adaptively, giving better performance at startup while maximizing throughput. It is an opt-in option in 2.2 and is enabled by default in .NET Core 3.0.

Runtime events

With .NET Core 2.2, CoreCLR events can be consumed using the EventListener class. These CoreCLR events describe the behavior of the GC, JIT, ThreadPool, and interop. They are the same events exposed as part of the CoreCLR ETW provider on Windows. This allows applications to consume these events directly or to use a transport mechanism to send them to a telemetry aggregation service.

Support for AccessToken in SqlConnection

Setting the AccessToken property to authenticate SQL Server connections is now supported in the ADO.NET provider for SQL Server, SqlClient. This is done using Azure Active Directory. To use the feature, the access token value can be obtained using the Active Directory Authentication Library for .NET, which is available in the Microsoft.IdentityModel.Clients.ActiveDirectory NuGet package.

Injecting code prior to Main

.NET Core 2.2 enables injecting code prior to running an application's Main method via a startup hook. Startup hooks allow a host to customize application behavior after it has been deployed.

Windows ARM32

Windows ARM32 is now supported in .NET Core 2.2, just like Linux ARM32, which was added in .NET Core 2.1. A bug prevented publishing .NET Core builds for Windows ARM32 in this release; these builds will be available with .NET Core 2.2.1 in January 2019.

.NET Core 3 Preview 1

.NET Core 3 Preview 1 is the first public release of .NET Core 3, and Visual Studio 2019 Preview 1 supports development with it. .NET Core 3 is a major update that adds support for building Windows desktop applications using Windows Presentation Foundation (WPF), Windows Forms, and Entity Framework 6 (EF6). Read more about the preview on the .NET Blog.

WPF, Windows Forms, and WinUI are now open source

After .NET Core went open source in 2014, it saw many contributions from the community. Microsoft is now open sourcing WPF, Windows Forms, and WinUI. Some code is available on GitHub now, and more will be added over the next few months; repositories for WPF and WinUI are ready too. The WPF and Windows Forms projects are under the .NET Foundation. This happened at the Connect(); conference yesterday, when Microsoft employees merged the first two community pull requests on stage. This is another step from Microsoft towards open source, strongly signaling the seriousness of its open source commitment.

Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Microsoft becomes the world’s most valuable public company, moves ahead of Apple
Microsoft announces official support for Windows 10 to build 64-bit ARM apps

Facebook open sources Magma, a software platform for deploying mobile networks

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the team at Facebook open-sourced Magma, a software platform that helps operators deploy mobile networks easily. The platform comes with a software-centric distributed mobile packet core and tools for automating network management.

Rather than replacing existing EPC (Evolved Packet Core) deployments in large networks, Magma extends existing network topologies to the edge: rural deployments, private LTE (Long Term Evolution) networks, or wireless enterprise deployments. Magma enables new types of network archetypes where continuous integration of software components and incremental upgrade cycles are needed. It also allows authentication and integration with existing LTE EPCs, and it reduces the complexity of operating mobile networks by automating network operations like software updates, element configuration, and device provisioning.

Magma's centralized cloud-based controller can run in a public or private cloud environment. Its automated provisioning infrastructure makes deploying LTE as easy as deploying a WiFi access point. The platform currently works with existing LTE base stations and can interoperate with traditional mobile cores to extend services to new areas.

According to a few users, Facebook internally considers the social network, not its technology, to be its major asset; any investment in open technologies, or in internal technology that makes the network effect stronger, is considered important. Some users discussed Facebook's revenue strategies in the Hacker News thread. A comment on Hacker News reads, "I noticed that FB and mobile phone companies offering 'free Facebook' are all in a borderline antagonistic relationship because messenger kills their revenue, and they want to bill FB an arm and a leg for that."

To know more about this news in detail, check out Facebook's blog post.

Facebook open sources SPARTA to simplify abstract interpretation
Facebook open sources the ELF OpenGo project and retrains the model using reinforcement learning
Facebook’s AI Chief at ISSCC talks about the future of deep learning hardware

GitHub launches draft pull requests

Amrata Joshi
15 Feb 2019
3 min read
Yesterday, GitHub launched a new feature named draft pull requests, which allows users to open a pull request before they are done implementing all the code changes and then start a conversation with their collaborators once the code is ready. Even if a user ends up closing the pull request or refactoring the code entirely, the pull request still serves as a place for collaboration. And if a user wants to signal that a pull request is the start of a conversation and the code isn't ready yet, they can still let people check it out locally and give feedback.

The draft state tags a pull request as still in progress and lets the author notify the team once it's ready. This also helps with pull requests that would otherwise be prematurely closed, or for times when users start working on a new feature and forget to send a PR.

When a user opens a pull request, a drop-down arrow appears next to the 'Create pull request' button. Users can toggle the drop-down arrow to create a draft instead. A draft pull request is styled differently to indicate its draft state. Users can change the status to 'Ready for review' near the bottom of the pull request to remove the draft state and allow merging according to the project's settings. If a repository has a CODEOWNERS file, a draft pull request will suppress notifications to those reviewers until it is marked as ready for review.

Users have given mixed reviews to this news. According to some, the new feature will save a lot of time. One user said, "It saves a lot of wasted effort by exploring the problem domain collaboratively before development begins." Others find the idea less effective. Another comment reads, "Someone suggested this on my team. I personally don’t like the idea because these policies often times lead to bureaucracy and then nothing getting released. It is not that I am against thinking ahead but if I have to in details explain everything I do, then more time is spent documenting than actually creating which is the part I enjoy."

To know more about this news, check out GitHub's official post.

Western Digital RISC-V SweRV Core is now on GitHub
GitHub Octoverse: top machine learning packages, languages, and projects of 2018
Github wants to improve Open Source sustainability; invites maintainers to talk about their OSS challenges

Rails 6 will be shipping source maps by default in production

Amrata Joshi
30 Jan 2019
3 min read
The developer community surely owes a debt to the innovation of 'View Source', which has made things much easier for coders. Now David Heinemeier Hansson, the creator of Ruby on Rails, has made a move to make programmers' lives easier by announcing that Rails 6 will be shipping source maps by default in production.

Source maps let developers view code as it was written by its author, with comments, understandable variable names, and all the other help that makes it possible for programmers to understand the code. They are sent to users over the wire only when the dev tools are open in the browser. Source maps, so far, have been seen merely as a local development tool and not something to ship to production; being able to debug live would make things easier for developers.

According to Hansson's post, all the JavaScript that runs Basecamp 3 under Webpack now has source maps. He said, "We’re still looking into what it’ll take to get source maps for the parts that were written for the asset pipeline using Sprockets, but all our Stimulus controllers are compiled and bundled using Webpack, and now they’re easy to read and learn from."

Hansson is also a partner at the web-based software development firm Basecamp. He said that 90% of all the code that runs Basecamp is open source, in the form of Ruby on Rails, Turbolinks, and Stimulus. He further added, "I like to think of Basecamp as a teaching hospital. The care of our users is our first priority, but it’s not the only one. We also take care of the staff running the place, and we try to teach and spread everything we learn. Pledging to protect View Source fits right in with that."

Sam Saffron, the co-founder of Discourse, said, "I just wanted to voice my support for bringing this back by @dhh. We have been using source maps at Discourse now for 4 or so years, including maps for both JS and SCSS in production, default on." According to him, one of the important reasons to enable source maps in production is that JS frameworks often have "production" and "development" modes. He said, "I have seen many cases over the years where a particular issue only happens in production and does not happen in development. Being able to debug properly in production is a huge life saver. Source maps are not the panacea as they still have some limitations around local var unmangling and other edge cases, but they are 100 times better than working through obfuscated minified code with magic formatting enabled."

According to Saffron, one performance concern is the cost of precompilation: the cost was minimal at Discourse, but the cost for a large number of source maps is unpredictable. Users had discussed this issue on a GitHub thread two years ago; according to most of them, the precompile build times would be reduced. One user commented on GitHub, "well-generated source maps can actually make it very easy to rip off someone else's source." Another comment reads, "Source maps are super useful for error reporting, as well as for analyzing bundle size from dependencies. Whether one chooses to deploy them or not is their choice, but producing them is useful."
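Since Rails 6 compiles its JavaScript through Webpack (via Webpacker), the underlying mechanism is ultimately the bundler's source map option. As a point of reference, here is a minimal, hypothetical Webpack configuration sketch — not Rails' or Basecamp's actual setup; the entry and output paths are placeholders:

// webpack.config.ts – a minimal sketch for emitting production source maps
import * as path from 'path';
import type { Configuration } from 'webpack';

const config: Configuration = {
  mode: 'production',
  entry: './app/javascript/index.js',                // placeholder entry point
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'public/assets'),  // placeholder output path
  },
  // 'source-map' emits a separate .map file next to the minified bundle;
  // browsers fetch it only when the dev tools are open.
  devtool: 'source-map',
};

export default config;

Browsers discover the map through the //# sourceMappingURL= comment appended to the bundle, which is why shipping maps adds no download cost for regular visitors.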
Ruby on Rails 6.0 Beta 1 brings new frameworks, multiple DBs, and parallel testing
GitHub addresses technical debt, now runs on Rails 5.2.1
Introducing Web Application Development in Rails


Exploring Forms in Angular – types, benefits and differences

Expert Network
21 Jul 2021
11 min read
While developing a web application, or setting dynamic pages and meta tags, we need to deal with multiple input elements and value types; such limitations could seriously hinder our work – in terms of either data flow control, data validation, or user experience.

This article is an excerpt from the book ASP.NET Core 5 and Angular, Fourth Edition by Valerio De Sanctis – a revised edition of a bestseller that includes coverage of the Angular routing module, an expanded discussion of the Angular CLI, and detailed instructions for deploying apps on Azure, as well as on both Windows and Linux.

Sure, we could easily work around most of the issues by implementing some custom methods within our form-based components; we could throw in some checks such as isValid(), isNumber(), and so on here and there, and then hook them up to our template syntax and show/hide the validation messages with the help of structural directives such as *ngIf, *ngFor, and the like. However, that would be a horrible way to address our problem; we didn't choose a feature-rich client-side framework such as Angular to work that way.

Luckily enough, we have no reason to do that, since Angular provides us with a couple of alternative strategies to deal with these common form-related scenarios:

Template-Driven Forms
Model-Driven Forms, also known as Reactive Forms

Both are highly coupled with the framework and thus extremely viable; they both belong to the @angular/forms library and share a common set of form control classes. However, they also have their own specific sets of features, along with their pros and cons, which could ultimately lead us to choose one of them. Let's try to quickly summarize these differences.

Template-Driven Forms

If you've come from AngularJS, there's a high chance that the Template-Driven approach will ring a bell or two. As the name implies, Template-Driven Forms host most of the logic in the template code; working with a Template-Driven Form means:

Building the form in the .html template file
Binding data to the various input fields using ngModel instances
Using a dedicated ngForm object related to the whole form and containing all the inputs, with each being accessible through its name

These things need to be done to perform the required validity checks. To understand this, here's what a Template-Driven Form looks like:

<form novalidate autocomplete="off" #form="ngForm" (ngSubmit)="onSubmit(form)">
  <input type="text" name="name" value="" required
    placeholder="Insert the city name..."
    [(ngModel)]="city.Name" #name="ngModel" />
  <span *ngIf="(name.touched || name.dirty) && name.errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

Here, we can access any element, including the form itself, with some convenient aliases – the attributes with the # sign – and check their current states to create our own validation workflow. These states are provided by the framework and change in real time, depending on various things: touched, for example, becomes true when the control has been visited at least once; dirty, which is the opposite of pristine, means that the control value has changed; and so on.
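For context, here is a minimal, hypothetical component class that could back the preceding template – it is not part of the book excerpt; the City interface, selector, and file names are assumptions, and FormsModule is assumed to be imported in the hosting module:

// city-form.component.ts – a sketch of the component behind the Template-Driven form
import { Component } from '@angular/core';
import { NgForm } from '@angular/forms';

// Hypothetical shape of the data model bound via [(ngModel)]="city.Name"
interface City {
  Name: string;
}

@Component({
  selector: 'app-city-form',
  templateUrl: './city-form.component.html'
})
export class CityFormComponent {
  // The DataModel instance the template's two-way binding reads from and writes into
  city: City = { Name: '' };

  onSubmit(form: NgForm) {
    // form.value collects the current values of every named input in the form
    console.log('Submitted city:', form.value);
  }
}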
We used both touched and dirty in the template above because we want our validation message to be shown only if the user moves their focus to the <input name="name"> and then goes away, leaving it blank by either deleting its value or never setting it.

These are Template-Driven Forms in a nutshell; now that we've had an overall look at them, let's try to summarize the pros and cons of this approach. Here are the main advantages of Template-Driven Forms:

Template-Driven Forms are very easy to write. We can recycle most of our HTML knowledge (assuming that we have any). On top of that, if we come from AngularJS, we already know how well we can make them work once we've mastered the technique.
They are rather easy to read and understand, at least from an HTML point of view; we have a plain, understandable HTML structure containing all the input fields and validators, one after another. Each element has a name, a two-way binding with the underlying ngModel, and (possibly) Template-Driven logic built upon aliases hooked to other elements that we can also see, or to the form itself.

Here are their weaknesses:

Template-Driven Forms require a lot of HTML code, which can be rather difficult to maintain and is generally more error-prone than pure TypeScript.
For the same reason, these forms cannot be unit tested. We have no way to test their validators or to ensure that the logic we implemented will work, other than running an end-to-end test with our browser, which is hardly ideal for complex forms.
Their readability quickly drops as we add more and more validators and input tags. Keeping all the logic within the template might be fine for small forms, but it does not scale well when dealing with complex data items.

Ultimately, we can say that Template-Driven Forms might be the way to go when we need to build small forms with simple data validation rules, where we can benefit most from their simplicity. On top of that, they are quite like the typical HTML code we're already used to (assuming that we have a plain HTML development background); we just need to learn how to decorate the standard <form> and <input> elements with aliases and throw in some validators handled by structural directives such as the ones we've already seen, and we'll be set in (almost) no time.

For additional information on Template-Driven Forms, we highly recommend reading the official Angular documentation at: https://angular.io/guide/forms

That being said, the lack of unit testing, the HTML code bloat that they eventually produce, and the scaling difficulties will lead us toward an alternative approach for any non-trivial form.

Model-Driven/Reactive Forms

The Model-Driven approach was specifically added in Angular 2+ to address the known limitations of Template-Driven Forms. The forms implemented with this alternative method are known as Model-Driven Forms or Reactive Forms – the two terms mean the exact same thing.

The main difference here is that (almost) nothing happens in the template, which acts as a mere reference to a more complex TypeScript object that gets defined, instantiated, and configured programmatically within the component class: the form model.

To understand the overall concept, let's try to rewrite the previous form in a Model-Driven/Reactive way (the relevant parts are highlighted).
The outcome of doing this is as follows:

<form [formGroup]="form" (ngSubmit)="onSubmit()">
  <input formControlName="name" required />
  <span *ngIf="(form.get('name').touched || form.get('name').dirty)
        && form.get('name').errors?.required">
    Name is a required field: please enter a valid city name.
  </span>
  <button type="submit" name="btnSubmit" [disabled]="form.invalid">
    Submit
  </button>
</form>

As we can see, the amount of required code is much lower.

Here's the underlying form model that we will define in the component class file (the relevant parts are highlighted in the following code):

import { FormGroup, FormControl } from '@angular/forms';

class ModelFormComponent implements OnInit {
  form: FormGroup;

  ngOnInit() {
    this.form = new FormGroup({
      name: new FormControl()
    });
  }
}

Let's try to understand what's happening here:

The form property is an instance of FormGroup and represents the form itself.
FormGroup, as the name suggests, is a container of form controls sharing the same purpose. As we can see, the form itself acts as a FormGroup, which means that we can nest FormGroup objects inside other FormGroup objects (we didn't do that in our sample, though).
Each data input element in the form template – in the preceding code, name – is represented by an instance of FormControl.
Each FormControl instance encapsulates the related control's current state, such as valid, invalid, touched, and dirty, including its actual value.
Each FormGroup instance encapsulates the state of each child control, meaning that it will only be valid if/when all its children are also valid.
Also, note that we have no way of accessing the FormControls directly like we were doing in Template-Driven Forms; we have to retrieve them using the .get() method of the main FormGroup, which is the form itself.
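Before comparing the two approaches side by side, here is a brief, hypothetical sketch – not part of the book excerpt – showing what this programmatic access buys us; it assumes the ModelFormComponent above, with its single name control:

// Inside ModelFormComponent – e.g. a method called after ngOnInit() has built the form:
exploreFormModel() {
  const nameControl = this.form.get('name');

  // Pull data and state out of the form model at any time...
  console.log(nameControl.value, nameControl.valid, nameControl.dirty);

  // ...push data into it programmatically...
  this.form.patchValue({ name: 'Rome' });

  // ...and react to user changes as they happen, since every control
  // exposes its changes as an Observable stream.
  nameControl.valueChanges.subscribe(value => {
    console.log('name changed to:', value);
  });
}

All of these calls – get(), patchValue(), and valueChanges – are part of the standard @angular/forms API, which is also what makes this logic straightforward to unit test.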
At first glance, the Model-Driven template doesn't seem too different from the Template-Driven one; we still have a <form> element, an <input> element hooked to a <span> validator, and a submit button. On top of that, checking the state of the input elements takes a bigger amount of source code, since they have no aliases we can use. What's the real deal, then?

To help us visualize the difference, let's look at the following diagrams. Here's a schema depicting how Template-Driven Forms work:

[Fig 1: Template-Driven Forms schematic]

By looking at the arrows, we can easily see that, in Template-Driven Forms, everything happens in the template; the HTML form elements are directly bound to the DataModel component, represented by a property filled with an asynchronous HTML request to the Web Server, much like we did with our cities and country table. That DataModel will be updated as soon as the user changes something, unless a validator prevents them from doing that. If we think about it, we can easily understand how there isn't a single part of the whole workflow that happens to be under our control; Angular handles everything by itself, using the information in the data bindings defined within our template. This is what Template-Driven actually means: the template is calling the shots.

Now, let's take a look at the Model-Driven Forms (or Reactive Forms) approach:

[Fig 2: Model-Driven/Reactive Forms schematic]

As we can see, the arrows depicting the Model-Driven Forms workflow tell a whole different story. They show how the data flows between the DataModel component – which we get from the Web Server – and a UI-oriented form model that retains the states and the values of the HTML form (and its children input elements) that are presented to the user. This means that we'll be able to get in between the data and the form control objects and perform a number of tasks firsthand: push and pull data, detect and react to user changes, implement our own validation logic, perform unit tests, and so on.

Instead of being superseded by a template that's not under our control, we can track and influence the workflow programmatically, since the form model that calls the shots is also a TypeScript class; that's what Model-Driven Forms are about. This also explains why they are also called Reactive Forms – an explicit reference to the Reactive programming style that favors explicit data handling and change management throughout the workflow.

Summary

In this article, we focused on the Angular framework and the two form design models it offers: the Template-Driven approach, mostly inherited from AngularJS, and the Model-Driven or Reactive alternative. We took some valuable time to analyze the pros and cons of both, and then we made a detailed comparison of the underlying logic and workflow. At the end of the day, we chose the Reactive way, as it gives the developer more control and enforces a more consistent separation of duties between the Data Model and the Form Model.

About the author

Valerio De Sanctis is a skilled IT professional with 20 years of experience in lead programming, web-based development, and project management using ASP.NET, PHP, Java, and JavaScript-based frameworks. He has held senior positions at a range of financial and insurance companies, most recently serving as Chief Technology and Security Officer at a leading IT service provider for top-tier insurance groups. He is an active member of the Stack Exchange Network, providing advice and tips on the Stack Overflow, ServerFault, and SuperUser communities; he is also a Microsoft Most Valuable Professional (MVP) for Developer Technologies. He is the founder and owner of Ryadel and the author of many best-selling books on back-end and front-end web development.