
Tech News - Server-Side Web Development

85 Articles

Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack

Bhagyashree R
12 Nov 2018
2 min read
Last week, two security issues were reported in the nginx HTTP/2 implementation, which can result in excessive memory consumption and CPU usage. Along with these, an issue was found in ngx_http_mp4_module, which can be exploited by an attacker to cause a DoS attack.

The issues in the HTTP/2 implementation occur if nginx is compiled with the ngx_http_v2_module and the http2 option of the listen directive is used in a configuration file (see the configuration sketch below). To exploit these two issues, attackers can send specially crafted HTTP/2 requests that lead to excessive CPU usage and memory usage, eventually triggering a DoS state. These issues affected nginx 1.9.5 - 1.15.5 and are now fixed in nginx 1.15.6 and 1.14.1.

In addition to these, a security issue was also identified in the ngx_http_mp4_module, which might allow an attacker to cause an infinite loop in a worker process, crash the worker process, or disclose its memory by using a specially crafted mp4 file. This issue only affects nginx if it is built with the ngx_http_mp4_module and the mp4 directive is used in the configuration file. The attack is only possible if an attacker is able to trigger processing of a specially crafted mp4 file with the ngx_http_mp4_module. This issue affects nginx 1.1.3+ and 1.0.7+ and is now fixed in 1.15.6 and 1.14.1.

You can read more about these security issues on nginx's official website.
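For context, the vulnerable code paths are only reachable with certain configurations. Here is a minimal sketch of a server block that enables both preconditions (the hostname and certificate paths are placeholders); upgrading to nginx 1.15.6 or 1.14.1 is the actual fix:

    server {
        # The http2 flag on the listen directive activates
        # ngx_http_v2_module; without it, the HTTP/2 issues
        # described above are not reachable.
        listen 443 ssl http2;
        server_name example.com;                       # placeholder

        ssl_certificate     /etc/ssl/example.com.crt;  # placeholder
        ssl_certificate_key /etc/ssl/example.com.key;  # placeholder

        # The mp4 directive enables ngx_http_mp4_module processing,
        # the precondition for the crafted-mp4-file issue.
        location /videos/ {
            mp4;
        }
    }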
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
Introducing Howler.js, a Javascript audio library with full cross-browser support


Django 2.1 released with new model view permission and more

Sugandha Lahoti
06 Aug 2018
3 min read
Django 2.1 has been released with changes to model view permissions, the database backend API, and additional new features. Django 2.1 supports Python 3.5, 3.6, and 3.7.

Django 2.1 is a time-based release. The schedule followed was:

- May 14, 2018: Django 2.1 alpha; feature freeze.
- June 18: Django 2.1 beta; non-release-blocking bug fix freeze.
- July 16: Django 2.1 RC 1; translation string freeze.
- ~August 1: Django 2.1 final.

Here is the list of all new features:

Model view permission

Django 2.1 adds a view permission to the model Meta.default_permissions. This new permission allows users read-only access to models in the admin. The permission is created automatically when running migrate (see the sketch below).

Considerations for the new model view permission

With the new "view" permission, existing custom admin forms may raise errors when a user doesn't have the change permission, because the form might access nonexistent fields. If users have a custom permission with a codename of the form can_view_<modelname>, the new view permission handling in the admin will allow view access to the changelist and detail pages for those models.

Changes to the database backend API

- To adhere to PEP 249, exceptions where a database doesn't support a feature are changed from NotImplementedError to django.db.NotSupportedError.
- The allow_sliced_subqueries database feature flag is renamed to allow_sliced_subqueries_with_in.
- DatabaseOperations.distinct_sql() now requires an additional params argument and returns a tuple of SQL and parameters instead of a SQL string.
- DatabaseFeatures.introspected_boolean_field_type is changed from a method to a property.

Dropped support for MySQL 5.5 and PostgreSQL 9.3

Django 2.1 marks the end of upstream support for MySQL 5.5; it now supports MySQL 5.6 and higher. Similarly, it ends support for PostgreSQL 9.3; Django 2.1 supports PostgreSQL 9.4 and higher.

SameSite cookies

The cookies used for django.contrib.sessions, django.contrib.messages, and Django's CSRF protection now set the SameSite flag to Lax by default. Browsers that respect this flag won't send these cookies on cross-origin requests.

Other features

- BCryptPasswordHasher is removed from the default PASSWORD_HASHERS setting.
- The minimum supported version of mysqlclient is increased from 1.3.3 to 1.3.7.
- Support for SQLite < 3.7.15 is removed.
- The multiple attribute rendered by the SelectMultiple widget now uses HTML5 boolean syntax rather than XHTML's multiple="multiple".
- The local-memory cache backend now uses a least-recently-used (LRU) culling strategy rather than a pseudo-random one.
- The new json_script filter safely outputs a Python object as JSON, wrapped in a <script> tag, ready for use with JavaScript.

These are just a select few of the updates available in Django 2.1. The release notes cover all the new features in detail.
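To make the new permission concrete, here is a minimal sketch; the blog app and Post model are hypothetical:

    # models.py of a hypothetical app named "blog"
    from django.db import models

    class Post(models.Model):
        title = models.CharField(max_length=200)
        # No Meta.default_permissions override is needed: in Django 2.1
        # the default is ("add", "change", "delete", "view"), so running
        # migrate creates a blog.view_post permission automatically.

    # Wherever a user object is available, the check is the usual one:
    #   user.has_perm("blog.view_post")
    # The admin uses this permission to grant read-only access to Post.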
Getting started with Django RESTful Web Services
Getting started with Django and Django REST frameworks to build a RESTful app
Python web development: Django vs Flask in 2018


Redbird, a modern reverse proxy for node

Amrata Joshi
06 Nov 2018
3 min read
The latest version of Redbird, 8.0, was released last month. Redbird is a modern reverse proxy for Node. It comes with built-in cluster, HTTP/2, LetsEncrypt, and Docker support, which helps with load balancing, dynamic virtual hosts, proxying WebSockets, and SSL encryption. It is a complete library for building dynamic reverse proxies with the speed and robustness of http-proxy, and a lightweight package that includes everything needed for easy reverse routing of applications (see the sketch below). It is useful for routing applications from different domains on one single host, and for easy handling of SSL.

What's new in Redbird?

- Support for HTTP/2: One can now enable HTTP/2 easily by setting the http2 flag to true. Note: HTTP/2 requires SSL/TLS certificates.
- Support for LetsEncrypt: Redbird now supports automatic generation of SSL certificates using LetsEncrypt. When using LetsEncrypt, the obtained certificates are copied to a specific path on disk; you should back them up or save them.

Features

- Flexible and easy routing
- WebSocket support
- Seamless SSL support, with automatic redirection from HTTP to HTTPS
- Automatic TLS certificate generation and renewal
- Load balancing using a round-robin algorithm
- Registering and unregistering routes programmatically without a restart, which allows zero-downtime deployments
- Automatic registration of running containers via Docker support
- Automatic multi-process operation via cluster support
- Built on top of the rock-solid node-http-proxy
- Optional logging based on bunyan
- Uses node-etcd to create proxy records automatically from an etcd cluster

Cluster support in Redbird

Redbird supports automatic generation of a Node cluster. To use the cluster support feature, specify the number of processes you want it to use. Redbird automatically restarts any thread that crashes, which increases reliability. If NTLM support is needed, Redbird adds the required header handler; this registers a response handler that makes sure the NTLM auth header is properly split into two entries from http-proxy.

Custom resolvers in Redbird

Redbird comes with custom resolvers that let you decide how the proxy server handles requests. Custom resolvers help with path-based routing, header-based routing, and wildcard domain routing.

The install command for Redbird is npm install redbird. To read more about this news, check out the project's official GitHub page.
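A minimal sketch of registering routes with Redbird, based on the project's documented API (hostnames and ports are placeholders):

    // Install first: npm install redbird
    const redbird = require('redbird');

    // Start the proxy on port 80.
    const proxy = redbird({ port: 80 });

    // Route incoming requests for example.com to a local backend.
    proxy.register('example.com', 'http://localhost:8080');

    // Routes can also be removed at runtime, which is what enables
    // zero-downtime deployments:
    // proxy.unregister('example.com', 'http://localhost:8080');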
Squid Proxy Server: debugging problems
How to Configure Squid Proxy Server
Squid Proxy Server: Fine Tuning to Achieve Better Performance


Mojolicious 8.0, a web framework for Perl, released with new Promises and Roles

Savia Lobo
18 Sep 2018
2 min read
Mojolicious, a next-generation web framework for the Perl programming language, announced its upgrade to the latest 8.0 version. Mojolicious 8.0 was announced at MojoConf in Norway, held from 6th to 7th September 2018. This release is codenamed 'Supervillain' and is a major Mojolicious release.

Mojolicious allows users to easily grow single-file prototypes into well-structured MVC web applications. It is a powerful web development toolkit that one can use for all kinds of applications, independently of the web framework. Many companies such as Alibaba Group, IBM, Logitech, Mozilla, and others rely on Mojolicious to develop new code bases. Even projects like Bugzilla are being ported to Mojolicious.

The Mojolicious community has decided to make a few organizational changes to support the continuous growth. These include:

- All new development will be consolidated in a single GitHub organization.
- Mojolicious' official IRC channel, which has almost 200 regulars (say hi!), will be moving to Freenode (#mojo on irc.freenode.net). This will make it easier for people not yet part of the Perl community to get involved.

Some highlights of Mojolicious 8.0

Promises/A+

Mojolicious 8.0 includes Promises/A+, a new module and pattern for working with event loops. A promise represents the eventual result of an asynchronous operation (see the sketch below).

Roles and subprocess

Version 8.0 now includes roles, a new way to extend Mojo classes. Also, subprocesses can now mix event loops and computationally expensive tasks.

Placeholder types and Mojo::File

With placeholder types, one can avoid repetitive routes, while Mojo::File is a brand new module for dealing with file systems.

Cpanel::JSON::XS and Mojo::PG

With Cpanel::JSON::XS, users can process JSON at a much faster rate. Mojo::PG includes many new SQL::Abstract extensions for Postgres features.

To know more about Mojolicious 8.0 in detail, visit its GitHub page.
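To give a flavor of the promise-based style, here is a minimal sketch using Mojo::UserAgent's promise-returning request variant, following the Mojolicious documentation (the URL is a placeholder):

    use Mojo::Base -strict;
    use Mojo::UserAgent;

    my $ua = Mojo::UserAgent->new;

    # get_p returns a Mojo::Promise that resolves with the transaction.
    $ua->get_p('https://mojolicious.org')->then(sub {
        my $tx = shift;
        say 'Status: ', $tx->result->code;
    })->catch(sub {
        my $err = shift;
        warn "Request failed: $err";
    })->wait;    # run the event loop until the promise settles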
Warp: Rust's new web framework for implementing WAI (Web Application Interface)
What's new in Vapor 3, the popular Swift based web framework
Beating jQuery: Making a Web Framework Worth its Weight in Code


Now you can run nginx on Wasmjit on all POSIX systems

Natasha Mathur
10 Dec 2018
2 min read
The Wasmjit team announced last week that you can now run Nginx 1.15.3, a free and open source high-performance HTTP server and reverse proxy, in user-space on all POSIX systems.

Wasmjit is a small embeddable WebAssembly runtime that can be easily ported to most environments. It primarily targets a Linux kernel module capable of hosting Emscripten-generated WebAssembly modules, and it comes equipped with a host environment for running in user-space on POSIX systems. This allows you to run WebAssembly modules without having to run an entire browser.

Getting Nginx to run had been a major goal for the Wasmjit team ever since its first release in late July. "While it might be convenient to run the same binary on multiple systems without modification ('write once, run anywhere'), this goal was chosen because IO-bound / system call heavy servers stand to gain the most by running in kernel space. Running FUSE file systems in kernel space is another motivating use case that Wasmjit will soon support," mentions the Wasmjit team.

Other future goals for Wasmjit include the introduction of an interpreter, a Rust runtime for Rust-generated wasm files, a Go runtime for Go-generated wasm files, an optimized x86_64 JIT, an arm64 JIT, and a macOS kernel module.

Wasmjit running nginx has been tested on Linux, OpenBSD, and macOS so far. The complete compiled version of nginx, without any modifications and with multi-process capability, has been used. All the complex parts of the POSIX API needed for a proper implementation of Nginx have been used, such as signal handling and forking.

That being said, kernel-space support still needs work, as Emscripten delegates some large APIs such as getaddrinfo() and strftime() to the host implementation. These need to be re-implemented in the kernel. Moreover, kernel-space versions of fork(), execve(), and signal handling also need to be implemented. Also, Wasmjit is currently alpha-level software in development and might behave unpredictably when used in production.

Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
Getting Started with Nginx


Cloudflare’s decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites

Melisha Dsouza
18 Sep 2018
4 min read
The Cloudflare team has introduced Cloudflare's IPFS Gateway, which will make accessing content from the InterPlanetary File System (IPFS) easy and quick without having to install and run any special software on a user's computer. The gateway, which supports new distributed web technologies, is hosted at cloudflare-ipfs.com. The team asserts that this will lead to highly-reliable and security-enhanced web applications.

A brief gist of IPFS

When a user accesses a website from the browser, the browser tracks down the centralized repository for the website's content. It then sends a request from the user's computer to that origin server, and that server sends the content back to the user's computer. However, this centralization mechanism makes it impossible to keep content online if the origin server rolls back the data. If the origin server faces downtime, or the site owner decides to take down the data, the content becomes unavailable.

On the other hand, IPFS is a distributed file system that allows users to share files that are distributed to other computers throughout the networked file system. This means that a user's content is stored on many nodes of the network, and data can be safely backed up.

Key differences between IPFS and the traditional web

#1 Free caching and serving of content

IPFS provides free caching and serving of content. Anyone can sign up their computer to be a node in the system and start serving data. On the flip side, the traditional web relies on big hosting providers to store content and serve it to the rest of the web; setting up a website with these providers costs money.

#2 Content-addressed data

Rather than location-addressed data, IPFS focuses on content-addressed data. In the traditional web, when a user navigates to a website, the browser fetches data stored at the website's IP address, and the server sends back the relevant information from that IP. With IPFS, every single block of data stored in the system is addressed by a cryptographic hash of its contents. When a user requests a piece of data in IPFS, they request it by its hash, i.e., content that has a hash value of, for example, QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy (see the sketch below).

Why is Cloudflare's IPFS Gateway important?

IPFS increases the resilience of the network. The content with a hash of QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy could be stored on dozens of nodes, so if one of the nodes that was storing the content goes down, the network will just look for the content on another node.

In addition to resilience, there is an automatic level of security introduced in the system. If the data requested by the user was tampered with during transit, the hash value the user gets will be different from the hash they asked for. This means that the system has a built-in way of knowing whether or not content has been tampered with.

Users can access any of the billions of files stored on IPFS from their browser. Using Cloudflare's gateway, they can also build a website hosted entirely on IPFS and make it available to users at a custom domain name. Any website connected to the IPFS gateway will be provided with a free SSL certificate.

IPFS is embracing a new, decentralized vision of the web. Users will be able to create static websites - containing information that cannot be censored by governments, companies, or other organizations - that are served entirely over IPFS.

To know more about this announcement, head over to Cloudflare's official blog.
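To make content addressing concrete, here is a minimal sketch of fetching a block by its hash through the gateway. The /ipfs/<hash> path follows the standard IPFS gateway convention, and the hash is the example used in the article:

    import requests

    # Content is requested by the cryptographic hash of its bytes,
    # not by a server location.
    cid = "QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy"
    url = f"https://cloudflare-ipfs.com/ipfs/{cid}"

    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    print(resp.text[:200])  # first 200 characters of the fetched content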
7 reasons to choose GraphQL APIs over REST for building your APIs
Laravel 5.7 released with support for email verification, improved console testing
Javalin 2.0 RC3 released with major updates!

Basecamp 3 faces a read-only outage of nearly 5 hours

Bhagyashree R
13 Nov 2018
3 min read
Yesterday, Basecamp shared the cause behind the outage Basecamp 3 faced on November 8. The outage continued for nearly five hours, starting from 7:21 am CST to 12:11 pm. During this time, users were able to access existing messages, to-do lists, and files, but they were prevented from entering any new information and altering any existing information.

David Heinemeier Hansson, the creator of Ruby on Rails and founder & CTO at Basecamp, said in his post that this was the worst outage Basecamp has faced in probably 10 years: "It's bad enough that we had the worst outage at Basecamp in probably 10 years, but to know that it was avoidable is hard to swallow. And I cannot express my apologies clearly or deeply enough."

https://twitter.com/basecamp/status/1060554610241224705

Key causes behind the Basecamp 3 outage

Every activity a user does is tracked in Basecamp's events table, whether it is posting a message, updating a to-do list, or applauding a comment. The root cause behind Basecamp going into read-only mode was its database hitting the ceiling of 2,147,483,647 records on this very busy events table.

Secondly, Ruby on Rails, the programming framework Basecamp uses, updated its default for database tables in version 5.1, released in 2017. This update lifted the headroom for records from 2,147,483,647 to 9,223,372,036,854,775,807 on all tables. But the column in Basecamp's database was still configured as an integer rather than a big integer (a sketch of the kind of migration involved follows below).

The complete timeline of the outage

- 7:21 am CST: Basecamp ran out of ID numbers on the events table in the database, because the column was configured as an integer rather than a big integer. An integer runs out of numbers at 2,147,483,647; a big integer can grow until 9,223,372,036,854,775,807.
- 7:29 am CST: The team started working on a database migration to update the column type from regular integer to big integer. They then tested this fix on a staging database to make sure it was safe.
- 7:52 am CST: The test done on the staging database verified that the fix was correct, so they moved on to make the changes to the production database table. Due to the huge size of the production database, the migration was estimated to take about one hour and forty minutes.
- 10:56 am - 11:52 am CST: The upgrade to the database was completed, but verification of all the data and configuration updates were still required to ensure no other problems would appear when it came back online.
- 12:22 pm CST: After successful verification, Basecamp came back online.
- 12:33 pm CST: Basecamp went down again because of the intense load once the application was back online, which caused the caching server to get overwhelmed.
- 12:41 pm CST: Basecamp came back online after switching over to the backup caching servers.

To read the entire update on Basecamp's outage, check out David Heinemeier Hansson's post on Medium.
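For illustration, the class of fix involved is a Rails migration that widens the column. A minimal sketch follows; the migration name is hypothetical, and Basecamp's actual migration was not published:

    # Widening INT (max 2,147,483,647) to BIGINT
    # (max 9,223,372,036,854,775,807). On a large table this rewrites
    # every row, which is why Basecamp's run was estimated at ~1h40m.
    class ChangeEventsIdToBigint < ActiveRecord::Migration[5.1]
      def up
        change_column :events, :id, :bigint
      end

      def down
        change_column :events, :id, :integer
      end
    end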
GitHub October 21st outage RCA: How prioritizing 'data integrity' launched a series of unfortunate events that led to a day-long outage
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units


GitHub addresses technical debt, now runs on Rails 5.2.1

Bhagyashree R
01 Oct 2018
3 min read
Last week, GitHub announced that their main application is now running on Rails 5.2.1. Along with this upgrade, they have also improved the overall codebase and cleaned up technical debt.

How GitHub upgraded to Rails 5.2.1

The upgrade started out as a hobby project with no dedicated team assigned. As it made progress and gained traction, it became a priority. Instead of using a long-running branch to upgrade Rails, they added the ability to dual boot the application in multiple versions of Rails. Two lockfiles were created (a sketch of how such a dual-boot Gemfile can look follows at the end of this piece):

- Gemfile.lock for the current version
- Gemfile_next.lock for the next version

This dual booting enabled the developers to regularly deploy changes for the next version to GitHub without any effect on how production works. This was done by conditionally loading the code:

    if GitHub.rails_3_2?
      ## 3.2 code (i.e. production a year and a half ago)
    elsif GitHub.rails_4_2?
      # 4.2 code
    else
      # all 5.0+ future code, ensuring we never accidentally
      # fall back into an old version going forward
    end

To roll out the Rails upgrade, they followed a careful and iterative process:

- The developers first deployed to their testing environment and requested volunteers from each team to click-test their area of the codebase to find any regressions the test suite missed.
- These regressions were then fixed, and deployment was done in off-hours to a percentage of production servers.
- During each deploy, data about exceptions and performance of the site was collected. With this information, they fixed bugs that came up and repeated those steps until the error rate was low enough to be considered equal to the previous version.
- Finally, they merged the upgrade once they could deploy to full production for 30 minutes at peak traffic with no visible impact.

This process allowed them to deploy 4.2 and 5.2 with minimal customer impact and no downtime.

Key lessons they learned during this upgrade

Upgrade regularly

Upgrading will be easier if you are closer to a new version of Rails. This also encourages your team to fix bugs in Rails instead of monkey-patching the application.

Keep an upgrade infrastructure

Needless to say, there will always be a new version to upgrade to. To keep up with new versions, add a build that runs against the master branch to catch bugs in Rails and in your application early. This makes upgrades easier and increases your upstream contributions.

Regularly address technical debt

Technical debt refers to the additional rework you and your team have to do because of choosing an easy solution now instead of a better approach that would take longer. Refraining from ever touching working code can create a bottleneck for upgrades. To avoid this, try to prevent coupling your application logic too closely to your framework; the line where your application logic ends and your framework begins should be clear.

Assume that things will break

Upgrading a large and heavily trafficked application like GitHub is not easy. They did face issues with CI, local development, slow queries, and other problems that didn't show up in their CI builds or click testing.

Read the full announcement on the GitHub Engineering blog.
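GitHub did not publish the exact mechanics of its dual boot, but a Gemfile along the lines described can be sketched roughly like this; the RAILS_NEXT toggle and the version constraints are assumptions for illustration:

    # Gemfile -- resolved against Gemfile.lock by default, and against
    # Gemfile_next.lock when booting the next version.
    # (RAILS_NEXT is a hypothetical toggle, not GitHub's actual one.)
    if ENV["RAILS_NEXT"]
      gem "rails", "~> 5.2.1"
    else
      gem "rails", "~> 4.2.0"
    end

With a setup like this, CI can presumably run the suite once per lockfile, so regressions against the next version surface on every pull request rather than on a long-running upgrade branch.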
GitHub introduces 'Experiments', a platform to share live demos of their research projects
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub
Packt's GitHub portal hits 2,000 repositories


Node 10.0.0 released, packed with exciting new features

Kunal Chaudhari
30 Apr 2018
4 min read
Node.js 10.0.0 has been released, packed with exciting new features, and it will be the next candidate in line for Long Term Support (LTS) in October 2018. So what exactly is an LTS, and why does the Node.js Foundation release a major version every six months?

Node.js history and the significance of LTS

When Ryan Dahl first created Node.js, he decided to follow Linux-kernel-style odd/even version releases: odd version releases represented internal development with no guarantee of stable releases, whereas even releases guaranteed stability. But this release scheme didn't last long, as version 0.12 was the last release under that versioning scheme. Later, in 2015, Node.js 4.0 was announced, dubbed famously as the "Converged Release". This meant that both the io.js and Node.js projects under the Node Foundation were merged together, providing a unified release plan for all Node products.

In order to attract more enterprise users and to provide more stability to the platform, the Node Foundation announced the "Long Term Support" strategy. The plan was simple: every six months a major version of the platform is released, following semantic versioning. The even releases are scheduled in April, while the odd ones come out in October, and every even-numbered release automatically becomes a candidate for LTS. Long Term Support release lines focus on stability and extended support, and provide a reliable platform for applications of any scale. Most Node.js users and companies prefer to use the Long Term Support releases. A candidate covered by the LTS plan is actively maintained for a period of 18 months from the date it enters LTS coverage; following those 18 months of active support, it transitions into "maintenance" mode for 12 additional months. To read more about the LTS release plan and schedule, visit the official Node.js Foundation Release Working Group page.

Source: Node.js Foundation Release Working Group

Node.js 10 features

Codenamed "Dubnium", Node.js 10.x comes with plenty of new features like the OpenSSL 1.1.0 security toolkit, an upgraded npm, N-API, and much more. Let's take a closer look at each one of them.

N-API and native Node HTTP/2 become stable

N-API, an abbreviation for Native API, is used for building native addons, while native HTTP/2 is a module that implements the HTTP/2 protocol (a sketch of an HTTP/2 server follows at the end of this piece). Both of these features were first announced as experimental in the Node.js 8 release and have now been confirmed as stable features in the 10.x release. N-API aims to solve two main problems: reducing the maintenance cost for native modules, and reducing difficulties in upgrading Node.js versions in production deployments. The native HTTP/2 module will help improve Node servers and the web experience that they provide.

Upgraded npm

npm has recently been upgraded from v5.7 to v6.0. While Node.js 10 ships with npm 5.7, the 10.x line will be upgraded to npm version 6 later on. This major version increase in npm provides improvements in all areas, including performance, security, and stability.

OpenSSL version 1.1.0

With the recent finalization of the TLS 1.3 specification, a huge step forward for the security of the web, OpenSSL released the newest version of its security toolkit, which supports this TLS specification. It didn't take long for Node.js to start supporting it, since it provides more secure communication between applications on the Node platform. While Node is just supporting OpenSSL version 1.1.0 in this latest release, it plans to upgrade it in future versions of the 10.x line, bringing brand-new TLS support to developers.

What to expect in future releases

Plenty of exciting features like better support for ECMAScript (ES) modules, JavaScript Promises, new infrastructure for build/automation support, and functional testing for third-party modules are in line for the next releases. Node 10 will remain in LTS from October 2018 until April 2021. This LTS release will also mark the deprecation of Node.js 4. While the features released so far are already very impressive, the Node team remains adamant on bringing even more cutting-edge technology to the platform, making the life of developers easier. To get a more detailed description of the new features, or to download the latest version of Node.js, please visit the official web page.
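As a taste of the now-stable http2 module, here is a minimal sketch of a secure HTTP/2 server; the certificate paths are placeholders, since browsers only speak HTTP/2 over TLS:

    const http2 = require('http2');
    const fs = require('fs');

    // HTTP/2 in browsers requires TLS, so a key and certificate are
    // needed (the .pem paths below are placeholders).
    const server = http2.createSecureServer({
      key: fs.readFileSync('localhost-privkey.pem'),
      cert: fs.readFileSync('localhost-cert.pem'),
    });

    server.on('stream', (stream, headers) => {
      // Respond to every request stream with a small text payload.
      stream.respond({ ':status': 200, 'content-type': 'text/plain' });
      stream.end('hello over http/2\n');
    });

    server.listen(8443);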
How is Node.js Changing Web Development?
How to deploy a Node.js application to the web using Heroku


Jest 23, Facebook's popular framework for testing React applications, is now released

Sugandha Lahoti
05 Jun 2018
3 min read
A new version of Jest, the popular framework for testing React applications, is now available. Jest is developed by Facebook and can be used for testing JavaScript functions, but is specifically aimed at React. Jest is a zero-configuration testing platform with features such as snapshot testing, parallelized test runs, built-in code coverage reports, and instant feedback. Jest 23 features major updates; here are the top ones.

Babel and Webpack join the Jest community

Webpack saw their total test suite time reduced 6x, from over 13 minutes to 2 minutes 20 seconds, after converting from Mocha to Jest 23 beta.

Interactive snapshot mode

The newly incorporated interactive snapshot mode is added as a default watch menu option. With this new mode, testers can browse through each failing snapshot in each failing suite, and review, update, or skip each failed snapshot individually.

Snapshot property matchers

Jest now has snapshot property matchers, through which testers can pass properties to the snapshot matcher that specify the structure of the data instead of the specific values. These property matchers are verified before serializing the matcher type, to provide consistent snapshot results across multiple test runs.

Jest Each

Jest 23 features a new jest-each library inspired by mocha-each and Spock data tables. This library defines a table of test cases, and then runs a test for each row with the specified column values (see the sketch below). Support is provided for both array types and template literals for all flavors of describe and test.

Watch mode plugins

The watch mode system now allows adding custom plugins to watch mode. These watch mode plugins can hook into Jest events and provide custom menu options in the watch mode menu.

Other changes include:

- Test descriptions and functions are now mandatory: Jest 23 will fail tests that do not include both a function and a description.
- Undefined props are now removed from React snapshots.
- mapCoverage, jest.genMockFunction, and jest.genMockFn are deprecated.
- The snapshot name (if provided) is now added to the snapshot failure message, so it's easier to find the snapshot that's failing.
- Mock timestamps are replaced with invocationCallOrder, since two or more mocks may often have the same timestamp, making it impossible to test the call order.
- Mock function call results are added to snapshots, so that both the calls and the results of the invocation are tracked.

For the complete list of changes and updates, see the changelog.
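A minimal sketch of the data-table style that jest-each enables; the arithmetic here is a stand-in for real application code:

    // With Jest 23, each row of the table runs as its own test case.
    test.each([
      [1, 1, 2],
      [1, 2, 3],
      [2, 1, 3],
    ])('adds %i + %i to get %i', (a, b, expected) => {
      expect(a + b).toBe(expected);
    });

The same table can also be written as a template literal with named columns, which is the other flavor the release supports.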
Testing Single Page Applications (SPAs) using Vue.js developer tools
What is React.js and how does it work?
How to test node applications using Mocha framework

HTTP-over-QUIC will be officially renamed to HTTP/3

Savia Lobo
12 Nov 2018
2 min read
The protocol called HTTP-over-QUIC will be officially renamed to HTTP/3. In a discussion on an IETF mail archive thread, Mark Nottingham, Chairman of the IETF HTTPBIS Working Group and the W3C Web Services Addressing Working Group, proposed the rename to resolve the confusion between QUIC-the-transport-protocol and QUIC-the-HTTP-binding.

QUIC, a TCP replacement done over UDP, was started as an effort by Google and was then more of an "HTTP/2-encrypted-over-UDP" protocol. The QUIC Working Group in the IETF works on creating the QUIC transport protocol. According to Daniel Stenberg, lead developer of curl at Mozilla: "When the work took off in the IETF to standardize the protocol, it was split up in two layers: the transport and the HTTP parts. The idea being that this transport protocol can be used to transfer other data too and it's not just done explicitly for HTTP or HTTP-like protocols. But the name was still QUIC."

People in the community have referred to different versions of the protocol using informal names such as iQUIC and gQUIC to separate the QUIC protocols from the IETF and Google. The protocol that sends HTTP over "iQUIC" was called "hq" (HTTP-over-QUIC) for a long time.

Last week, on November 7, 2018, Dmitri Tikhonov, a programmer at LiteSpeed, announced that his company and Facebook had successfully done the first-ever interop between two HTTP/3 implementations. Here is Mike Bishop's follow-up presentation at the HTTPbis session on the topic.

https://www.youtube.com/watch?v=uVf_yyMfIPQ&feature=youtu.be&t=4956

Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Phoenix 1.4.0 is out with 'Presence javascript API', HTTP2 support, and more!
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]


Amazon rolls out AWS Amplify Console, a deployment and hosting service for mobile web apps, at re:Invent 2018

Amrata Joshi
27 Nov 2018
3 min read
On day 1 of AWS re:Invent 2018, the team at Amazon released AWS Amplify Console, a continuous deployment and hosting service for mobile web applications. The AWS Amplify Console helps in avoiding downtime during application deployment and simplifies the deployment of the application's frontend and backend.

Features of AWS Amplify Console

Simplified continuous workflows

By connecting AWS Amplify Console to a code repository, the frontend and backend are deployed in a single workflow on every code commit. The web application is updated only after the deployment is successfully completed, eliminating inconsistencies between the application's frontend and backend.

Easy access

AWS Amplify Console makes the building, deploying, and hosting of mobile web applications easier, and lets users access features faster.

Easy custom domain setup

One can set up custom domains managed in Amazon Route 53 with a single click and also get a free HTTPS certificate. If the domain is managed in Amazon Route 53, the Amplify Console automatically connects the root, subdomains, and branch subdomains.

Globally available

Apps are served via Amazon's reliable content delivery network, with 144 points of presence globally.

Atomic deployments

In AWS Amplify Console, atomic deployments eliminate maintenance windows and the scenarios where files fail to upload properly.

Password protection

The Amplify Console comes with password protection for the web app, so one can work on new features without making them publicly accessible.

Branch deployments

With Amplify Console, one can work on new features without impacting production; users can create branch deployments linked to each feature branch.

Other features

- The Amplify Console automatically detects the frontend build settings, along with any backend functionality provisioned with the Amplify CLI, when connected to a code repository.
- Users can easily manage production and staging environments for frontend and backend by connecting new branches.
- Users get screenshots of the app, rendered on different mobile devices, to highlight layout issues.
- Users can set up rewrites and redirects to maintain SEO rankings.
- Users can build web apps with static and dynamic functionality, and deploy static site generators (SSGs) with free SSL on the AWS Amplify Console.

Check out the official announcement to know more about AWS Amplify Console.

Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
Amazon re:Invent 2018: AWS Snowball Edge comes with a GPU option and more computing power
Amazon re:Invent 2018: AWS Key Management Service (KMS) Custom Key Store


Django 2.2 alpha 1.0 is now out with constraints classes, and more!

Bhagyashree R
18 Jan 2019
3 min read
Yesterday, the team behind Django released Django 2.2 alpha 1.0. Django 2.2 is designated as an LTS release, which means it will receive security updates for at least three years after its expected release in April 2019. This version comes with two new constraint classes and some minor features, and it deprecates Meta.ordering. It is compatible with Python 3.5, 3.6, and 3.7.

Here are some of the updates that Django 2.2 will come with:

- Constraints: Two new constraint classes are defined in django.db.models.constraints for adding custom database constraints, namely CheckConstraint and UniqueConstraint. These classes are also imported into django.db.models for convenience (see the sketch below).
- django.contrib.auth: A request argument is added to the RemoteUserBackend.configure_user() method as the first positional argument, if it accepts it.
- django.contrib.gis: Oracle support is added for the Envelope function, and SpatiaLite support for the coveredby and covers lookups.
- django.contrib.postgres: A new ordering argument is added to the ArrayAgg and StringAgg classes for determining the ordering of aggregated elements. With the new BTreeIndex, HashIndex, and SpGistIndex classes, you can now create B-Tree, hash, and SP-GiST indexes in the database.
- Internationalization: Support and translations are added for the Armenian language.

Backward-incompatible updates

Database backend API: These are some of the changes that will be needed in third-party database backends:

- They must support table check constraints or set DatabaseFeatures.supports_table_check_constraints to False.
- Support for ignoring constraints or uniqueness errors while inserting is needed, or you can set DatabaseFeatures.supports_ignore_conflicts to False.
- Support for partial indexes is needed, or you can set DatabaseFeatures.supports_partial_indexes to False.
- DatabaseIntrospection.table_name_converter() and column_name_converter() are now removed. Third-party database backends may have to implement DatabaseIntrospection.identifier_converter() instead.

Other changes

- Admin actions: Admin actions now follow standard Python inheritance and are no longer collected from the base ModelAdmin classes.
- TransactionTestCase serialized data loading: Initial data migrations are now loaded in TransactionTestCase at the end of the test, after the flush. Earlier, this data was loaded at the beginning of the test, which prevented the test --keepdb option from working properly.
- sqlparse: The sqlparse module is now automatically installed with Django, as it is a required dependency. This change was made to simplify a few parts of Django's database handling.
- Permissions for proxy models: You can now create permissions for proxy models using the content type of the proxy model rather than the content type of the concrete model.
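A minimal sketch of the new constraint classes; the Booking model and the constraint names are hypothetical:

    from django.db import models
    from django.db.models import CheckConstraint, F, Q, UniqueConstraint

    class Booking(models.Model):
        room = models.CharField(max_length=32)
        start = models.DateTimeField()
        end = models.DateTimeField()

        class Meta:
            constraints = [
                # Enforced at the database level, unlike form validation.
                CheckConstraint(check=Q(end__gt=F("start")),
                                name="booking_end_after_start"),
                # At most one booking per room per start time.
                UniqueConstraint(fields=["room", "start"],
                                 name="booking_unique_room_start"),
            ]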
Django 2.1.2 fixes major security flaw that reveals password hash to "view only" admin users
Django 2.1 released with new model view permission and more
Python web development: Django vs Flask in 2018

VLC’s updating mechanism still uses HTTP over HTTPS

Bhagyashree R
22 Jan 2019
3 min read
Last week, a bug was reported to the VLC bug tracker that all connections to the update server are still done over HTTP instead of HTTPS. One of the VLC developers replied, asking the bug reporter for a threat model, and when he did not submit one, the VLC developer closed the bug and marked it as "invalid".

This is not the first time this bug has been reported. In a bug reported in 2017, a user said, "It appears that VLC's updating mechanism downloads a new VLC executable over HTTP (ie, in clear-text). Please modify the update mechanism to happen over TLS (preferably with Forward Secrecy enabled)."

What are some of the implications of using HTTP over HTTPS?

One Hacker News user said, "As a trivial example, this is a privacy leak - anyone on the network path can see what version you're upgrading to. It doesn't sound like a huge deal but we are moving to a 100% encrypted world, and it is a one character change to fix the issue. If VLC wants to keep the update over plaintext then they should justify why they want to do that, not have users justify why it should be over https. Instead, it feels like the VLC devs are having a kneejerk defensive reaction."

Along with this, there are several security threats related to software that updates over HTTP, some of which are described here:

- An attacker can see the contents of software update requests, and can then modify these update requests or responses to change the update behavior or outcome.
- An attacker can intercept and redirect software update requests to a malicious server.
- An attacker can respond to a client request with a huge amount of data that interferes with the client's system, resulting in an endless data attack.
- An attacker can prevent clients from being aware of interference with receiving updates by responding to client requests so slowly that automated updates never complete, resulting in a slow retrieval attack.
- An attacker can trick a client into installing older software that is known to have critical bugs.

Why VideoLAN does not see it as a big problem

Jean-Baptiste Kempf, the President and lead VLC developer, said that some of the attacks described above apply to nearly all download systems: "I'm sorry, but some described attacks (Slow retrieval attacks, Endless data attacks) are issues that are the case for all download system like most Linux Distributions, and that will not be fixed. Mirrors are HTTP and will stay HTTP for a few obvious reasons. Moreover, they will install binaries, so there is no security issue. Moreover, downloads are never done automatically, without user intervention."

As Kempf said, this is not just the case with VLC. A Hacker News user said, "it seems to be a common practice for highly-loaded services to outsource as many cryptographies to clients as possible." A general-purpose package manager like Pacman uses HTTP because there is not much value in using transport-level security when the payload is cryptographically signed. Even Tesla's firmware updates are not encrypted in transit, as their updates are cryptographically signed. Oracle also followed the same policy with VirtualBox distributions, and that has been fine because they sign packages.

You can read more in detail on the VLC bug tracker website.

dav1d 0.1.0, the AV1 decoder by VideoLAN, is here
Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg
dav1d to release soon with all features of AV1, and better performance than libaom


Apache Flink 1.8.0 releases with finalized state schema evolution support

Bhagyashree R
15 Apr 2019
2 min read
Last week, the community behind Apache Flink announced the release of Apache Flink 1.8.0. This release comes with finalized state schema evolution support, lazy cleanup strategies for state TTL, improved pattern-matching support in SQL, and more.

Finalized state schema evolution support

This release marks the completion of the community-driven effort to provide a schema evolution story for user state managed by Flink. The following changes were made to finalize the state schema evolution support:

- The list of data types that support state schema evolution is extended to include POJOs (Plain Old Java Objects).
- All Flink built-in serializers are upgraded to use the new serialization compatibility abstractions.
- Implementing the abstractions in custom state serializers is now easy for advanced users.

Continuous cleanup of old state based on TTL

In Apache Flink 1.6, TTL (time-to-live) was introduced for the keyed state. TTL enables cleanup and makes keyed state entries inaccessible after a given timeout. The state can also be cleaned when writing a savepoint or checkpoint. With this release, continuous cleanup of old entries is also allowed for both the RocksDB state backend and the heap backend.

Improved pattern-matching support in SQL

This release extends the MATCH_RECOGNIZE clause with two new updates: user-defined functions, for custom logic during pattern detection, and aggregations, for complex CEP definitions.

New KafkaDeserializationSchema for direct access to ConsumerRecord

A new KafkaDeserializationSchema is introduced to give direct access to the Kafka ConsumerRecord. This gives users access to all the data that Kafka provides for a record, including the headers (see the sketch below).

Hadoop-specific distributions will no longer be released

Starting from this release, Hadoop-specific distributions will not be released. If a deployment relies on 'flink-shaded-hadoop2' being included in 'flink-dist', it must be manually downloaded and copied into the /lib directory.

Updates in the Maven modules of the Table API

Users who have a 'flink-table' dependency are required to update their dependencies to 'flink-table-planner'. If you want to implement a pure table program in Scala or Java, add 'flink-table-api-scala' or 'flink-table-api-java' respectively to your project.

To know more in detail, check out the official announcement by Apache Flink.
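A minimal sketch of the new KafkaDeserializationSchema, which exposes the full ConsumerRecord, headers included; the String value type and the trivial deserialization logic are placeholders:

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    // Emits each record's value as a String; the full ConsumerRecord,
    // including record.headers(), is reachable in deserialize().
    public class RawValueSchema implements KafkaDeserializationSchema<String> {

        @Override
        public boolean isEndOfStream(String nextElement) {
            return false;  // unbounded stream, never ends
        }

        @Override
        public String deserialize(ConsumerRecord<byte[], byte[]> record) {
            // Unlike the old value-only schema, key, partition, offset,
            // and headers are all available here.
            return new String(record.value());
        }

        @Override
        public TypeInformation<String> getProducedType() {
            return TypeInformation.of(String.class);
        }
    }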
Apache Maven Javadoc Plugin version 3.1.0 released
LLVM officially migrating to GitHub from Apache SVN
Apache NetBeans IDE 10.0 released with support for JDK 11, JUnit 5 and more!