
Tech News - Server-Side Web Development

85 Articles

What’s new in Vapor 3, the popular Swift based web framework

Sugandha Lahoti
08 May 2018
3 min read
Vapor, the popular web framework written in Swift, has released its next major update. Vapor 3 is a complete rewrite of the existing versions and of all its related packages. The release is centered around three new features:

Async: Vapor 3 is ready to handle high levels of concurrency, as it is completely non-blocking and runs on Apple's SwiftNIO.
Services: With Vapor's new dependency injection framework, Services, all JSON configuration files are replaced by Swift.
Codable: Codable integration throughout Vapor brings type safety and better performance to parsing and serializing content from HTTP messages, creating database models, and rendering views.

The main focus of this release is on building a foundation for developers to work on, based on the growth of Vapor and server-side Swift over the past two years. The release brings updates across four major categories.

Packages

Vapor 3 offers a couple of new packages in this release. Most notable are the MySQL and PostgreSQL packages, which are now non-blocking and built on SwiftNIO. Some of these packages include:

SQLite: SQLite 3 wrapper for Swift.
PostgreSQL: Non-blocking, event-driven Swift client for PostgreSQL.
MySQL: Pure Swift MySQL client built on non-blocking, event-driven sockets.
Fluent: Swift ORM framework (queries, models, and relations) for building NoSQL and SQL database integrations.
FluentSQLite: Swift ORM (queries, models, relations, etc.) built on SQLite 3.
Auth: Authentication and authorization layer for Fluent.
JWT: JSON Web Token signing and verification.
Leaf: An expressive, performant, and extensible templating language built for Swift.

A complete list of packages is available in the Vapor documentation.

Better, updated documentation

A large part of the release focuses on better documentation. Vapor 3 improves its API docs with 100% docblock coverage, including helpful code samples where possible, method parameter descriptions, and MARK-based code reorganization to make things readable in API doc form. The main docs are also moving more toward a guide/tutorial feel. These guide docs cover broad use cases and practices, in contrast to the API docs, which focus heavily on particular methods and protocols.

Moving to Discord and introducing books

Vapor's official team chat has moved to Discord. The team has also announced two books (Server Side Swift with Vapor and Server-side Swift (Vapor Edition)) written specifically for Vapor 3.

Benchmarks

Vapor 3 introduces benchmarks for this release, available on GitHub. The benchmarks were run on two identical DigitalOcean droplets: one for hosting the frameworks and one for running the benchmark. The benchmarker program is a small script written in Swift that runs wrk and captures the results. It is capable of doing multiple runs and averaging the results. Vapor achieved state-of-the-art results on the plaintext benchmarks.

To know about further updates and other minor changes, be sure to check out the updated website.

Your First Swift Program [tutorial]
Swift for TensorFlow is now open source [news]
RxSwift Part 1: Where to Start? Beginning with Hot and Cold Observables [tutorial]


Django is revamping its governance model, plans to dissolve Django Core team

Bhagyashree R
21 Nov 2018
4 min read
Yesterday, James Bennett, a software developer and an active contributor to the Django web framework, published a summary of a proposal to dissolve the Django Core team and revoke commit bits. Re-forming or reorganizing the Django core team has been a topic of discussion for the last couple of years, and this proposal aims to turn that discussion into real action.

What are the reasons behind the proposal to dissolve the Django Core team?

Unable to bring in new contributors

Django, the open source project, has been facing some difficulty in recruiting and retaining contributors to keep the project alive. Typically, open source projects avoid this situation through corporate sponsorship of contributions: companies that rely on the software have employees who are responsible for maintaining it. This was true in the case of Django as well, but it hasn't really worked out as a long-term plan. Compared to the growth of the web framework, the project has hardly been able to draw contributors from across its entire user base, and it has not been able to bring in new committers at a sufficient rate to replace those who have become less active or completely inactive. This essentially means that Django depends on the goodwill of contributors who mostly don't get paid to work on it and are very few in number, which poses a risk to the future of the framework.

Django committer is seen as a high-prestige title

Currently, decisions are made by consensus, involving input from committers and non-committers on the django-developers list, and the commits to the main Django repository are made by the Django Fellows. Even people who have commit bits of their own, and therefore have the right to push their changes straight into Django, typically use pull requests and start a discussion. The actual governance rarely relies on the committers, but Django committer is still seen as a high-prestige title, and committers are given a lot of respect by the wider community. This creates an impression among potential contributors that they're not "good enough" to match up to those "awe-inspiring titanic beings".

What is this proposal about?

Given the reasons above, the proposal is to dissolve the Django core team and revoke the commit bits. In their place, it introduces two roles called Mergers and Releasers: Mergers would merge pull requests into Django, and Releasers would package and publish releases. Rather than being all-powerful decision-makers, these would be bureaucratic roles. The current set of Fellows would act as the initial set of Mergers, and something similar would happen for Releasers. Instead of committers making the decisions, governance would take place entirely in public, on the django-developers mailing list. As a final tie-breaker, the technical board would be retained and would get some extra decision-making power, mostly related to selecting the Merger/Releaser roles and confirming that new versions of Django are ready for release. The technical board would be elected less often than it currently is, and voting would be open to the public. The Django Software Foundation (DSF) would act as a neutral administrator of the technical board elections.

What are the goals this proposal aims to achieve?

Mr. Bennett believes that eliminating the distinction between the committers and the "ordinary contributors" will open the door to more contributors: "Removing the distinction between godlike “committers” and plebeian ordinary contributors will, I hope, help to make the project feel more open to contributions from anyone, especially by making the act of committing code to Django into a bureaucratic task, and making all voices equal on the django-developers mailing list."

The technical board remains as a backstop for resolving deadlocked decisions. The proposal gives the board additional authority, such as issuing the final go-ahead on releases. Retaining the technical board ensures that Django will not descend into some sort of "chaotic mob rule". With this proposal, the formal description of Django's governance also becomes much more in line with the reality of how the project actually works and has worked for the past several years.

To know more in detail, read the post by James Bennett: Django Core no more.

Django 2.1.2 fixes major security flaw that reveals password hash to "view only" admin users
Django 2.1 released with new model view permission and more
Getting started with Django and Django REST frameworks to build a RESTful app


Web Security Update: CASL 2.0 releases!

Sunith Shetty
13 Apr 2018
2 min read
CASL has released version 2.0, bringing with it several compelling improvements to web app authorization. CASL is an isomorphic authorization JavaScript library that lets you define which resources a given user is allowed to access. Permissions are defined in a single location, so you don't have to duplicate them across UI components, API services, and database queries.

Some of the noteworthy changes in CASL 2.0 are:

Package refactoring

Refactoring is the process of changing a software system to improve the internal structure of the code without altering its external behavior. CASL 2.0 has been refactored into a Lerna monorepo. As a result, MongoDB-related functionality has moved into a separate package, decreasing the core library size. You can find the core package at @casl/ability and MongoDB-related functionality at @casl/mongoose, while helper functions live at @casl/ability/extra. You don't need to worry about updating your dependencies, thanks to the renovate bot.

Complementary packages for frontend frameworks

CASL now has complementary packages for leading frontend frameworks such as React, Vue, Angular, and Aurelia, so you can integrate CASL into different single-page applications with ease. For more details, you can refer to the README file for each library: CASL Vue package, CASL React package, CASL Angular package, CASL Aurelia package.

Set abilities per field

You can now set permissions per field of your application. For example, you can give certain users the ability to change the name of a product but not its description, and show suitable form fields for different roles in the admin panel. (A brief illustrative sketch of defining abilities appears at the end of this article.)

Demo examples

If you want demo tutorials for CASL 2.0 and its complementary packages, you can visit:

Integrate CASL authorization in Vuejs2 application using CASL and Vue
Integrate CASL authorization in React application using CASL and React
Integrate CASL authorization in Aurelia application using CASL and Aurelia
Integrate CASL authorization in Expressjs application using CASL and Expressjs
Integrate CASL authorization in Feathersjs application using CASL and Feathersjs

If you want to start using the CASL library in your project or work, you can visit the GitHub page.
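To make the ability and per-field permission ideas above concrete, here is a minimal sketch in TypeScript. It assumes the @casl/ability 2.x AbilityBuilder API and uses a hypothetical Product subject and field names; consult the CASL documentation for the exact API of the version you use.

```ts
import { AbilityBuilder } from '@casl/ability';

// All permissions are defined in a single place. The third argument to `can`
// restricts the rule to specific fields -- the per-field feature added in 2.0.
const ability = AbilityBuilder.define((can, cannot) => {
  can('read', 'Product');               // anyone may read products
  can('update', 'Product', ['name']);   // may change a product's name...
  cannot('delete', 'Product');          // ...but never delete it
});

// UI components, API services, and database queries can all consult the same
// ability object instead of duplicating permission checks.
console.log(ability.can('update', 'Product', 'name'));        // true
console.log(ability.can('update', 'Product', 'description')); // false (not granted)
console.log(ability.can('delete', 'Product'));                // false
```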


Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right

Bhagyashree R
08 Nov 2018
3 min read
Yesterday, Apollo introduced the Apollo GraphQL Platform for product engineering teams. It is built on Apollo's core open source GraphQL client and server and comes with additional open source devtools and cloud services. The platform is a combination of open source components, commercial extensions, and cloud services. (The announcement includes a diagram depicting the platform's architecture; source: Apollo GraphQL.)

The Apollo GraphQL Platform consists of the following components:

Core open source components

Apollo Server: a JavaScript GraphQL server used to define a schema and a set of resolvers that implement each part of that schema. It supports AWS Lambda and other serverless environments. (A minimal usage sketch appears at the end of this article.)
Apollo Client: a GraphQL client that manages data and state in an application. It comes with integrations for React, React Native, Vue, Angular, and other view layers.
iOS and Android clients: these clients allow querying a GraphQL API from native iOS and Android applications.
Apollo CLI: a command-line client that provides access to Apollo cloud services.

Cloud services

Schema registry: a registry that acts as a central source of truth for a schema. It propagates all changes and details of your data, allowing multiple teams to collaborate with full visibility and security on a single data graph.
Client registry: a registry that enables you to track each known consumer of a schema, which can include both pre-registered and ad hoc clients.
Operation registry: a registry of all the known operations against the schema, which can similarly include both pre-registered and ad hoc operations.
Trace warehouse: a data pipeline and storage layer that captures structured information about each GraphQL operation processed by an Apollo Server.

Apollo Gateway

The GraphQL gateway is the commercial plugin for Apollo Server. It allows multiple teams to collaborate on a single, organization-wide schema without mixing everyone's code together in a monolithic single point of failure. To do that, the gateway composes "micro-schemas" that reference each other into a single master schema, which looks to a client just like any regular GraphQL schema.

Workflows

In addition to these components, Apollo also implements some useful workflows for managing a GraphQL API, including:

Schema change validation: checks the compatibility of a given schema against a set of previously observed operations, using the trace warehouse, operation registry, and (typically) the client registry.
Safelisting: Apollo provides an end-to-end mechanism for safelisting known clients and queries, a recommended best practice that limits production use of a GraphQL API to specific pre-arranged operations.

To read the full announcement, check out Apollo's official announcement.

Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’
7 reasons to choose GraphQL APIs over REST for building your APIs
Baidu open sources ApolloScape and collaborates with Berkeley DeepDrive to further machine learning in automotives
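As a concrete illustration of the Apollo Server component described above, here is a minimal sketch using the apollo-server 2.x JavaScript API, written as TypeScript. The schema and resolver are illustrative examples, not part of the announcement.

```ts
import { ApolloServer, gql } from 'apollo-server';

// An illustrative schema: a single query field.
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

// Resolvers implement each part of the schema.
const resolvers = {
  Query: {
    hello: () => 'Hello from Apollo Server',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`GraphQL server ready at ${url}`);
});
```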


Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation

Bhagyashree R
09 Nov 2018
3 min read
On Tuesday, The Linux Foundation announced that Facebook's GraphQL project has moved to a newly established GraphQL Foundation, which will be hosted by the non-profit Linux Foundation. The foundation will be dedicated to enabling widespread adoption and helping accelerate the development of GraphQL and the surrounding ecosystem.

GraphQL was developed by Facebook in 2012 and open-sourced in 2015. It has been adopted by many companies in production, including Airbnb, Atlassian, Audi, CNBC, GitHub, Major League Soccer, Netflix, Shopify, The New York Times, Twitter, Pinterest, and Yelp.

Why has the GraphQL Foundation been created?

The foundation will provide a neutral home for the community to collaborate and will encourage more participation and contribution. The community will be able to spread responsibilities and costs for infrastructure, which will help increase the overall investment. This neutral governance will also ensure equal treatment in the community.

The co-creator of GraphQL, Lee Byron, said: "As one of GraphQL’s co-creators, I’ve been amazed and proud to see it grow in adoption since its open sourcing. Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support."

The foundation will also provide more resources for the GraphQL community, which will benefit all contributors. It will help in organizing events and working groups, formalizing governance structures, providing marketing support to the project, and handling IP and other legal issues as they arise.

The Executive Director of The Linux Foundation, Jim Zemlin, believes that the new foundation will ensure long-term support for GraphQL: "We are thrilled to welcome the GraphQL Foundation into the Linux Foundation. This advancement is important because it allows for long-term support and accelerated growth of this essential and groundbreaking technology that is changing the approach to API design for cloud-connected applications in any language."

In the next few months, The Linux Foundation, along with Facebook and the GraphQL community, will be finalizing the founding members of the GraphQL Foundation.

Read the full announcement on The Linux Foundation's website and also check out the GraphQL Foundation's website.

Introducing Apollo GraphQL Platform for product engineering teams of all sizes to do GraphQL right
7 reasons to choose GraphQL APIs over REST for building your APIs
Apollo 11 source code: A small step for a woman, and a huge leap for ‘software engineering’


Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, Google announced that it has teamed up with the creator of the Robots Exclusion Protocol (REP), Martijn Koster, and other webmasters to make the 25-year-old protocol an internet standard. The REP, better known as robots.txt, has now been submitted to the IETF (Internet Engineering Task Force). Google has also open sourced its robots.txt parser and matcher as a C++ library.

https://twitter.com/googlewmc/status/1145634145261051906

REP was created back in 1994 by Martijn Koster, a software engineer known for his contributions to internet search. Since its inception, it has been widely adopted by websites to indicate whether web crawlers and other automatic clients are allowed to access the site or not. When an automatic client wants to visit a website, it first checks robots.txt, which contains something like this:

User-agent: *
Disallow: /

The User-agent: * statement means that the rules apply to all robots, and Disallow: / means that a robot is not allowed to visit any page of the site. (A toy sketch of this matching logic appears at the end of this article.)

Despite being used widely on the web, REP is still not an internet standard. With no rules set in stone, developers have interpreted this "ambiguous de-facto protocol" differently over the years. Also, it has not been updated since its creation to address modern corner cases. The proposed REP draft is a standardized and extended version of REP that gives publishers fine-grained controls to decide what they would like to be crawled on their site and potentially shown to interested users. The following are some of the important updates in the proposed REP:

It is no longer limited to HTTP and can be used by any URI-based transfer protocol, for instance FTP or CoAP.
Developers need to parse at least the first 500 kibibytes of a robots.txt. This ensures that connections are not kept open for too long, avoiding unnecessary strain on servers.
It defines a new maximum caching time of 24 hours, after which crawlers cannot reuse a cached robots.txt. This allows website owners to update their robots.txt whenever they want and also keeps crawlers from overloading sites with robots.txt requests.
It also defines a provision for cases when a previously accessible robots.txt file becomes inaccessible because of server failures. In such cases the disallowed pages will not be crawled for a reasonably long period of time.

This updated REP standard is currently in its draft stage, and Google is now seeking feedback from developers. It wrote, "we uploaded the draft to IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right."

To know more in detail, check out the official announcement by Google. Also, check out the proposed REP draft.

Do Google Ads secretly track Stack Overflow users?
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
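The User-agent / Disallow matching described above can be sketched in a few lines. This is a deliberately simplistic, illustrative matcher written in TypeScript, not Google's open-sourced C++ parser; it assumes a fetch-capable runtime (a browser or Node 18+) and approximates the draft's 500 KiB limit by character count.

```ts
const MAX_CHARS = 500 * 1024; // rough stand-in for the draft's 500 KiB parse limit

// Returns true if `path` appears crawlable for a generic ('*') user agent.
async function isAllowed(siteOrigin: string, path: string): Promise<boolean> {
  const res = await fetch(new URL('/robots.txt', siteOrigin));
  if (!res.ok) return true; // no robots.txt: everything may be crawled

  const text = (await res.text()).slice(0, MAX_CHARS);
  let inWildcardGroup = false;
  const disallowed: string[] = [];

  for (const rawLine of text.split('\n')) {
    const line = rawLine.split('#')[0].trim(); // drop comments
    const sep = line.indexOf(':');
    if (sep === -1) continue;
    const field = line.slice(0, sep).trim().toLowerCase();
    const value = line.slice(sep + 1).trim();

    if (field === 'user-agent') {
      inWildcardGroup = value === '*'; // this toy only honours the wildcard group
    } else if (field === 'disallow' && inWildcardGroup && value) {
      disallowed.push(value); // e.g. '/' disallows the whole site
    }
  }
  return !disallowed.some((prefix) => path.startsWith(prefix));
}

// Example: isAllowed('https://example.com', '/private/page').then(console.log);
```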

Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users

Bhagyashree R
04 Oct 2018
2 min read
On Monday, Django 2.1.2 was released, addressing a security issue regarding password hash disclosure. Along with that, this version fixes several other bugs from 2.1.1 and also comes with the latest string translations from Transifex.

Users' password hash visible to "view only" admin users

In Django 2.1.1, admin users who had permission to change the user model could see a part of the password hash in the change form. Also, admin users with "view only" permission to the user model were allowed to see the entire hash. This could prove to be a big problem if a password is weak or your site uses weaker password hashing algorithms such as MD5 or SHA1. The vulnerability was assigned CVE-2018-16984 on 13th September 2018 and has been resolved in this security release.

Bug fixes

A bug was fixed where a lookup using F() on a non-existent model field didn't raise FieldError.
The migrations loader now ignores files starting with a tilde or underscore.
Migrations now correctly detect changes made to Meta.default_related_name.
Support for cx_Oracle 7 has been added.
Quoting of unique index names has been fixed.
Sliced queries with multiple columns with the same name no longer crash on Oracle 12.1.
A crash was fixed that occurred when a user with the "view only" (but not "change") permission made a POST request to an admin user change form.

To read the release notes of Django, head over to its official website.

Django 2.1 released with new model view permission and more
Python web development: Django vs Flask in 2018


Can an Open Web Index break Google’s stranglehold over the search engine market?

Bhagyashree R
22 Apr 2019
4 min read
Earlier this month, Dirk Lewandowski, Professor of Information Research & Information Retrieval at Hamburg University of Applied Sciences, Germany, published a proposal for building an index of the Web. His proposal aims to separate the infrastructure part of a search engine from the services part.

Search engines are our way to the web, which makes them an integral part of the Web's infrastructure. While there are a significant number of search engines on the market, only a few relevant search engines have their own index, for example Google, Bing, Yandex, and Baidu. Other search engines that pull results from these, for instance Yahoo, cannot really be considered search engines in the true sense. The US search engine market is split between Google and Bing at roughly two-thirds to one-third, respectively. In most European countries, Google holds about 90% of the market.

Highlighting the implications of Google's dominance in the current search engine market, the proposal reads, "As this situation has been stable over at least the last few years, there have been discussions about how much power Google has over what users get to see from the Web, as well as about anti-competitive business practices, most notably in the context of the European Commission's competitive investigation into the search giant."

The proposal aims to bring plurality to the search engine market, not only in the number of search engine providers but also in the number of search results users get to see when using search engines. The idea is to implement the "missing part of the Web's infrastructure": a searchable index. Separating the infrastructure part of the search engine from the services part would allow a multitude of services, whether existing search engines or otherwise, to run on a shared infrastructure. A figure in the proposal shows how the public infrastructure crawls the web to index its content and provides an interface to the services built on top of the index (credits: arXiv).

The indexing stage is split into basic indexing and advanced indexing. Basic indexing is responsible for providing the data in a form that services built on top of the index can easily and rapidly process. Though services are allowed to do their own further indexing to prepare the documents, the open infrastructure also provides some advanced indexing, which adds information to the indexed documents, for example semantic annotations. This advanced indexing requires an extensive infrastructure for data mining and processing. Services will be able to decide for themselves to what extent they want to rely on the pre-processing infrastructure provided by the Open Web Index. A common design principle that can be adopted is to allow services maximum flexibility.

Many users are supporting this idea. One Redditor said, "I have been wanting this for years... If you look at the original Yahoo Page when Yahoo first started out it attempted to solve this problem. I believe this index could be regionally or language based."

Others believe that implementing an open web index will come with its own challenges. "One of the challenges of creating a "web index" is first creating indexes of each website. "Crawling" to discover every page of a website, as well as all links to external sites, is labour-intensive and relatively inefficient. Part of that is because there is no 100% reliable way to know, before we begin accessing a website, each and every URL for each and every page of the site. There are inconsistent efforts such "site index" pages or the "sitemap" protocol (introduced by Google), but we cannot rely on all websites to create a comprehensive list of pages and to share it," adds another Redditor.

To read more in detail, check out the paper titled The Web is missing an essential part of infrastructure: an Open Web Index.

Tim Berners-Lee plans to decentralize the web with ‘Solid’, an open-source project for “personal empowerment through data”
Google Cloud Next’19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you


F5 Networks is acquiring NGINX, a popular web server software for $670 million

Bhagyashree R
12 Mar 2019
3 min read
Yesterday, F5 Networks, the company that offers businesses cloud and security application services, announced that it is set to acquire NGINX, the company behind the popular open-source web server software, for approximately $670 million. The two companies are coming together to provide their customers with consistent application services across every environment.

F5 has seen its growth stall lately: its last quarterly earnings showed only 4% growth compared to the year before. NGINX, on the other hand, has shown 100 percent year-on-year growth since 2014. The company currently boasts 375 million users, with about 1,500 customers for its paid services such as support, load balancing, and API gateway and analytics.

This acquisition will enable F5 to accelerate the 'time to market' of its services for customers building modern applications. F5 plans to enhance the current NGINX offerings using its security solutions and will also integrate its cloud-native innovations with NGINX's load balancing technology. Along with these advancements, F5 will help scale NGINX selling opportunities using its global sales force, channel infrastructure, and partner ecosystem.

François Locoh-Donou, President and CEO of F5, sharing his vision behind acquiring NGINX, said, "F5’s acquisition of NGINX strengthens our growth trajectory by accelerating our software and multi-cloud transformation." He adds, "By bringing F5’s world-class application security and rich application services portfolio for improving performance, availability, and management together with NGINX’s leading software application delivery and API management solutions, unparalleled credibility and brand recognition in the DevOps community, and massive open source user base, we bridge the divide between NetOps and DevOps with consistent application services across an enterprise’s multi-cloud environment."

NGINX's open source community was also a major factor behind this acquisition. F5 will continue investing in the NGINX open source project, as open source is a core part of its multi-cloud strategy. F5 expects this to help it accelerate product integrations with leading open source projects and open doors for more partnership opportunities.

Gus Robertson, CEO of NGINX, Inc., said, "NGINX and F5 share the same mission and vision. We both believe applications are at the heart of driving digital transformation. And we both believe that an end-to-end application infrastructure—one that spans from code to customer—is needed to deliver apps across a multi-cloud environment."

The acquisition has been approved by the boards of directors of both F5 and NGINX and is expected to close in the second calendar quarter of 2019. Once the acquisition is complete, NGINX CEO Gus Robertson and founders Igor Sysoev and Maxim Konovalov will join F5 Networks.

To know more in detail, check out the announcement by F5 Networks.

Now you can run nginx on Wasmjit on all POSIX systems
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack


Masonite 2.0 released, a Python web development framework

Sugandha Lahoti
18 Jun 2018
2 min read
Masonite, the popular Python web development framework, has released a new version. Masonite 2.0 comes with several new features, including new status codes, database seeding, built-in cron scheduling, controller constructor resolving, speed improvements, and much more.

A new 'Tinker' command

Masonite 2.0 adds a new Tinker command that starts a Python shell and imports the container. It works as a great debugging tool and can be used to verify that objects are loaded into the container correctly.

A new task scheduler

Masonite 2.0 adds a task scheduler, a new default package that allows scheduling recurring tasks. You can read about the Masonite Scheduler under the Task Scheduling documentation.

Automatic server reloading

A huge update to Masonite is the new --reload flag on the serve command. The server will now automatically restart when it detects a file change. You can use the -r flag as a shorthand.

Autoloading

With the new autoloading feature, you can list directories in the AUTOLOAD constant in the config/application.py file and their classes will automatically be loaded into the container. Autoloading is great for loading commands and models into the container when the server starts up.

Database seeding support

Masonite 2.0 adds the ability to seed the database with dummy data. Seeding helps populate the database with data that would be needed for future development.

Explicitly imported providers

Providers are now explicitly imported at the top of the file and added to the PROVIDERS list, located in config/providers.py. This completely removes the need for string providers and boosts the performance of the application substantially.

Status code provider

Masonite 2 removes the bland error pages for codes such as 404 and 500 and replaces them with a cleaner view. It also allows adding custom error pages.

Upgrading from Masonite 1.6 to Masonite 2.0

The move from Masonite 1.6 to Masonite 2.0 involves quite a large number of changes and updates in a single release. However, upgrading takes only around 30 minutes for an average-sized project. Read the Masonite upgrade guide for a step-by-step walkthrough.

You can read the release notes for the full list of features.

Python web development: Django vs Flask in 2018
What the Python Software Foundation & Jetbrains 2017 Python Developer Survey had to reveal
Should you move to Python 3? 7 Python experts’ opinions

Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument

Bhagyashree R
18 Feb 2019
4 min read
On Friday, a study was published on WhoTracks.me analyzing the performance of the most commonly used ad blockers. The study was motivated by the recent Manifest V3 controversy, which revealed that Google developers are planning an update that could end up crippling all ad blockers.

What update are Chrome developers introducing?

The developers are planning to introduce an alternative to the webRequest API named the declarativeNetRequest API, which limits the blocking version of the webRequest API. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions. The Chrome developers listed two reasons behind this update: performance and a better privacy guarantee for users. What this API does is allow extensions to tell Chrome what to do with a given request, rather than have Chrome forward the request to the extension. This allows Chrome to handle a request synchronously. (A rough sketch of what a declarative rule looks like appears at the end of this article.)

One of the ad blocker maintainers has reported an issue on the Chromium bug tracker for this feature: "If this (quite limited) declarativeNetRequest API ends up being the only way content blockers can accomplish their duty, this essentially means that two content blockers I have maintained for years, uBlock Origin (“uBO”) and uMatrix, can no longer exist."

What did the study by Ghostery reveal?

The study addresses the performance argument made by the developers. For this study, the Ghostery team analyzed the network performance of the most commonly used ad blockers: uBlock Origin, Adblock Plus, Brave, DuckDuckGo, and Cliqz's Ghostery. The study revealed that these content blockers, except DuckDuckGo, have only sub-millisecond median decision time per request. This small amount of time will not cause any overhead noticeable by users. Additionally, the efficiency of content blockers is continuously being improved with innovative approaches or with the help of technologies like WebAssembly.

How did Google developers react to this study and the feedback surrounding Manifest V3?

Following the publication of the study and after looking at the feedback, Devlin Cronin, a Software Engineer at Google, clarified that these changes are not really meant to prevent content blocking. Cronin added that the changes listed in Manifest V3 are still in the draft and design stage. In the Google group Manifest V3: Web Request Changes, Cronin said, "We are committed to preserving that ecosystem and ensuring that users can continue to customize the Chrome browser to meet their needs. This includes continuing to support extensions, including content blockers, developer tools, accessibility features, and many others. It is not, nor has it ever been, our goal to prevent or break content blocking."

The team is not planning to remove the webRequest API. Cronin added, "In particular, there are currently no planned changes to the observational capabilities of webRequest (i.e., anything that does not modify the request)." Based on the feedback and concerns shared, the Chrome team did make some revisions, including adding support for dynamic rules to the declarativeNetRequest API. They are also planning to increase the ruleset size, which was 30k earlier.

Users are, however, not convinced by this clarification. One user commented on Hacker News, "Keep in mind that their story about performance has been shown to be a complete lie. There is no performance hit from using webRequest like this. This is about removing sophisticated ad blockers in order to defend Google's revenue stream, plain and simple."

Coincidentally, a Chrome 72 upgrade seems to break ad blockers in a way that they can't see or block analytics anymore if the web page uses a service worker.

https://twitter.com/jviide/status/1096947294920949760

Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report
Google announces the general availability of a new API for Google Docs
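For context on the declarative approach described above, here is a rough sketch of what a static content-blocking rule might look like, written as a TypeScript object literal. The field names follow the declarativeNetRequest design as publicly discussed at the time; since Manifest V3 was still in the design stage, treat the exact shape as an assumption, and the blocked domain is hypothetical.

```ts
// With the blocking webRequest API, Chrome forwards every request to the
// extension and waits for a verdict. With declarativeNetRequest, the extension
// instead declares rules like this up front and Chrome evaluates them itself.
const blockTrackerRule = {
  id: 1,
  priority: 1,
  action: { type: 'block' },
  condition: {
    urlFilter: '||tracker.example.com',          // hypothetical ad/tracking host
    resourceTypes: ['script', 'xmlhttprequest'], // only block these request types
  },
};

// In an extension, rules like this would live in a static JSON ruleset
// referenced from manifest.json rather than in application code.
export default [blockTrackerRule];
```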


Mastodon 2.7, a decentralized alternative to social media silos, is now out!

Bhagyashree R
21 Jan 2019
2 min read
Yesterday, the Mastodon team released Mastodon 2.7, which comes with major improvements to the admin interface, a new moderation warning system, and more. Mastodon is a free, open-source social network server based on open web protocols like ActivityPub and OStatus. It aims to provide users with a decentralized alternative to commercial social media silos and to return control of the content distribution channels to the people.

Profile directory

The new profile directory allows users to see active posters on a given Mastodon server and filter them by the hashtags in their profile bio. With the profile directory, users can find people with common interests without having to read through public timelines.

A new moderation warning system

This version comes with a new moderation warning system for Mastodon. Moderators can now inform users if their account is suspended or disabled. They can also send official warnings via e-mail, which are reflected in the moderator interface to keep other moderators up to date.

Improvements in the administration interface

Mastodon 2.7 combines the administration interfaces for known servers and domain blocks into a common area. Admins can see information like the number of accounts known from a particular server, the number of accounts followed from their server, the number of individuals blocked or reported, and so on.

A registration API

A new registration API is introduced, which allows apps to directly accept new registrations from their users, instead of having to send them to a web browser. Users still receive a confirmation e-mail when they sign up through the app, which contains an activation link that can open the app. (A hedged sketch of calling such an endpoint appears at the end of this article.)

New commands for managing a Mastodon server

The tootctl command-line utility used for managing a Mastodon server has received two new commands:

tootctl domains crawl: scan the Mastodon network to discover servers and aggregate statistics about Mastodon's usage.
tootctl accounts follow: make the users on your server follow a specified account. This command comes in handy in cases where an administrator needs to change their account.

You can read the full list of improvements in Mastodon 2.7 on its website.

How Dropbox uses automated data center operations to reduce server outage and downtime
Obfuscating Command and Control (C2) servers securely with Redirectors [Tutorial]
Fortnite server suffered a minor outage, Epic Games was quick to address the issue
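As a rough illustration of the registration API mentioned above, here is a hedged TypeScript sketch of how an app might sign a user up directly instead of sending them to a web browser. The endpoint path (/api/v1/accounts), the parameter names, and the need for an app-level access token are assumptions based on the announcement; consult the Mastodon API documentation for the authoritative contract.

```ts
interface RegistrationParams {
  username: string;
  email: string;
  password: string;
  agreement: boolean; // the user accepted the server's terms
  locale: string;
}

// Registers a new account on a Mastodon instance via the (assumed) endpoint.
async function registerAccount(
  instance: string,  // e.g. 'https://mastodon.example' (hypothetical instance)
  appToken: string,  // token obtained when the app registered itself
  params: RegistrationParams,
): Promise<unknown> {
  const res = await fetch(`${instance}/api/v1/accounts`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${appToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(params),
  });
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  // The user still receives a confirmation e-mail with an activation link.
  return res.json();
}
```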


Netlify raises $30 million for a new ‘Application Delivery Network’, aiming to replace servers and infrastructure management

Savia Lobo
11 Oct 2018
3 min read
On Tuesday, Netlify, a San Francisco based company, announced that it has raised $30 million in a series B round of funding for a new platform named 'Application Delivery Network', designed specifically to assist web developers in building newer applications. The funding was led by Kleiner Perkins' Mamoon Hamid, with Andreessen Horowitz and the founders of Slack, Yelp, GitHub, and Figma participating.

Founded in 2015, Netlify provides an all-in-one workflow to build, deploy, and manage modern web projects. The new platform will enable all content and applications to be created directly on a global network, bypassing the need to ever set up or manage servers.

Vision behind the global 'Application Delivery Network'

Netlify has helped a lot of organizations dump web servers, with no infrastructure required. It also replaces the need for a CDN, and thus a lot of servers. To implement the new architecture, Netlify provides developers with a git-centric workflow that supports APIs and microservices. Netlify's Application Delivery Network removes the last dependency on origin infrastructure, allowing companies to host the entire application globally using APIs and microservices.

Mathias Biilmann, Netlify founder and CEO, said that more devices bring additional complications. He adds, "Customers have come to us with AWS environments that have dozens or even hundreds of them for a single application. Our goal is to remove the requirement for those servers completely. We're not trying to make managing infrastructure easy. We want to make it totally unnecessary."

Investor's take

Talking about the investment in Netlify, Mamoon Hamid, Managing Member and General Partner at the venture capital firm Kleiner Perkins, said, "In a sense, they are completely rethinking how the modern web works. But the response to what they are doing has been overwhelming. Most of the top projects in this developer space have already migrated their sites: React, Vue, Gatsby, Docker, and Kubernetes are all Netlify powered. The early traction really shows they hit a nerve with the developer community."

To top it off, Chris Coyier, CSS expert and co-founder of CodePen, says, "This is where the web is going. Netlify is just bringing it to us all a lot faster. With all the innovation in the space, this is an exciting time to be a developer."

What users say about Netlify

In a discussion thread on Hacker News, users love how Netlify helps web developers with their day-to-day web application tasks. Some of the features mentioned by users include:

Netlify provides users with forms, lambdas, and very easy testing by just pushing to another git branch.
It provides the ability to publish using a simple `git push` and does all the rest of the work, including asset minification and bundling.
Netlify connects to GitHub and rebuilds your site automatically when a change is made to the master branch. Users just have to connect their GitHub account through the UI.

To know more about this news in detail, read Netlify's official announcement.

How to build a real-time data pipeline for web developers – Part 1 [Tutorial]
How to build a real-time data pipeline for web developers – Part 2 [Tutorial]
Google wants web developers to embrace AMP. Great news for users, more work for developers

Chrome, Safari, Opera, and Edge to make hyperlink auditing compulsorily enabled

Bhagyashree R
08 Apr 2019
3 min read
Last week, Bleeping Computer reported that the latest versions of Google Chrome, Safari, Opera, and Microsoft Edge no longer allow users to disable hyperlink auditing, which was possible in previous versions.

What is hyperlink auditing?

The Web Applications 1.0 specification introduced a new feature in HTML5 called hyperlink auditing for tracking clicks on links. To track user clicks, the "a" and "area" elements support a "ping" attribute that takes one or more URIs as a value. When you click such a hyperlink, the "href" target is loaded as expected, but the browser additionally sends an HTTP POST request to the ping URL. The request headers can then be examined by the scripts that receive the ping POST request to find out where the ping came from. (A small sketch of a ping-receiving endpoint appears at the end of this article.)

Which browsers have made hyperlink auditing compulsory?

After finding this issue in Safari Technology Preview 72, Jeff Johnson, a professional Mac and iOS software engineer, reported it to Apple. Despite this, Apple released Safari 12.1 without any setting to disable hyperlink auditing. Prior to Safari 12.1, users were able to disable this feature with a hidden preference.

Similar to Safari, hyperlink auditing is enabled by default in Google Chrome. Users could previously disable it by going to "chrome://flags#disable-hyperlink-auditing" and setting the flag to "Disabled", but in the Chrome 74 Beta and Chrome 75 Canary builds this flag has been completely removed. The Microsoft Edge and Opera 61 Developer builds also remove the option to disable or enable hyperlink auditing.

Firefox and Brave, on the other hand, have disabled hyperlink auditing by default. In Firefox 66, Firefox Beta 67, and Firefox Nightly 68, users can enable it using the browser.send_pings setting; the Brave browser, however, does not allow users to enable it at all.

How are people reacting to this development?

The hyperlink auditing feature has received mixed reactions from developers and users. While some are concerned about its privacy implications, others think the mechanism makes tracking more transparent.

Sharing how this feature could be misused, Chris Weber, co-founder of Casaba Security, wrote in a blog post, "the URL could easily be appended with junk causing large HTTP requests to get sent to an inordinately large list of URIs. Information could be leaked in the usual sense of Referrer/Ping-From leaks."

One Reddit user said that this feature is privacy neutral, as this kind of tracking can be done with JavaScript or non-JavaScript redirects anyway. Sharing other advantages of the ping attribute, another user said, "The ping attribute for hyperlinks aims to make this process more transparent, with additional benefits such as optimizing network traffic to the target page loads more quickly, as well as an option to disable sending the pings for more user-friendly privacy."

Though this feature brings some advantages, the Web Hypertext Application Technology Working Group (WHATWG) encourages user agents to put control in the hands of users by providing a way to disable this behavior. "User agents should allow the user to adjust this behavior, for example in conjunction with a setting that disables the sending of HTTP `Referer` (sic) headers. Based on the user's preferences, UAs may either ignore the ping attribute altogether or selectively ignore URLs in the list," mentions WHATWG.

To read the full story, visit Bleeping Computer.

Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox
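To illustrate the receiving side of hyperlink auditing described above, here is a small TypeScript sketch of a server that accepts the POST request a browser sends to a link's ping URL and inspects its headers to learn where the click came from. The Ping-From and Ping-To header names come from the HTML spec's hyperlink auditing section; treat the exact details as an assumption and verify against the spec and the browsers you target.

```ts
import { createServer } from 'http';

const server = createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/ping') {
    const from = req.headers['ping-from']; // page the link was clicked on
    const to = req.headers['ping-to'];     // destination the link points at
    console.log(`click recorded: ${from} -> ${to}`);
    res.writeHead(204);                    // no response body needed for a ping
    res.end();
    return;
  }
  res.writeHead(404);
  res.end();
});

server.listen(8080, () => console.log('ping receiver listening on :8080'));
```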


Mozilla considers blocking DarkMatter after Reuters reported its link with a secret hacking operation, Project Raven

Bhagyashree R
07 Mar 2019
3 min read
Back in January this year, Reuters, in an investigative piece, reported that DarkMatter was providing staff for a secret hacking operation called Project Raven. Following that report, Mozilla is now considering whether it should block DarkMatter from serving as one of its internet security providers.

The unit working on Project Raven was made up mostly of former US intelligence officials, who were allegedly conducting privacy-threatening operations for the UAE government. The team behind this project worked out of a converted mansion in Abu Dhabi, which they called "the Villa". These operations included hacking the accounts of human rights activists, journalists, and officials from rival governments.

On February 25, in a letter addressed to Mozilla, DarkMatter CEO Karim Sabbagh denied all the allegations reported by Reuters and denied having anything to do with Project Raven. Sabbagh wrote in the letter, "We have never, nor will we ever, operate or manage non-defensive cyber activities against any nationality."

Mozilla's response to the Reuters report

In an interview last week, Mozilla executives said that the Reuters report has raised concerns inside the company about DarkMatter misusing its authority to certify websites as safe. Mozilla is yet to decide whether it should deny DarkMatter this authority. Selena Deckelmann, a senior director of engineering for Mozilla, said, "We don't currently have technical evidence of misuse (by DarkMatter) but the reporting is strong evidence that misuse is likely to occur in the future if it hasn't already."

Deckelmann further shared that Mozilla is also concerned about the certifications DarkMatter has granted and may strip some or all of the 400 certifications that DarkMatter has granted to websites under a limited authority since 2017. Marshall Erwin, director of trust and security for Mozilla, said that DarkMatter could use its authority for "offensive cybersecurity purposes rather than the intended purpose of creating a more secure, trusted web."

A website is designated as secure if it is certified by an external authorized organization called a Certification Authority (CA). The certifying organization is also responsible for securing the connection between an approved website and its users. To get this authority, organizations need to apply to individual browser makers like Mozilla and Apple. DarkMatter has been pressing Mozilla for full authority to grant certifications since 2017. Giving it full authority would allow it to issue certificates to hackers impersonating real websites, including banks.

https://twitter.com/GossiTheDog/status/1103596200891244545

To know more about this news in detail, read the full story on Reuters' official website.

Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust
Mozilla shares key takeaways from the Design Tools survey