
How-To Tutorials - Web Development

Savia Lobo
23 Oct 2019
6 min read

Firefox 70 released with better security, CSS, and JavaScript improvements

The Mozilla team announced the much-awaited release of Firefox 70 yesterday, with new features like secure password generation with Lockwise and the new Firefox Privacy Protection Report. Firefox 70 also includes a plethora of additions for developers, such as DOM mutation breakpoints and inactive CSS rule indicators in the DevTools, several new CSS text properties, two-value display syntax, JS numeric separators, and much more.

Firefox 70 centers around enhanced privacy and security

The new Firefox 70 includes Enhanced Tracking Protection (ETP), along with a Firefox Privacy Protection Report that gives additional details and more visibility into how you're being tracked online so you can better combat it. Enhanced Tracking Protection was enabled by default in the browser in September this year. The report highlights how ETP prevents third-party trackers from building a user's profile based on their online activity, and it includes the number of cross-site and social media trackers, fingerprinters, and cryptominers Mozilla blocked.

The report also helps users keep themselves updated with Firefox Monitor and Firefox Lockwise. Firefox Monitor gives users a summary of the number of unsafe passwords that may have been used in a breach, so that they can take action to update and change those passwords. Firefox Lockwise helps users manage passwords across synced devices: it includes a button users can click to view their logins and updates, and they can also quickly view and manage how many devices they're syncing and sharing passwords with. To know more about security in Firefox 70, read Mozilla's blog.

What's new in Firefox 70

Updated HTML forms and secure passwords

To generate secure passwords, the team has updated HTML input elements: any input element of type password will have an option to generate a secure password available in the context menu, which can then be stored in Lockwise.
In addition, any type="password" field with autocomplete="new-password" set on it will show an autocomplete UI to generate a new password in context.

New CSS improvements

Firefox 70 includes some CSS improvements, such as new options for styling underlines and a new set of two-keyword display values.

Options for styling underlines include three new properties for text-decoration (underline):

text-decoration-thickness: sets the thickness of lines added via text-decoration.
text-underline-offset: sets the distance between a text-decoration and the text it is set on. Bear in mind that this only works on underlines.
text-decoration-skip-ink: sets whether underlines and overlines are drawn if they cross descenders and ascenders. The default value, auto, causes them to only be drawn where they do not cross over a glyph. To allow underlines to cross glyphs, set the value to none.

Two-keyword display values

Until now, the display property has taken a single value. However, the team notes that "the boxes on a page have an outer display type, which determines how the box is laid out in relation to other boxes on the page, and an inner display type, which determines how the box's children will behave." The two-keyword values allow you to explicitly specify the outer and inner display values. In supporting browsers (which currently includes only Firefox), the single keyword values map to new two-keyword values, for example:

display: flex; is equivalent to display: block flex;
display: inline-flex; is equivalent to display: inline flex;

JavaScript improvements

Firefox 70 now supports numeric separators in JavaScript: underscores can be used as separators in large numbers to make them more readable. Other JavaScript improvements include:

Intl improvements

Firefox 70 includes improved JavaScript i18n (internationalization), starting with the implementation of the Intl.RelativeTimeFormat.formatToParts() method.
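The numeric separators and the new Intl method mentioned above are easy to demonstrate. A quick sketch, assuming a JavaScript engine that ships both features, as Firefox 70 does:

```javascript
// Numeric separators: underscores make large literals readable
// without changing their value.
const budget = 1_000_000;
console.log(budget === 1000000); // true

// Intl.RelativeTimeFormat.formatToParts() returns the localized
// value broken into typed parts instead of a single string.
const rtf = new Intl.RelativeTimeFormat("en");
const parts = rtf.formatToParts(-1, "day");
console.log(parts);
// e.g. [ { type: "integer", value: "1", unit: "day" },
//        { type: "literal", value: " day ago" } ]
```

The BigInt support mentioned below works the same way: new Intl.NumberFormat("en").format() will now accept a BigInt argument and format the full value without precision loss.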
This is a special version of Intl.RelativeTimeFormat.format() that returns an array of objects, each one representing a part of the value, rather than returning a string of the localized time value. Also, Intl.NumberFormat.format() and Intl.NumberFormat.formatToParts() now accept BigInt values.

Performance improvements

The inclusion of the new baseline interpreter has sped up JavaScript. The code for the new interpreter includes shared code from the existing Baseline JIT. You can read more about it in "The Baseline Interpreter: a faster JS interpreter in Firefox 70."

New developer tools

The Developer Tools Accessibility panel now includes an audit for keyboard accessibility and a color deficiency simulator for systems with WebRender enabled.

Pause option on DOM mutation in the Debugger

DOM Mutation Breakpoints (aka DOM Change Breakpoints) let you pause scripts that add, remove, or change specific elements. Once a DOM mutation breakpoint is set, you'll see it listed under "DOM Mutation Breakpoints" in the right-hand pane of the Debugger; this is also where you'll see breaks reported. (Source: Mozilla Hacks)

Color contrast information in the color picker

In the CSS Rules view, you can click foreground colors with the color picker to determine if their contrast with the background color meets accessibility guidelines.

Accessibility inspector: keyboard checks

The Accessibility inspector's "Check for issues" dropdown now includes keyboard accessibility checks. Selecting this option causes Firefox to go through each node in the accessibility tree and highlight all that have a keyboard accessibility issue. Hovering over or clicking each one reveals information about what the issue is, along with a "Learn more" link for more details on how to fix it.

WebSocket inspector

In Firefox DevEdition, the Network monitor now has a new "Messages" panel, which appears when you are monitoring a WebSocket connection (i.e. a 101 response).
It can be used to inspect WebSocket frames sent and received through the connection. This functionality was originally supposed to be in the Firefox 70 general release, but the team had a few more bugs to resolve, so expect it in Firefox 71! For now, users can explore it in the DevEdition.

Fixed issues in Firefox 70

Built-in Firefox pages now follow the system dark mode preference.
Aliased theme properties have been removed, which may affect some themes.
Passwords can now be imported from Chrome on macOS, in addition to the existing support for Windows.
Readability is now greatly improved for underlined or overlined text, including links: the lines are now interrupted instead of crossing over a glyph.

Improved privacy and security indicators

A new crossed-out lock icon indicates sites delivered via insecure HTTP.
The formerly green lock icon is now grey.
The Extended Validation (EV) indicator has been moved to the identity popup that appears when clicking the lock icon.

To know more about the other improvements and bug fixes in Firefox 70 in detail, read Mozilla's official blog.

Read next:

Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Mozilla Thunderbird 78 will include OpenPGP support, expected to be released by Summer 2020

Fatema Patrawala
17 Oct 2019
4 min read

The new WebSocket Inspector will be released in Firefox 71

On Tuesday, the Firefox DevTools team announced that the new WebSocket (WS) inspector will be available in Firefox 71. It is currently ready for developers to use in Firefox Developer Edition.

The WebSocket API is used to create a persistent connection between a client and a server. Because the API sends and receives data at any time, it is used mainly in applications requiring real-time communication. Although it is possible to work directly with the WS API, some existing libraries come in handy and help save time. These libraries can help with connection failures, proxies, authentication and authorization, scalability, and much more. The WS inspector in Firefox DevTools currently supports Socket.IO and SockJS; support for more libraries is still a work in progress.

Key features included in the Firefox WebSocket Inspector

The WebSocket Inspector is part of the existing Network panel UI in DevTools. It was already possible to filter the panel's content for opened WS connections, but now you can also see the actual data transferred through WS frames.

The UI offers a fresh new Messages panel that can be used to inspect WS frames sent and received through the selected WS connection. There are Data and Time columns visible by default, and you can customize the interface to see more columns by right-clicking on the header.

The WS inspector currently supports the following WS protocols:

Plain JSON
Socket.IO
SockJS

SignalR and WAMP will be supported soon. You can use the pause/resume button in the Network panel toolbar to stop intercepting WS traffic.

The Firefox team is still working on a few things for this release, for example a binary payload viewer, indicating closed connections, more protocols like SignalR and WAMP, and exporting WS frames.

For developers, this is a major improvement, and the community is really happy with this news. One of them comments on Reddit, “Finally!
Have been stuck rolling with Chrome whenever I'm debugging websocket issues until now, because it's just so damn useful to see the exact messages sent and received.”

Another user commented, “This came at the most perfect time... trying to interface with a Socket.IO server from a Flutter app is difficult without tools to really look at the internals and see what's going on.”

Some also feel that with such improvements, Firefox may soon challenge Chromium's current dominance. One comment reads, “I hope that in improving its dev tooling with things like WS inspection, Firefox starts to turn the tide from Chromium's current dominance. Pleasing webdevs seems to be the key to winning browser wars. The general pattern is, the devs switch to their preferred browser. When building sites, they do all their build testing against their favourite browser, and only make sure it functions on other browsers (however poorly) as an afterthought. Then everyone else switches to suit, because it's a better experience. It happened when IE was dominant (partly because of dodgy business practices, but also partly because ActiveX was more powerful than early JS). But then Firefox was faster and had [better] devtools and add-ons, so the devs switched to Firefox and everyone followed suit. Then Chrome came onto the scene as a faster browser with even better devtools, and now Chromium+forks is over three quarters of the browser market share. A browser monopoly is bad for the web ecosystem, no matter what browser happens to be dominant.”

To know more about this news, check out the official announcement on the Firefox blog.

Read next:

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit
Mozilla brings back Firefox's Test Pilot Program with the introduction of Firefox Private Network Beta

Richard Gall
14 Oct 2019
10 min read

Modern web development: what makes it ‘modern’?

The phrase 'modern web development' is one that I have clung to during years writing copy for Packt books. But what does it really mean? I know it means something because it sounds right - but there's still a part of me that feels that it's a bit vague and empty.

Although it might sound somewhat fluffy, the truth is that there probably is such a thing as modern web development. Insofar as the field has changed significantly in a matter of years, and things are different now from how they were in, say, 2013, modern web development can be characterised as all the things being done in web development in 2019 that are different from 5-10 years ago. By this I don't just mean trends like artificial intelligence and mobile development (although those are both important). I'm also talking about the more specific ways in which we actually build web projects. So, let's take a look at how we got to where we are and the various ways in which 'modern web development' is, well, modern.

The story of modern web development: how we got here

It sounds obvious, but the catalyst for the changes we see in web development today is the rise of mobile.

Mobile and the rise of web applications

There are a few different parts to this that have got us to where we are today. In the first instance, the growth of mobile in the middle part of this decade (around 2013 or 2014) initiated the trend of mobile-first or responsive web design. Those terms might sound a bit old-hat; if they do, it's a mark of how quickly the web development world has changed. Primarily, though, this was about appearance and UI - making web properties easy to use and navigate on mobile devices, rather than just desktop. Tools like Bootstrap grew quickly, providing an easy and templated way to build mobile-first and responsive websites. But what began as a trend concerned primarily with appearance later shifted as mobile usage grew.
This called for a more sophisticated approach, as mobile users came to expect richer and faster web experiences, and businesses a new way to monetize these significant changes in user behavior. Explore Packt's Bootstrap titles here.

Lightweight apps for data-intensive user experiences

This is where concepts like the single page web app came to the fore. Lightweight and dynamic, and capable of handling data-intensive tasks and changes in state, single page web apps were unique in that they handled logic in the browser rather than on the server. This was arguably a watershed in changing how we think about web development, and it was instrumental in collapsing the well-established distinction between back end and front end.

Behind this trend we saw a shift towards new technologies. Node.js quietly emerged on the scene (arguably it's only in the last couple of years that its popularity has really exploded), and frameworks like Angular were at the height of their popularity. Find a massive range of Node.js eBooks and videos on the Packt store.

Full-stack web development

It's around this time that full-stack development started to accelerate as a trend. Look at Google Trends: you can see how searches for the phrase have grown since the beginning of 2012, and it's around 2015 that the term undergoes a step change in the level of interest. Undoubtedly one of the reasons for this is that the relationship between client and server was starting to shift, which meant the web developer skill set was starting to change as well. As a web developer, you weren't only concerned with how to build the front end, but also with how that front end managed dynamic content and different states.

The rise and fall of Angular

A good corollary to this tale is the fate of AngularJS.
While it rose to the top amidst the chaos and confusion of the mid-teens framework bubble, as the mobile revolution matured in a way that gave way to more sophisticated web applications, the framework soon became too cumbersome. And while Google - the framework's creator - aimed to keep it up to date with Angular 2 and subsequent versions, delays and missteps meant the project lost ground to React.

This isn't to say that Angular is dead and buried; there are plenty of reasons to use Angular over React and other JavaScript tools if the use case is right. But it is nevertheless the case that the Angular project no longer defines web development to the extent that it used to. Explore Packt's Angular eBooks and videos.

The fact that Ionic, the JavaScript mobile framework, is now backed by Web Components rather than Angular is an important indicator of what modern web development actually looks like - and how it contrasts with what we were doing just a few years ago.

The core elements of modern web development in 2019

So, there are a number of core components to modern web development that have evolved out of industry changes over the last decade. Some of these are tools, some are ideas and approaches. All of them are based on the need to balance complex requirements with performance and simplicity.

Web Components

Web Components are the most important element if we're trying to characterise 'modern' web development. The principle is straightforward: Web Components provide a set of reusable custom elements. This makes it easier to build web pages and applications without writing additional lines of code that add complexity to your codebase. The main thing to keep in mind here is that Web Components improve encapsulation. This concept, which is really about building in a more modular and loosely coupled manner, is crucial when thinking about what makes modern web development modern.
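To make the encapsulation point concrete, here is a minimal custom element sketch. The tag name and styling are illustrative, and the HTMLElement guard exists only so the same snippet can also run outside a browser:

```javascript
// Minimal custom element sketch (illustrative tag name).
// In a browser this registers <user-card>; outside the DOM the
// fallback base class lets the file still load for testing.
const Base = globalThis.HTMLElement ?? class {};

class UserCard extends Base {
  constructor() {
    super();
    // The shadow DOM isolates the component's markup and styles
    // from the rest of the page.
    if (this.attachShadow) {
      const shadow = this.attachShadow({ mode: "open" });
      shadow.innerHTML = `<style>p { color: teal; }</style>
                          <p>${this.getAttribute("name")}</p>`;
    }
  }
}

// customElements is only defined in the browser.
if (globalThis.customElements) {
  customElements.define("user-card", UserCard);
}
```

Once defined, the element can be dropped into markup as <user-card name="Ada"></user-card>, and its internal styles will not leak out to the surrounding page.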
There are three main elements to Web Components:

Custom elements, a set of JavaScript APIs that you can call and define however you need them to work.
The shadow DOM, which acts as a DOM attached to individual elements on your page. This essentially isolates the resources that different elements and components need to work on your page, which makes things easier to manage from a development perspective and can unlock better performance for users.
HTML templates, which are bits of HTML that can be reused and called upon only when needed.

Together, these elements paint a picture of modern web development: one in which developers are trying to handle more complexity and sophistication while improving their productivity and efficiency. Want to get started with Web Components? You can! Read Getting Started with Web Components.

React.js

One of the reasons that React managed to usurp Angular is that it does many of the things that Google wanted Angular to do. Perhaps the most significant difference between React and Angular is that React tackles some of the scalability issues presented by Angular's two-way data binding (which was, for a while, incredibly exciting and innovative) with unidirectional data flow. There's a lot of discussion around this, but by moving towards a singular model of data flow, applications can handle data on a much larger scale without running into problems. Elsewhere, concepts like the virtual DOM (which is distinct from the shadow DOM - more on that here) help to improve encapsulation for developers.

Indeed, flexibility is one of the biggest draws of React. To use Angular you need to know TypeScript, for example. And although you can use TypeScript when working with React, it isn't essential. You have options. Explore Packt's React.js eBooks and videos.

Redux, Flux, and how we think about application state

The growth of React has got web developers thinking more and more about application state.
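The "single object that holds all state" idea behind libraries like Redux can be sketched in a few lines. This is a toy illustration of the pattern, not Redux's actual implementation:

```javascript
// A tiny Redux-style store: state lives in one place, and the only
// way to change it is to dispatch an action through a reducer.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);       // compute the next state
      listeners.forEach((fn) => fn(state)); // notify subscribers
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// Usage: a counter reducer responding to one action type.
const counter = (state, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

const store = createStore(counter, 0);
store.dispatch({ type: "INCREMENT" });
console.log(store.getState()); // 1
```

The point of the pattern is that the "invisible" state of the app becomes a single, inspectable data structure that only changes in well-defined ways.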
While this isn't something that's new, as applications have become more interactive and complex it has become more important for developers to take the issue of 'statefulness' seriously. Consequently, libraries such as Flux and Redux have emerged on the scene, which act as objects in which all the values that comprise an application's state can be stored. This article on egghead.io explains why state is important in a clear and concise way:

"For me, the key to understanding state management was when I realised that there is always state… users perform actions, and things change in response to those actions. State management makes the state of your app tangible in the form of a data structure that you can read from and write to. It makes your 'invisible' state clearly visible for you to work with."

Find Redux eBooks and videos from Packt. Or, check out a wide range of Flux titles.

APIs and microservices

One software trend we haven't mentioned yet, but which nevertheless remains important when thinking about modern web development, is the rise of APIs and microservices. For web developers, the trend reinforces the importance of the encapsulation and modularity that things like Web Components and React are designed to help with. Insofar as microservices simplify the development process while adding architectural complexity, it's not hard to see that web developers are having to think in a more holistic manner about how their applications interact with a variety of services and data sources.

Indeed, you could even say that this trend is only extending the growth of the full-stack developer as a job role. If development today is more about pulling together multiple different services and components, rather than different roles building different parts of a monolithic app, it makes sense that the demand for full-stack developers is growing. But there's another, more specific, way in which the microservices trend is influencing modern web development: micro frontends.
Micro frontends

Micro frontends take the concept of microservices and apply it to the front end. Rather than just building an application in which the front end is powered by microservices (in a way that's common today), you also treat individual constituent parts of the front end as microservices. In turn, you build teams around each of these parts: perhaps one works on search, another on checkout, another on user accounts. This is more of an organizational shift than a technological one, but it again feeds into the idea that modern web development is something modular, broken up and - usually - full-stack.

Conclusion: modern web development is both a set of tools and a way of thinking

The web development toolset has been evolving for more than a decade. Mobile was the catalyst for significant change, and has helped get us to a world that is modular, lightweight, and highly flexible. Heavyweight frameworks like AngularJS paved the way, but it appears an alternative has found real purchase with the wider development community. Of course, it won't always be this way. And although React has dominated developer mindshare for a good three years or so (quite a while in the engineering world), something will certainly replace it at some point.

But however the toolchain evolves, the basic idea that we build better applications and websites when we break things apart will likely stick. Complexity won't decrease. Even if writing code gets easier, understanding how the component parts of an application fit together - from front-end elements to API integration - will become crucial. Even if it starts to bend your mind, and pose new problems you hadn't even thought of, it's clear that things are going to remain interesting as far as the future of web development is concerned.

Sugandha Lahoti
09 Sep 2019
7 min read

5 pitfalls of React Hooks you should avoid - Kent C. Dodds

The React community first introduced Hooks back in October 2018, as a way to use React state and other React features without writing classes: with Hooks, you can "hook into" those features from function components. In February, React 16.8 released with the stable implementation of Hooks.

As popular as Hooks are, there are certain pitfalls developers should avoid when learning and adopting them. In his talk "React Hook Pitfalls" at React Rally 2019 (August 22-23, 2019), Kent C. Dodds covers five common pitfalls of React Hooks and how to avoid or fix them. Kent is a world-renowned speaker, and a maintainer and contributor of hundreds of popular npm packages. He's actively involved in the open source community of React and the general JavaScript ecosystem. He's also the creator of react-testing-library, which provides simple and complete React DOM testing utilities that encourage good testing practices.

Tl;dr

Problem: Starting without a good foundation. Solution: Read the React Hooks docs and the FAQ.
Problem: Not using (or ignoring) the ESLint plugin. Solution: Install, use, and follow the ESLint plugin.
Problem: Thinking in lifecycles. Solution: Don't think about lifecycles; think about synchronizing side effects to state.
Problem: Overthinking performance. Solution: React is fast by default, so research before applying performance optimizations prematurely.
Problem: Overthinking the testing of React Hooks. Solution: Avoid testing 'implementation details' of the component.

Pitfall #1: Starting without a good foundation

Often React developers begin coding without reading the documentation, and that leads to a number of issues and small problems. Kent recommends developers start by reading the React Hooks documentation and the FAQ section thoroughly. He jokingly adds, “Once you read the frequently asked questions, you can ask the infrequently asked questions.
And then maybe those will get in the docs, too. In fact, you can make a pull request and put it in yourself.”

Pitfall #2: Not using (or ignoring) the ESLint plugin

The ESLint plugin is the official plugin built by the React team. It has two rules: "rules of hooks" and "exhaustive deps." The recommended default configuration sets "rules of hooks" to an error and "exhaustive deps" to a warning; the linter plugin enforces these rules automatically. The two "Rules of Hooks" are:

Don't call Hooks inside loops, conditions, or nested functions. Instead, always use Hooks at the top level of your React function. By following this rule, you ensure that Hooks are called in the same order each time a component renders.
Only call Hooks from React functions. Don't call Hooks from regular JavaScript functions; instead, call them from React function components or from custom Hooks.

Kent notes that sometimes the rule is incapable of performing static analysis on your code properly, due to limitations of ESLint. "I believe," he says, "this is why it's recommended to set the exhaustive deps rule to 'warn' instead of 'error.'" When this happens, the plugin will tell you so in the warning, and he recommends developers restructure their code to avoid it. The solution Kent offers for this pitfall is to install, follow, and use the ESLint plugin. It will not only catch easily missable bugs, but will also teach you things about your code and Hooks in the process.

Pitfall #3: Thinking in lifecycles

With Hooks, components are declarative. Kent says this feature allows you to stop thinking about "when things should happen in the lifecycle of the component" (which doesn't matter that much) and more about "when things should happen in relation to state changes" (which matters much more.)
With React Hooks, he adds, you're not thinking about component lifecycles; you're thinking about synchronizing the state of your side effects with the state of the application. This idea is difficult for React developers to grasp initially, but once you do, he adds, you will naturally experience fewer bugs in your apps, thanks to the design of the API. https://twitter.com/ryanflorence/status/1125041041063665666

Solution: Think about synchronizing side effects to state, rather than lifecycle methods.

Pitfall #4: Overthinking performance

Kent says that even though it's really important to be considerate of performance, you should also think about your code's complexity. If your code is complex, you can't give people the great features they're looking for, because you will be spending all your time dealing with that complexity. He adds that "unnecessary re-renders" are not necessarily bad for performance: just because a component re-renders doesn't mean the DOM will get updated (updating the DOM can be slow). React does a great job of optimizing itself; it's fast by default.

On this, he mentions: "If your app's unnecessary re-renders are causing your app to be slow, first investigate why renders are slow. If rendering your app is so slow that a few extra re-renders produces a noticeable slow-down, then you'll likely still have performance problems when you hit 'necessary re-renders.' Once you fix what's making the render slow, you may find that unnecessary re-renders aren't causing problems for you anymore."

If unnecessary re-renders are still causing you performance problems, then you can unpack the built-in performance optimization APIs like React.memo, React.useMemo, and React.useCallback. There is more information on this in Kent's blog post on useMemo and useCallback.

Solution: React is fast by default, so research before applying performance optimizations prematurely; profile your app and then optimize it.
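The idea behind React.useMemo - recompute only when dependencies change - can be illustrated without React at all. This toy helper is not React's implementation, just a sketch of the dependency-comparison pattern it is based on:

```javascript
// Re-runs factory only when the deps array changes
// (compared element-by-element with Object.is, as React does).
function memoize(factory) {
  let lastDeps = null;
  let lastValue;
  return (deps) => {
    const unchanged =
      lastDeps !== null &&
      deps.length === lastDeps.length &&
      deps.every((d, i) => Object.is(d, lastDeps[i]));
    if (!unchanged) {
      lastValue = factory(...deps); // dependencies changed: recompute
      lastDeps = deps;
    }
    return lastValue;
  };
}

// Usage: an "expensive" computation we only want to re-run
// when its inputs actually change.
let calls = 0;
const area = memoize((w, h) => {
  calls += 1;
  return w * h;
});

area([2, 3]); // computes:   calls === 1
area([2, 3]); // cached:     calls is still 1
area([4, 3]); // recomputes: calls === 2
```

This also makes Kent's point tangible: the caching machinery itself has a cost and adds complexity, so it only pays off once you have measured that the recomputation is actually slow.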
Pitfall #5: Overthinking the testing of React Hooks

Kent says that people are often concerned that they need to rewrite their tests along with all of their components when they refactor from class components to Hooks. He explains, "Whether your component is implemented via Hooks or as a class, it is an implementation detail of the component. Therefore, if your test is written in such a way that reveals that, then refactoring your component to hooks will naturally cause your test to break." He adds, "But the end-user doesn't care about whether your components are written with hooks or classes. They just care about being able to interact with what those components render to the screen. So if your tests interact with what's being rendered, then it doesn't matter how that stuff gets rendered to the screen, it'll all work whether you're using classes or hooks."

So, to avoid this pitfall, Kent's recommendation is to write tests that will work irrespective of whether you're using classes or Hooks. Before you upgrade to Hooks, start writing your tests free of implementation details, and your refactored Hooks can be validated by the tests you've already written for your classes. The more your tests resemble the way your software is used, the more confidence they can give you.

In review:

Read the docs and the FAQ.
Install, use and follow the ESLint plugin.
Think about synchronizing side effects to state.
Profile your app and then optimize it.
Avoid testing implementation details.

Watch the full talk on YouTube: https://www.youtube.com/watch?v=VIRcX2X7EUk

Read more about React

#Reactgate forces React leaders to confront community's toxic culture head on
React.js: why you should learn the front end JavaScript library and how to get started
Ionic React RC is now out!

Guest Contributor
05 Sep 2019
5 min read

How to integrate a Medium editor in Angular 8

In the world of text editing, there is a new era of WYSIWYG (What You See Is What You Get). We all know how styling and formatting become important elements of your website, but most of the time it is tough to pick a simple, easy-to-use and powerful editor. Currently, the good days are coming back with the new Medium Editor!

Medium Editor is an independent JavaScript library that provides a floating toolbar which pops up when you select a piece of content on your page, inspired by the elegance of Medium.com. You can turn every field - from a message on your contact form to a whole article on the back end - into a professionally styled text paragraph containing quote blocks, headings, hyperlinks or just a few selected words. You can also incorporate the text editor into Angular 8 for ease of updating and editing your content.

Angular 8 has released its latest feature update - beta 6 - with attractive new functionality for testing your software and fixing bugs. One of them is Bazel, the open-source part of Google's internal build tool called Blaze, which is capable of performing incremental builds and tests.

Let us check how you can integrate a Medium editor into the Angular 8 platform.

Also Read: Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Steps to create an editor using Angular 8

Step 1: First things first, create a project in Angular. You can also make use of Bootstrap to make it look good by adding the CDN links in index.html. After entering the above-given command line, it will generate an Angular starter application once it has finished installing all the dependencies.

Step 2: Install the npm package by entering the below-given line.
Then include its CSS and JS in the angular.json file.

Step 3: Create a component with a name of your choice, for example one named "create".

Step 4: In the newly created component's HTML, add a div and give it a template reference name. A few Bootstrap classes on the div will give it basic styling.

Step 5: In the component class, create an editor variable to hold the view child property, as listed below.

Step 6: Next, we make use of one of Angular's lifecycle hooks, ngAfterViewInit. If you paste the snippet as-is, you may get an error that MediumEditor is not defined, so we need to declare it at the top. After making these changes, you can already use a small Medium editor of your own: write anything, then select the text you have written to see the magic.

Step 7: After this, you may want more options in your editor toolbar. To add them, pass a configuration object to the MediumEditor constructor. With these changes, you will be able to see a load of available options.

Step 8: Now that you have an editor, you can easily get the data out of it. If someone writes a post, you need the HTML of that post. Again, divide the screen into two halves: one half holds the editor and the other half shows a preview of the post.

[box type="shadow" align="" class="" width=""]In the second half of the screen, you need to assign the value of innerHTML as given above[/box]

Wrap Up

Every system has its pros and cons, and so does Angular. Angular offers clean code development along with a high-performance framework that handles routing, provides seamless updates through the Command Line Interface and retrieves the state of location services.
Also, you can debug templates in Angular 8, and it supports multiple applications in one domain. On the other hand, Angular might be confusing for newcomers, as there is no single accurate manual covering the whole framework; its developer community is smaller than some alternatives, and debugging of routing is limited. Still, Angular 8 is user-friendly across all versions of the major operating systems. So, here we come to the end of the article. We hope you have gained an understanding of how to integrate the latest Medium editor into Angular 8. Do give it a try! Till then - keep learning!

Author Bio

Dave Jarvis is working as a Business Development Executive at eTatvaSoft.com, an enterprise-level mobile & web application development company. He aims to sharpen his analytical skills, deepen his data understanding and broaden his business knowledge in these years of his career. Click here to find more information about the company. Follow him on Twitter.

Other interesting news in Web development
Google Chrome 76 now supports native lazy-loading
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
#Reactgate forces React leaders to confront the community’s toxic culture head on
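The snippets for the steps above were shown as images in the original article and did not survive. As a rough reconstruction only — the component name, the #editable template reference, and the toolbar options are all illustrative assumptions layered on the medium-editor npm package — the pieces might fit together like this:

```typescript
// create.component.ts — illustrative sketch, not the article's original code.
// Assumes `npm i medium-editor` (step 2) with its CSS/JS added to angular.json.
import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';
import MediumEditor from 'medium-editor';

@Component({
  selector: 'app-create',
  template: `
    <!-- Step 4: a div carrying a template reference name -->
    <div #editable class="form-control p-3" style="min-height: 200px"></div>
  `,
})
export class CreateComponent implements AfterViewInit {
  // Step 5: an editor variable holding the view child
  @ViewChild('editable', { static: true }) editable!: ElementRef<HTMLElement>;
  private editor: any;

  // Step 6: instantiate MediumEditor once the view exists
  ngAfterViewInit(): void {
    this.editor = new MediumEditor(this.editable.nativeElement, {
      // Step 7: a configuration object unlocks more toolbar options
      toolbar: { buttons: ['bold', 'italic', 'underline', 'anchor', 'h2', 'quote'] },
      placeholder: { text: 'Write your story…' },
    });
  }

  // Step 8: read the generated HTML back out, e.g. for a preview pane
  getHtml(): string {
    return this.editable.nativeElement.innerHTML;
  }
}
```

The preview half of the screen described in step 8 can then bind the returned string to its innerHTML, as the article suggests.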


Wasmer's first Postgres extension to run WebAssembly is here!

Vincy Davis
30 Aug 2019
2 min read
Wasmer, the WebAssembly runtime, has been successfully embedded in many languages like Rust, Python, Ruby, PHP, and Go. Yesterday, Ivan Enderlin, a PhD computer scientist at Wasmer, announced a new Postgres extension, version 0.1, for WebAssembly. Since the project is still under heavy development, the extension only supports integers (32- and 64-bit) and works on Postgres 10 only. It does not yet support strings, records, views or any other Postgres types.

https://twitter.com/WasmWeekly/status/1167330724787171334

The official post states, “The goal is to gather a community and to design a pragmatic API together, discover the expectations, how developers would use this new technology inside a database engine.”

The Postgres extension provides two foreign data wrappers, wasm.instances and wasm.exported_functions, in the wasm foreign schema. wasm.instances is a table with id and wasm_file columns; wasm.exported_functions is a table with instance_id, name, inputs, and outputs columns. Enderlin says this information is enough for the wasm Postgres extension “to generate the SQL function to call the WebAssembly exported functions.” Read Also: Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

The Wasmer team ran a basic benchmark, computing the Fibonacci sequence, to compare execution time between WebAssembly and PL/pgSQL. The benchmark was run on a 2016 MacBook Pro 15", 2.9GHz Core i7 with 16GB of memory.

Image Source: Wasmer

The result: “Postgres WebAssembly extension is faster to run numeric computations. The WebAssembly approach scales pretty well compared to the PL/pgSQL approach, in this situation.” The Wasmer team believes that though it is too soon to consider WebAssembly an alternative to PL/pgSQL, the result makes them hopeful that it can be explored further. To know more about the Postgres extension, check out its GitHub page.
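Based on the two foreign data wrappers the announcement describes, exploring the extension might look like the SQL sketch below. Only the two wasm-schema tables and their columns come from the announcement; the generated function name sum_i32 is a hypothetical stand-in for whatever SQL function the extension generates for a given WebAssembly export:

```sql
-- Inspect which WebAssembly modules have been instantiated
-- (columns per the announcement: id, wasm_file).
SELECT id, wasm_file FROM wasm.instances;

-- List the functions each instance exports, with their signatures
-- (columns: instance_id, name, inputs, outputs).
SELECT instance_id, name, inputs, outputs FROM wasm.exported_functions;

-- Hypothetical call to the SQL function the extension generates for an
-- exported WebAssembly function, e.g. a 32-bit integer addition:
SELECT sum_i32(1, 2);
```

The information in wasm.exported_functions (name plus input/output types) is exactly what the extension needs to generate such SQL wrappers, which is the mechanism Enderlin describes.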
5 barriers to learning and technology training for small software development teams Mozilla CEO Chris Beard to step down by the end of 2019 after five years in the role JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3

5 barriers to learning and technology training for small software development teams

Richard Gall
30 Aug 2019
9 min read
Managing, supporting, and facilitating training for software developers isn’t easy. Such is the pace of change that it can be tough to ensure that not only do your developers have the skills they need, but also that they have the resources at their disposal to explore new technologies, try new approaches and solve complex problems. Okay, sure: there are a wealth of free resources out there. But using these is really just learning to walk for many developers. It’s essential, but it can’t be the only thing developers have at their disposal. If it is, their company is doing them a disservice. Even if a company is committed to their team’s development, how to go about it still isn’t straightforward. There are a number of barriers that need to be overcome when it comes to technology training and learning. Here are 5 of them...

Barrier 1: Picking resources on the right topics

The days of developers specializing are long gone - particularly in a small tech team where everyone has to get stuck in. Full-stack has ensured that today’s software developers need to be confident on everything from databases to UI design, while DevOps means they need to assume responsibility for how their code actually runs in production. The days of it worked on my machine! are over.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

When you factor in cloud native and hybrid cloud, developers today might well not only be working on cloud platforms with a bewildering array of features and opportunities, but also working on a number of different platforms at once. This means that understanding exactly what your developers need to know can be immensely difficult. It also means that you have to set the technology agenda from the off, immediately limiting your developers' ability to solve problems themselves. That’s only going to alienate your developers - you’re effectively telling them that their curiosity, and their desire to explore new topics and alternative solutions, is pointless.

What can you do?

The only way to solve this is to ensure your developers have a rich set of learning resources. Don’t limit them to a specific tool or set of technologies. Allow them to explore new topics as they see fit - don’t force them down a pre-set path. The other benefit of this is that it means you can be flexible and open in terms of the technology you use.

Barrier 2: Legacy software and brownfield projects

Sometimes, however, you can’t be flexible about the technology you use. Your developers simply need to use existing tools to manage legacy systems and brownfield projects (which build on or demolish existing legacy software). This means that learning needs can become complex. Not only might your development team need a diverse range of skills for one particular project, they might also need to learn very different skill sets for different projects. Maybe you’re planning a new project built in React.js - great, get your developers on some cutting-edge content. But then, alongside this, what if they also need to tackle legacy software built using PHP? It’s fine if they know PHP, but what if you have a new developer? Or what if they haven’t used it for years?

What can you do?

As above, a rich set of training and learning resources is vital. But in this instance it’s particularly important to ensure developers have access to learning materials on a wide range of technologies. So yes, cutting-edge topic coverage is essential, but so is content on established and even more residual technologies.

Barrier 3: Lack of time for learning new software skills and concepts

Lack of time is one of the biggest barriers to learning and training in the tech industry. At least that’s the perception - in reality, a lack of time to learn is caused by resource challenges, and the cultural status of learning inside an organization. If it isn’t a priority, then it’s often the thing that gets pushed back as new day-to-day activities manage to insert themselves in your team’s day, or existing ones stretch out to fill the hours in a way that no one expected. This is risky. Although there are times when things simply need to get done quickly, overlooking learning will not only lead to disengagement in your team, it will also leave them unprepared for challenges that may emerge throughout the course of a project. It’s often said that software development is all about solving problems - but how confident can we be that we’re equipped to solve them if we’re not committed to making time for learning?

What can you do?

The first step is obvious: make time for learning. One method is to set aside a specific period in the week which your team is encouraged to use for personal development. While that can work well, sometimes trying to structure learning in this way can make it feel like a chore for employees, as if you’re trying to fit them into a predetermined routine. Another option is to simply encourage continuous learning. That might mean a short period every day just to learn a new concept or approach. Or it could be something like a learning diary that allows developers to record things they learn and plan what they want to learn next. Essentially, what’s important is putting your developers in control. Give them the tools to make time - as well as resources that they can use quickly and easily - and they’ll not only learn new skills much more quickly, they’ll also be more engaged and more curious engineers.

Read next: “All of my engineering teams have a machine learning feature on their roadmap” – Will Ballard talks artificial intelligence in 2019 [Interview]

Barrier 4: Different learning preferences within the team

In even a small engineering team, every person will have unique needs and preferences when it comes to learning. So, even if all your team wants to learn the same topics, they still might disagree about how they want to do it. Some developers, for example, can’t stand learning with video. It forces you to learn at its pace, and - shock horror - you have to listen to someone. Others, however, love it - it’s visual, immediate and, you know, maybe having someone with a voice explaining how things work isn’t actually that bad? Similarly, some people love training courses - the idea of sitting with someone, rather than going it alone - while others value their independence and agency. This means that keeping everyone happy can be tough. It might be tempting to go one of two ways - either decide how you want people to learn and let them get on with it, or let everyone go it alone - but neither approach is ideal. Forcing people to learn a certain way will penalise those who don’t like your preferred learning method, hurting not only their ability to learn new skills but also their trust in you. Letting everyone be independent, meanwhile, means you never have any oversight on what people are doing or using - there’s no level playing field.

What can you do?

The key to getting over this barrier is balance. You want to preserve your team's independence and sense of agency while also having some transparency over what people are using. Resources that offer a mix of formats are great for this. That way, no one is forced to watch hours and hours of video courses or to trawl through text that will just send them to sleep. Equally, another important thing to think about is how the learning resources you provide your team complement other learning activities that they may be doing independently (things like looking for answers to questions in blog posts or on YouTube). By doing that you can allow your developers some level of independence while also ensuring that you have a set of organizational resources and tools that are available and accessible to everyone.

Read next: Why do IT teams need to transition from DevOps to DevSecOps?

Barrier 5: The cost of technology training and learning resources

Cost is perhaps the biggest barrier to training for development teams. Specific courses and events can be astronomical - especially if more than one person needs to attend - and even some learning platforms can cost significant amounts of money. Now, that’s not always a problem for engineering teams operating in a more corporate or enterprise environment. But for smaller teams in small and medium sized businesses, training can become quite an overhead. In turn, this can have other consequences - from a complete disregard and deprioritization of training and learning, to internal frustration at who is getting support and investment and who isn’t - and it’s not uncommon to see training become a catalyst for many other organizational and cultural problems.

What can you do?

The trick here is to not invest heavily in one single thing. Don’t blow your budget on a single course - what if it’s not up to scratch? Don’t commit to an expensive learning platform. However good the marketing looks, if your team doesn't like it they’re certainly not going to use it. Fortunately, there are affordable solutions out there that can ensure you’re not breaking the bank when it comes to training. It might even leave you with some cash left over that you can invest in other resources and materials.

Learning new skills isn’t easy. It requires patience and commitment. But developers need support and resources to take some of the strain out of their learning challenges. There’s no one way to meet the learning needs of developers. But Packt for Teams can help you overcome many of the barriers of developer training. Learn more here.


#Reactgate forces React leaders to confront community's toxic culture head on

Sugandha Lahoti
27 Aug 2019
7 min read
On Thursday last week, the Twitter account @heydonworks posted a tweet saying that “Vue developers like cooking/quiet activities and React developers like trump, guns, weightlifting and being "bros"”. He also talked about the rising number of super conservative React dev accounts.

https://twitter.com/heydonworks/status/1164506235518910464

This was met with disapproval from people within both the React and Vue communities. “Front end development isn’t a competition,” remarked one user.

https://twitter.com/mattisadev/status/1164633489305739267
https://twitter.com/nsantos_pessoal/status/1164629726499102720

@heydonworks responded to the chorus of disapproval by saying that his intention was to highlight how a broad and diverse community of thousands of people can be eclipsed by an aggressive and vocal toxic minority. He then went on to ask Dan Abramov, React co-founder, “Perhaps a public disowning of the neocon / supremacist contingent on your part would land better than my crappy joke?”

https://twitter.com/heydonworks/status/1164653560598093824

He also clarified that his original tweet was supposed to paint a picture of what React would be like if it was taken over by hypermasculine conservatives. “I admit it's not obvious”, he tweeted, “but I am on your side. I don't want that to happen and the joke was meant as a warning.”

@heydonworks also accused a well-known React developer of playing "the circle game" at a React conference. The “circle game” is a school prank that has more recently come to be associated with white supremacism in the U.S. @heydonworks later deleted this tweet and issued an apology admitting that he was wrong to accuse the person of making the gesture.

https://twitter.com/heydonworks/status/1165439718512824320

This conversation then developed into a wider argument about how toxicity is enabled and allowed in the React community - and, indeed, in other tech communities as well. The crucial point that many will have to reckon with is which behaviors people allow and overlook. Indeed, to a certain extent, the ability to be comfortable with certain behaviors is related to an individual’s privilege - what may seem merely an aspect or a quirk of someone’s persona to one person might be threatening and a cause of discomfort to another person. This was the point made by web developer Nat Alison (@tesseralis): “Remember that fascists and abusers can often seem like normal people to everyone but the people that they're harming.” Alison’s thread highlights that associating with people without challenging toxic behaviors or attitudes is a way of enabling and tacitly supporting them.

https://twitter.com/tesseralis/status/1165111494062641152

Web designer Tatiana Mac quits the tech industry following the React controversy

Web designer Tatiana Mac’s talk at Clarity Conf (you can see the slides here) in San Francisco last week (21 August) took place just a few hours before @heydonworks sent the first of his tweets mentioned above. The talk was a powerful statement on how systems can be built in ways that either reinforce power or challenge it. Although it was well received by many present at the event and online, it was also met with hostility, with one Twitter user (now locked) tweeting in response to an image of Mac’s talk that it “most definitely wasn't a tech conference… Looks to be some kind of SJW (Social justice warrior) conference.” This only added an extra layer of toxicity to the furore that has been engulfing the React community.

Following the talk, Mac offered her thoughts, criticizing those she described as being more interested in “protecting the reputation of a framework than listening to multiple marginalized people.”

https://twitter.com/TatianaTMac/status/1164912554876891137

She adds, “I don’t perceive this problem in the other JS framework communities as intensively. Do White Supremacists exist in other frameworks? Likely. But there is a multiplier/feeder here that is systemically baked. That’s what I want analysed by the most ardent supporters of the community.” She says that even after bringing this issue up multiple times, she has been consistently ignored. Her tweet reads, “I'm disappointed by repeatedly bringing this shit up and getting ignored/gaslit, then having a white woman bring it up and her getting praised for it? White supremacy might as well be an opiate—some people take it without ever knowing, others microdose it to get ahead.”

“Why is no one like, ‘Tatiana had good intentions in bringing up the rampant racism problem in our community?’ Instead, it’s all, ‘Look at all the impact it had on two white guys!’ Is cuz y’all finally realise intent doesn’t erase impact?”, she adds.

She has since decided to quit the tech industry following these developments. In a tweet, she wrote that she is “incredibly sad, disappointed, and not at all surprised by *so* many people.” Mac has described in detail the emotional and financial toll the situation is having on her. She has said she is committed to all contracts through to 2020, but also revealed that she may need to sell belongings to support herself. This highlights the potential cost involved in challenging the status quo. To provide clarity on what has happened, Tatiana approached her friend, designer Carlos Eriksson, who put together a timeline of the Reactgate controversy.

Dan Abramov and Ken Wheeler quit and then rejoin Twitter

Following the furore, both Dan Abramov and Ken Wheeler quit Twitter over the weekend. They have now rejoined. After he deactivated, Abramov talked about his disappearance from Twitter on Reddit: “Hey all. I'm fine, and I plan to be back soon. This isn't a ‘shut a door in your face’ kind of situation. The real answer is that I've bit off more social media than I can chew. I've been feeling anxious for the past few days and I need a clean break from checking it every ten minutes. Deactivating is a barrier to logging in that I needed. I plan to be back soon.”

Abramov returned to Twitter on August 27 and apologized for his sudden disappearance, calling deactivating his account “a desperate and petty thing.” He also thanked Tatiana Mac for highlighting issues in the React community. “I am deeply thankful to @TatianaTMac for highlighting issues in the React community,” Abramov wrote. “She engaged in a dialog despite being on the receiving end of abuse and brigading. I admire her bravery and her kindness in doing the emotional labor that should have fallen on us instead.”

Wheeler also returned to Twitter. “Moving forward, I will be working to do better. To educate myself. To lift up minoritized folks. And to be a better member of the community. And if you are out there attacking and harassing people, you are not on my side,” he said.

Mac acknowledged Abramov and Wheeler’s apologies, writing that “it is unfair and preemptive to call Dan and Ken fragile. Both committed to facing the white supremacist capitalist patriarchy head on. I support the promise and will be watching from the sidelines supporting positive influence.”

What can the React community do to grow from this experience?

This news has shaken the React community to the core. At such distressing times, the React community needs to come together as a whole and offer constructive criticism to tackle the issue of unhealthy tribalism, while making minority groups feel safe and heard. Tatiana puts forward a few points for tackling the toxic culture: “Pay attention to your biggest proponents and how they reject all discussion of the injustices of tech. It’s subtle like that, and it’s as overt as throwing white supremacist hand gestures at conferences on stage. Neither is necessarily more dangerous than the other, but instead shows the journey and spectrum of radicalization—it’s a process.”

She urges, “If you want to clean up the community, you’ve got to see what systemic forces allow these hateful dingdongs to sit so comfortably in your space. I’m here to help and hope I have today already, as a member of tech, but I need you to do the work there.”

“Developers don’t belong on a pedestal, they’re doing a job like everyone else” – April Wensel on toxic tech culture and Compassionate Coding [Interview]
Github Sponsors: Could corporate strategy eat FOSS culture for dinner?
Microsoft’s #MeToo reckoning: female employees speak out against workplace harassment and discrimination


Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its launch in 2008, and Git since October 2011. Now, after more than ten years with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API. The official announcement reads, “Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020.”

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories; after June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial holds a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system with only about 3% developer adoption. Mercurial usage on Bitbucket has also seen a steady decline, with the percentage of new Bitbucket users choosing Mercurial falling to less than 1%. Hence the decision to remove the Mercurial repos.

How can users migrate and export their Mercurial repos?

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools and migration tips, and offer troubleshooting help.
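As one example of the conversion tooling likely to come up in that thread, the community-maintained fast-export project converts a local Mercurial repository into a Git one. The sketch below is illustrative, not Bitbucket's official procedure; the local paths and the remote URL are placeholders:

```shell
# Grab the converter (https://github.com/frej/fast-export)
git clone https://github.com/frej/fast-export.git

# Create an empty Git repo and import the Mercurial history into it
git init my-repo-git
cd my-repo-git
../fast-export/hg-fast-export.sh -r ../my-repo-hg
git checkout HEAD

# Point the converted repo at a new Git remote and push everything
git remote add origin git@bitbucket.org:myteam/my-repo.git   # placeholder URL
git push origin --all
git push origin --tags
```

Other routes exist too, such as the hg-git Mercurial plugin; which tool fits best depends on branch and bookmark usage in the original repository.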
If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available to them. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over the decision to discontinue Mercurial support

Mercurial users are extremely unhappy with this decision, and have expressed their anger not just on one platform but across multiple forums and community discussions. Users feel that Bitbucket’s decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil.

On Hacker News, users speculated that the decision was influenced by market potential rather than technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that is how the two have become synonymous in the developer community. One of them comments, “It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers. Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket.”

Another user comments that Mercurial support was the only reason he used Bitbucket, and that once it stops supporting Mercurial, Bitbucket will soon die: “Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die.”

On Reddit, programming folks see this as a big change from Bitbucket, as it is the major Mercurial hosting provider. They also feel Bitbucket announced this at pretty short notice and that more time is needed for migration. Beyond the developer forums, users have expressed displeasure on the Atlassian community blog as well. A team of scientists commented, “Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged.”

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note
BitBucket goes down for over an hour


Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla’s anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy into Safari, the details of which it shared last week. This policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in this policy will be applied “universally to all websites, or based on algorithmic, on-device classification.” https://twitter.com/webkit/status/1161782001839607809 Web tracking is the collection of user data over multiple web pages and websites, which can be linked to individual users via a unique user identifier. All your previous interactions with any website could be recorded and recalled with the help of a tracking system like cookies. Among the data tracked include the things you have searched, the websites you visited, the things you have clicked on, the movements of your mouse around a web page, and more. Organizations and companies rely heavily on web tracking to gain insight into their user behavior and preferences. One of the main purposes of these insights is user profiling and targeted marketing. While this user tracking helps businesses, it can be pervasive and used for other sinister purposes. In the recent past, we have seen many companies including the big tech like Facebook and Google involved in several scandals related to violating user online privacy. For instance, Facebook’s Cambridge Analytica scandal and Google’s cookie case. Apple aims to create “a healthy web ecosystem, with privacy by design” The WebKit Prevention Policy will prevent several tracking techniques including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories. WebKit will limit the capability of using a tracking technique in case it is not possible to prevent it without any undue harm to the user. 
If that also does not help, users will be asked for their consent. Apple will treat any attempt to subvert its anti-tracking methods as a security vulnerability. “We treat circumvention of shipping anti-tracking measures with the same seriousness as an exploitation of security vulnerabilities,” Apple wrote, warning that it may add further restrictions, without prior notice, against parties who attempt to circumvent its tracking prevention methods. Apple further mentioned that there won’t be any exception even if you have a valid use for a technique that is also used for tracking. The announcement reads, “But WebKit often has no technical means to distinguish valid uses from tracking, and doesn’t know what the parties involved will do with the collected data, either now or in the future.”

WebKit Tracking Prevention Policy’s unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions. Among the possibly affected features are funding websites through targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. In cases of tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and may update the tracking prevention methods to permit certain use cases. In the future, it will also introduce new web technologies that allow these practices without compromising users’ online privacy, such as the Storage Access API and Privacy-Preserving Ad Click Attribution.

What users are saying about Apple’s anti-tracking policy

At a time of increasing concern over online privacy, this policy comes as a blessing to many. Many users are appreciating the move, while some fear that it will affect certain user-friendly features.
In an ongoing discussion on Hacker News, a user commented, “The fact that this makes behavioral targeting even harder makes me very happy.” Some others also believe that focusing on online tracking protection will give browsers an edge over Google’s Chrome. A user said, “One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously.”

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, “We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies).”

Some users had questions about the features that will be impacted by the policy. A user wrote, “While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically.” Another user added, “So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?”

Read Apple’s official announcement to know more about the WebKit Tracking Prevention Policy.
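Stepping back to the mechanism at the heart of the policy: the cross-site tracking WebKit targets boils down to one third-party identifier shared across otherwise unrelated sites. Here is a minimal, purely illustrative sketch of that linkage — no real tracker or browser API is used, all names are hypothetical, and a plain object stands in for the browser's third-party cookie jar:

```javascript
// A toy model of cross-site tracking: a third-party "tracker"
// hands each browser one stable ID, then logs every page view
// from any embedding site against that same ID.
class ToyTracker {
  constructor() {
    this.nextId = 1;
    this.profiles = new Map(); // trackerId -> list of visits
  }

  // Called on every page load that embeds the tracker. The
  // cookie jar stands in for the browser's third-party cookie.
  recordVisit(cookieJar, site, page) {
    if (!cookieJar.trackerId) {
      // First contact: plant a unique identifier, like a cookie.
      cookieJar.trackerId = `uid-${this.nextId++}`;
    }
    const id = cookieJar.trackerId;
    if (!this.profiles.has(id)) this.profiles.set(id, []);
    this.profiles.get(id).push({ site, page });
    return id;
  }
}

const tracker = new ToyTracker();
const myBrowser = {}; // one user's cookie jar

// The same ID links visits on two unrelated sites, so both
// page views end up in a single cross-site profile.
const id1 = tracker.recordVisit(myBrowser, 'news.example', '/politics');
const id2 = tracker.recordVisit(myBrowser, 'shop.example', '/shoes');
// id1 === id2
```

Anti-tracking mitigations of the kind the policy describes break exactly this linkage, for example by partitioning or deleting the third-party state that carries the identifier.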
Firefox Nightly now supports Encrypted Server Name Indication (ESNI) to prevent 3rd parties from tracking your browsing history
All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
Apple proposes a “privacy-focused” ad click attribution model for counting conversions without tracking users
Sugandha Lahoti
13 Aug 2019
4 min read

OpenTracing and OpenCensus merge into OpenTelemetry project; Google introduces OpenCensus Web

Google has introduced an extension of OpenCensus called OpenCensus Web, a library for collecting application performance and behavior monitoring data for web pages. This library focuses on the frontend web application code that executes in the browser, allowing it to collect user-side performance data. It is still in the alpha stage, with the API subject to change. This is great news for naturally heavy websites, such as media-driven pages like Instagram, Facebook, YouTube, and Amazon, as well as web apps. OpenCensus Web interacts with three application components: the frontend web server, the browser JS, and the OpenCensus Agent. The agent receives traces from the frontend web server proxy endpoint or directly from the browser JS, and exports them to a trace backend.

Features of OpenCensus Web

OpenCensus Web traces spans for the initial load, including server-side HTML rendering
The OpenCensus Web spans also include detailed annotations for DOM load events as well as network events
It automatically traces all click events, as long as the click occurs on a DOM element that is not disabled
OC Web traces route transitions between the different sections of your page by monkey-patching the History API
It allows users to create custom spans for their web application for tasks or code involved in user interaction
It automatically creates spans for HTTP requests and collects browser performance data
OC Web relates user interactions back to the initial page load trace

Along with this release, the OpenCensus family of projects is merging with OpenTracing into OpenTelemetry. This means all of the OpenCensus community will be moving over to OpenTelemetry, Google and Omnition included. OpenCensus Web’s functionality will be migrated into OpenTelemetry JS once this project is ready.
Omnition’s founder wrote on Hacker News, “Although Google will be heavily involved in both the client libraries and agent development, Omnition, Microsoft, and others will also be major contributors.” Another comment on Hacker News explains the merger in more detail: “OpenCensus is a Google project to standardize metrics and distributed tracing. It's an API spec and libraries for various languages with varying backend support. OpenTracing is a CNCF project as an API for distributed tracing with a separate project called OpenMetrics for the metrics API. Neither include libraries and rely on the community to provide them. The industry decided for once that we don't need all this competing work and is consolidating everything into OpenTelemetry that combines an API for tracing and metrics along with libraries. Logs (the 3rd part of observability) are in the planning phase. OpenCensus Web is bringing the tracing/metrics part to your frontend JS so you can measure how your webapp works in addition to your backend apps and services.”

By September 2019, OpenTelemetry plans to reach parity with existing projects for C#, Golang, Java, NodeJS, and Python. When each language reaches parity, the corresponding OpenTracing and OpenCensus projects will be sunset (old projects will be frozen, but the new project will continue to support existing instrumentation for two years, via a backwards compatibility bridge). Read more on the OpenTelemetry roadmap.

Public reaction to OpenCensus Web has been positive. People have expressed their opinions on a Hacker News thread: “This is great, as the title says, this means that web applications can now have tracing across the whole stack, all within the same platform.” “I am also glad to know that the merge between OpenTracing and OpenCensus is still going well.
I started adding telemetry to the projects I maintain in my current job and so far it has been very helpful to detect not only bottlenecks in the operations but also sudden spikes in the network traffic since we depend on so many 3rd-party web API that we have no control over. Thank you OpenCensus team for providing me with the tools to learn more.” For more information about OpenCensus Web, visit Google’s blog.

CNCF Sandbox, the home for evolving cloud-native projects, accepts Google’s OpenMetrics Project
Google open sources ClusterFuzz, a scalable fuzzing tool
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard
Bhagyashree R
31 Jul 2019
4 min read

ES2019: What’s new in ECMAScript, the JavaScript specification standard

Every year a new edition of the ECMAScript (ES) scripting-language specification standard comes out. This year it is the tenth edition, also known as ES2019 or ES10. The feature set of ES2019 was finalized earlier this year and published last month. Some of the exciting features this specification brings are Object.fromEntries(), trimStart(), trimEnd(), flat(), flatMap(), a description property for symbol objects, optional catch binding, and more. These features have also landed in the latest versions of Firefox and Chrome for developers to try out. Let’s take a look at some of the features in ES2019:

Object.fromEntries()

In JavaScript, you can easily convert objects into arrays of key-value pairs with the Object.entries() method, introduced in the ES2017 standard. ES2019 introduces the Object.fromEntries() method, which lets you do exactly the opposite: similar to the dict() function in Python, it transforms a list of key-value pairs into an object.

Array.prototype.flat() and Array.prototype.flatMap()

The Array.prototype.flatten() method was renamed to Array.prototype.flat() in ES2019 after it ended up breaking MooTools' implementation of it last year. It recursively flattens an array up to the specified depth, which defaults to 1. The second method, Array.prototype.flatMap(), maps each element and then flattens the result into a new array.

trimStart() and trimEnd()

The new trimStart() and trimEnd() methods proposed in ES2019 serve the same purpose as the trimLeft() and trimRight() methods: trimStart() removes whitespace from the beginning of a string, while trimEnd() removes whitespace characters from the end of a string. They were introduced to maintain consistency with the standard padStart()/padEnd() functions. To maintain web compatibility, trimLeft() and trimRight() will remain as their aliases.
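To make these concrete, here is a short sketch of the features described so far; the values and variable names are purely illustrative:

```javascript
// Object.fromEntries(): the inverse of Object.entries(),
// turning a list of [key, value] pairs back into an object.
const entries = [['name', 'ES2019'], ['edition', 10]];
const spec = Object.fromEntries(entries);
// spec is { name: 'ES2019', edition: 10 }

// Array.prototype.flat(): flattens nested arrays up to the
// given depth (default 1); pass Infinity to flatten fully.
const nested = [1, [2, 3], [4, [5]]];
const flatOnce = nested.flat();         // [1, 2, 3, 4, [5]]
const flatAll = nested.flat(Infinity);  // [1, 2, 3, 4, 5]

// Array.prototype.flatMap(): maps each element, then flattens
// the result one level into a new array.
const phrases = ['hello world', 'foo bar'];
const words = phrases.flatMap(p => p.split(' '));
// ['hello', 'world', 'foo', 'bar']

// trimStart()/trimEnd(): one-sided whitespace trimming, named
// for consistency with padStart()/padEnd(). trimLeft() and
// trimRight() remain as aliases for web compatibility.
const padded = '  es10  ';
const fromStart = padded.trimStart(); // 'es10  '
const fromEnd = padded.trimEnd();     // '  es10'
```

All of the above runs as-is in current versions of Firefox, Chrome, and Node.js.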
Optional catch binding

In JavaScript, it used to be mandatory to specify the catch() parameter when using try...catch, whether you used it or not. However, there are a few use cases where you wouldn’t need the parameter, or catch binding. Axel Rauschmayer, the author of JavaScript for impatient programmers (ES1–ES2019), lists two: you want to completely ignore the error, or you don’t care about the error (or already know what it will be) but still want to react to it. The new proposal allows you to omit the unused catch binding entirely, without any syntax errors.

Function.toString()

Earlier, calling the toString() method on a function stripped all whitespace, newlines, and comments from the source code. Now, it returns the function’s source code exactly as it was defined.

Description property for Symbol objects

ES2019 introduces a new read-only description property for Symbol objects. Accessing it returns a string containing the symbol’s description, which is useful for debugging.

Well-formed JSON.stringify()

According to a JSON RFC, JSON text shared “outside the scope of a closed ecosystem” should be encoded using UTF-8. However, JSON.stringify() could sometimes return strings and code points, particularly in the surrogate range (U+D800 to U+DFFF), that cannot be represented in UTF-8. This ES2019 proposal prevents JSON.stringify() from returning such ill-formed Unicode strings.

Many developers are excited about these new ES2019 features. A user on Hacker News commented, “That array.flat() and array.flatMap() stuff is great to see. Always having to rely on lodash and friends to do that type of work.
Exciting to see how JS is evolving.” Another user added, “Object.fromEntries will be super useful, surprised it’s taken this long to become a native feature.” Others are waiting for the pattern matching and optional chaining proposals to reach stage 4 of the TC39 process: “Now if we could just get pattern matching and optional chaining, that would really elevate things.” These were some of the features introduced in ES2019. To know more, check out the specification published on the ECMA International website.

Introducing QuickJS, a small and easily embeddable JavaScript engine
Firefox 67 will come with faster and reliable JavaScript debugging tools
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
Bhagyashree R
26 Jul 2019
5 min read

Laracon US 2019 highlights: Laravel 6 release update, Laravel Vapor, and more

Laracon US 2019, probably the biggest Laravel conference, wrapped up yesterday. Laravel’s creator, Taylor Otwell, kick-started the event by talking about the next major release, Laravel 6. He also showcased a project he has been working on called Laravel Vapor, a full-featured serverless management and deployment dashboard for PHP/Laravel.

https://twitter.com/taylorotwell/status/1154168986180997125

This was a two-day event, held July 24-25 at Times Square, NYC. The event brings together people passionate about creating applications with Laravel, the open-source PHP web framework. Many exciting talks were hosted at this game-themed event. Evan You, the creator of Vue, was there presenting what’s coming in Vue.js 3.0. Caleb Porzio, a developer at Tighten Co., showcased a Laravel framework named Livewire that enables you to build dynamic user interfaces with vanilla PHP. Keith Damiani, a Principal Programmer at Tighten, talked about graph database use cases. You can watch this highlights video compiled by Romega Digital to get a quick overview of the event:

https://www.youtube.com/watch?v=si8fHDPYFCo&feature=youtu.be

Laravel 6 coming next month

Since its birth, Laravel has followed a custom versioning scheme and has been on 5.x releases for the past four years. The team has now decided to switch to semantic versioning. The framework currently stands at version 5.8, and instead of calling the new release 5.9, the team has decided to go with Laravel 6, which is scheduled for next month. Otwell emphasized that they decided to make this change to bring more consistency to the ecosystem, as all optional Laravel components such as Cashier, Dusk, Valet, and Socialite already use semantic versioning. This does not mean that there will be any “paradigm shift” requiring developers to rewrite their stuff. “This does mean that any breaking change would necessitate a new version which means that the next version will be 7.0,” he added.
With the new version comes new branding

Laravel gets a fresh look with every major release, ranging from updated logos to a redesigned website. Initially, this was a volunteer effort, with different people pitching in to help give Laravel a fresh look. Now that Laravel has some substantial backing, Otwell has worked with Focus Lab, one of the top digital design agencies in the US. Together they have come up with a new logo and a brand new website. The website looks easy to navigate and also provides improved documentation to give developers a good reading experience.

Source: Laravel

Laravel Vapor, a robust serverless deployment platform for Laravel

After giving a brief on version 6 and the updated branding, Otwell showcased his new project, Laravel Vapor. Currently, developers use Forge for provisioning and deploying their PHP applications on DigitalOcean, Linode, AWS, and more. It provides painless Virtual Private Server (VPS) management and is best suited for small and medium projects, performing well with basic load balancing. However, it lacks a few features that would be helpful for bigger projects, such as autoscaling, and developers still have to worry about updating their operating systems and PHP versions. To address these limitations, Otwell created this deployment platform. Here are some of the advantages Laravel Vapor comes with:

Better scalability: Otwell’s demo showed that it can handle over half a million requests with an average response time of 12 ms.
Facilitates collaboration: Vapor is built around teams. You can create as many teams as you require by just paying for one single plan.
Fine-grained control: It gives you fine-grained control over what each team member can do across all the resources Vapor manages.
A “vanity URL” for different environments: Vapor gives you a staging domain, which you can access with what Otwell calls a “vanity URL.” It enables you to immediately access your applications with “a nice domain that you can share with your coworkers until you are ready to assign a custom domain,” says Otwell.
Environment metrics: Vapor provides many environment metrics that give you an overview of an application environment. These include how many HTTP requests the application has received in the last 24 hours, how many CLI invocations there were, the average duration of those operations, how much they cost on Lambda, and more.
Logs: You can review and search your recent logs right from the Vapor UI. It also auto-updates when any new entry comes into the log.
Databases: With Vapor, you can provision two types of databases: fixed-size and serverless. With a fixed-size database, you pick its specifications, like vCPU and RAM; with a serverless database, you do not select these specifications, and it automatically scales according to demand.
Caches: You can create Redis clusters right from the Vapor UI with as many nodes as you want. Vapor supports the creation and management of elastic Redis cache clusters, which can be scaled without experiencing any downtime. You can attach them to any of the team’s projects and use them with multiple projects at the same time.

To watch the entire demonstration by Otwell, check out this video:

https://www.youtube.com/watch?v=XsPeWjKAUt0&feature=youtu.be

Laravel 5.7 released with support for email verification, improved console testing
Building a Web Service with Laravel 5
Symfony leaves PHP-FIG, the framework interoperability group
Bhagyashree R
23 Jul 2019
4 min read

Django 3.0 is going async!

Last year, Andrew Godwin, a Django contributor, formulated a roadmap to bring async functionality into Django. After a lot of discussion and amendments, the Django Technical Board approved his DEP 0009: Async-capable Django yesterday. Godwin wrote in a Google group, “After a long and involved vote, I can announce that the Technical Board has voted in favour of DEP 0009 (Async Django), and so the DEP has been moved to the "accepted" state.”

The reason Godwin thinks this is the right time to bring async-native support to Django is that, starting from version 2.1, it supports Python 3.5 and up. These Python versions have async def and similar native support for coroutines. Also, the web is slowly shifting toward use cases that favor high-concurrency workloads and large parallelizable queries.

The motivation behind async in Django

The Django Enhancement Proposal (DEP) 0009 aims to address one of the core flaws in Python: inefficient threading. Python is not considered a perfect asynchronous language; its asyncio library for writing concurrent code suffers from some core design flaws. There are alternative async frameworks for Python, but they are incompatible with each other. Django Channels brought some async support to Django, but it primarily focuses on WebSocket handling. Explaining the motivation, the DEP says, “At the same time, it's important we have a plan that delivers our users immediate benefits, rather than attempting to write a whole new Django-size framework that is natively asynchronous from the start.” Additionally, most developers are unacquainted with developing Python applications that have async support, and there is a lack of proper documentation, tutorials, and tooling to help them. Godwin believes that Django can become a “good catalyst” to help create guidance documentation.
Goals this DEP outlines to achieve

The DEP proposes to bring support for asynchronous Python into Django while maintaining synchronous Python support in a backward-compatible way. Here are its end goals, as Godwin listed in his roadmap:

Making the blocking parts of Django, such as sessions, auth, the ORM, and handlers, natively asynchronous, with a synchronous wrapper exposed on top where needed to ensure backward compatibility.
Keeping the familiar models/views/templates/middleware layout intact, with very few changes.
Ensuring that these updates do not compromise speed or cause significant performance regressions at any stage of the plan.
Enabling developers to write fully-async websites if they want to, but not enforcing this as the default way of writing websites.
Welcoming new talent into the Django team to help out on large-scale features.

Timeline to achieve these goals

Godwin shared the following timeline in his “A Django Async Roadmap”:

2.1: Current in-progress release; no async work.
2.2: Initial work to add async ORM and view capability, but everything defaults to sync, and async support is mostly threadpool-based.
3.0: Rewrite the internal request handling stack to be entirely asynchronous; add async middleware, forms, caching, sessions, and auth. Start the deprecation process for any APIs that are becoming async-only.
3.1: Continue improving async support; potential async templating changes.
3.2: Finish the deprecation process and have a mostly-async Django.

Godwin posted a summary of his discussion with the Django Technical Board in the Google group. Some of the queries they raised were how the team plans to distinguish async versions of functions/methods from sync ones, how the implementation will ensure there is no performance hit if the user opts out of async mode, and more.
In addition to these technical queries, the board also raised a non-technical concern: “The Django project has lost many contributors over the years, is essentially in a maintenance mode, and we likely do not have the people to staff a project like this.” Godwin sees a massive opportunity lurking in this fundamental challenge, namely to revive the Django project. He adds, “I agree with the observation that things have substantially slowed down, but I personally believe that a project like async is exactly what Django needs to get going again. There's now a large amount of fertile ground to change and update things that aren't just fixing five-year-old bugs.” Read the DEP 0009: Async-capable Django to know more in detail.

Which Python framework is best for building RESTful APIs? Django or Flask?
Django 2.2 is now out with classes for custom database constraints
Fatema Patrawala
22 Jul 2019
6 min read

Npm Inc. co-founder and Chief data officer quits, leaving the community to question the stability of the JavaScript Registry

On Thursday, The Register reported that Laurie Voss, co-founder and chief data officer of the JavaScript package registry NPM Inc, has left the company. Voss’s last day in office was 1 July, though he officially announced the news on Thursday. Voss joined NPM in January 2014 and decided to leave the company in early May this year.

NPM Inc has faced its share of unrest in the past few months. In March, five NPM employees were fired in an unprofessional and unethical way. Three of those employees were later revealed to have been involved in unionization, and they filed complaints against NPM Inc with the National Labor Relations Board (NLRB). Earlier this month, at the third attempt, NPM Inc settled the labor claims brought by these three former staffers through the NLRB. Voss’s resignation is the third in line, after Rebecca Turner, a former core contributor who resigned in March, and Kat Marchan, a former CLI and community architect who resigned from NPM earlier this month.

Voss writes on his blog, “I joined npm in January of 2014 as co-founder, when it was just some ideals and a handful of servers that were down as often as they were up. In the following five and a half years Registry traffic has grown over 26,000%, and worldwide users from about 1 million back then to more than 11 million today. One of our goals when founding npm Inc. was to make it possible for the Registry to run forever, and I believe we have achieved that goal. While I am parting ways with npm, I look forward to seeing my friends and colleagues continue to grow and change the JavaScript ecosystem for the better.”

Voss also told The Register that he supported unions: “As far as the labor dispute goes, I will say that I have always supported unions, I think they're great, and at no point in my time at NPM did anybody come to me proposing a union,” he said. “If they had, I would have been in favor of it.
The whole thing was a total surprise to me.” The Register spoke to one of NPM’s former staffers, who said that employees tended not to talk to management for fear of retaliation, and that Voss seemed uncomfortable defending the company’s recent actions and felt powerless to effect change.

In his post, Voss is optimistic about NPM’s business areas: “Our paid products, npm Orgs and npm Enterprise, have tens of thousands of happy users and the revenue from those sustains our core operations.” However, Business Insider reports that a recent NPM Inc funding round raised only enough to continue operating until early 2020.

https://twitter.com/coderbyheart/status/1152453087745007616

A big question on everyone’s mind currently is the stability of the public Node.js registry, as most users in the JavaScript community do not have a fallback in place. While the community appreciates Voss’s accomplishments, some are disappointed that he could not raise his voice against these odds and had to quit. "Nobody outside of the company, and not everyone within it, fully understands how much Laurie was the brains and the conscience of NPM," Jonathan Cowperthwait, former VP of marketing at NPM Inc, told The Register.

CJ Silverio, a principal engineer at Eaze who served as NPM Inc’s CTO, said that it’s good that Voss is out, but she wasn’t sure whether his absence would matter much to the day-to-day operations of NPM Inc. Silverio was fired from NPM Inc late last year, shortly after CEO Bryan Bogensberger’s arrival. “Bogensberger marginalized him almost immediately to get him out of the way, so the company itself probably won’t notice the departure," she said. "What should affect fundraising is the massive brain drain the company has experienced, with the entire CLI team now gone, and the registry team steadily departing.
At some point they’ll have lost enough institutional knowledge quickly enough that even good new hires will struggle to figure out how to cope." Silverio also mentions that she had heard rumors of NPM Inc eliminating the public registry and continuing only with its paid enterprise service, which would be like killing its own competitive advantage. She says that if the public registry disappears, there are alternative projects, such as Entropic, spearheaded by Silverio and fellow developer Chris Dickinson. Entropic is available under an open source Apache 2.0 license. Silverio says, "You can depend on packages from any other Entropic instance, and your home instance will mirror all your dependencies for you so you remain self-sufficient." She added that the software will mirror any packages installed by a legacy package manager, which is to say npm. As a result, the more developers use Entropic, the less they'll need NPM Inc's platform to provide a list of available packages.

Voss feels npm’s scale is 3x bigger than any other registry, boasting an extremely fast growth rate of approximately 8% month on month. "Creating a company to manage an open source commons creates some tensions and challenges is not a perfect solution, but it is better than any other solution I can think of, and none of the alternatives proposed have struck me as better or even close to equally good," he said.

With NPM Inc’s sustainability at stake, the JavaScript community on Hacker News discussed alternatives in case the public registry comes to an end. One of the comments read, “If it's true that they want to kill the public registry, that means I may need to seriously investigate Entropic as an alternative. I almost feel like migrating away from the normal registry is an ethical issue now. What percentage of popular packages are available in Entropic?
If someone else's repo is not in there, can I add it for them?” Another user responds, “The github registry may be another reasonable alternative... not to mention linking git hashes directly, but that has other issues.” Besides Entropic, another alternative discussed is nixfromnpm, a tool that translates NPM packages into Nix expressions. nixfromnpm is developed by Allen Nelson and two other contributors from Chicago.

Surprise NPM layoffs raise questions about the company culture
Is the Npm 6.9.1 bug a symptom of the organization’s cultural problems?
Npm Inc, after a third try, settles former employee claims, who were fired for being pro-union, The Register reports