How-To Tutorials - Web Development

1802 Articles

Developers think managers don’t know enough about technology. And that’s hurting business.

Fatema Patrawala
01 Jun 2018
7 min read
It's not hard to find jokes online about management not getting software. There has long been a perception that those making key business decisions don't actually understand the technology and software that is at the foundation of just about every organization's operations. Now, research has confirmed that the management and engineering divide is real. In this year's Skill Up survey that we ran among 8,000 developers, we found that more than 60% of developers believe they know more about technology than their manager.

Source: Packtpub Skill Up Survey 2018

Developer perceptions on the topic aren't simply a question of ego; they're symptomatic of real barriers to business success. 42% of the respondents listed management's lack of technical knowledge as a barrier to success. It also appears as one of the top 3 organizational barriers to achieving business goals.

Source: Packtpub Skill Up Survey 2018

To dissect the technical challenges faced by organizations, we also asked respondents to pick the top technical barriers to success. As can be seen from the survey results, a lot of the barriers directly or indirectly relate to management's understanding of technology. Take, for example, management's decisions to continue with legacy systems, investment or divestment in certain projects, or the choice of training programs and vendors.

Source: Packtpub Skill Up Survey 2018

Management tends to weigh decisions based on the magnitude of investment against returns in the immediate or medium term. Also, unless there is hard evidence of performance benefits or cost savings, management is generally wary of approving new projects or spending more on existing ones. This approach is generally robust and has saved businesses precious dollars by curbing pet projects and unruly experiments and research. However, with technology, things aren't always so straightforward. One day some tool is the talk of the town (think Adobe Flash) and everyone seems to be learning it or buying it; a few months or a couple of years down the line, it has gone completely off the radar. Conversely, something that didn't exist yesterday or was confined to some obscure research lab (think self-driving tech, gene editing, robotics, and so on) is now changing the rules of the game, and businesses whose leadership teams have had their ears to the ground topple everyone else, including time-tested veterans.

Early adopters and early movers make the most of tech trends. This requires those in a position to make decisions within organizations to be aware of the changing tech landscape, to the extent that they can predict what's going to replace the current reigning tech and in what timeframe. It requires that management is aware of what's happening in adjacent industries, or even in seemingly unrelated ones. Who knew Unity (a game platform), Nvidia (a chipmaker), and Google (a search engine) would enter the auto industry, all thanks to self-driving tech? These are some of the overarching factors; let us look at each one in detail.

Why do developers believe there is a management knowledge gap?

Here are a few of the reasons respondents gave:

Rapid pace of technology change: The rapid rate of technology change is significantly impacting IT strategy. Not only are there plenty of emerging technology trends, from AI to cloud, they're all coming at the same time, and even affecting each other.
It's clear that keeping up with the rate of digital advancement - for example, automation, harnessing big data, emerging technologies, and cyber security - will pose a significant challenge for leaders and senior management, adding a whole new layer of complexity as they try to stay ahead of the competition and innovate.

Balancing strategic priorities while complying with changing regulations: Another major challenge for senior management is to balance strategic priorities with the regulatory demands of the industry. In 2018, GDPR has been setting a new benchmark for the protection of consumer data rights by making organisations more accountable. Governed by GDPR, organisations and senior management will now be responsible for guarding every piece of information connected to an individual. In order to be GDPR compliant, management will begin introducing the correct security protocols in their business processes. This will include encryption, two-factor authentication, and key management strategies to avoid severe legal, financial, and reputational consequences. To make the right decisions, they will need to be technically competent enough to understand the strengths and limitations of the tools and techniques involved in the compliance process.

Finding the right IT talent: Identifying the right talent with the skill sets that you need is a big challenge for senior management. They are constantly trying to find and hire IT talent, such as skilled data scientists and app developers, to accommodate and capitalize on emerging trends in cloud and the API economy. The team has to take care to bring in the right people and let them create magic with their development skills. Alongside this, they also need to reinvent how they manage, attract, retain, motivate, and compensate these folks. Responses to this Quora question highlight that it can be a difficult process for managers to go through a lengthy recruitment cycle. And the worst feeling is when, after all the effort, the candidate declines the offer for another, more lucrative one.

So much promising technology, so little time: Time is tight in business and tech. Keeping pace with how quickly innovative and promising technologies crop up is easier said than done. There are so many interesting technologies out there, and there's so little time to implement them fast enough. Before anyone can choose a technology that might work for the company, a new product appears on the horizon. Once you see something you like, there's always something else popping up. While managers are working on a particular project to make all the parts work together for an outstanding customer experience, it takes time to do so and to implement these technologies. When juggling all of these moving parts, managers are always looking for technologies and ways to implement great things faster. That's a major reason companies have a separate CTO, VP of Engineering, and CEO, each functioning at their own level and in their own department.

Murphy's law of unforeseen IT problems: One of the biggest problems when you're working in tech is Murphy's Law. This is the law that states "Anything that can go wrong, will -- at the worst possible moment." It doesn't matter how hard we have worked, how strong the plan is, or how many times things are tested. Once you get into the project, if something can go wrong, it will. There are times we face IT problems that we don't see coming. It doesn't matter how much you try to plan -- stuff happens.
When management doesn't properly understand technology, it's often hard for them to appreciate how problems arise and how long it can take to solve them. That puts pressure on engineers and developers, which can make managing projects even harder.

Overcoming perfectionism with an agile mindset: Senior management often wants things done yesterday, and they want them done perfectly. Of course, this is impossible. While Agile can help improve efficiency in the development process, perfectionism is anathema to Agile. It's about delivering quickly and consistently, not building something perfect and then deploying it. Getting management to understand this is a challenge for engineers - good management teams will understand Agile and what the trade-offs are. At the forefront of everyone's mind should be what the customer needs and what is going to benefit the business.

To conclude on a lighter note, the situation is neatly summed up in a Dilbert comic. With purpose, process, and changing technologies, managers need to change the way they function and manage. People don't leave companies, they leave bad managers - and the same holds true for technical workers: they don't leave bad companies, they leave non-technical managers who make bad technical decisions.

Don't call us ninjas or rockstars, say developers
96% of developers believe developing soft skills is important

Shapefiles in Leaflet

Packt
18 Aug 2014
5 min read
This article, written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows us how a shapefile can be used to create geographical features on a map, and explains how shapefiles can be used to add a pop up or for styling purposes. (For more resources related to this topic, see here.)

Using shapefiles in Leaflet

A shapefile is the most common geographic file type you are likely to encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf files at a minimum. These files contain the geometry, the index, and a database of attributes. Your shapefile will most likely include a projection file (.prj) that tells the application the projection of the data, so that the coordinates make sense to the application. In the examples, you will also have a .shp.xml file that contains metadata, and two spatial index files, .sbn and .sbx.

To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format, because it will contain multiple files.

To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps:

1. First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin; the second is the shapefile parser it depends on, shp.js:

```
<script src="leaflet.shpfile.js"></script>
<script src="shp.js"></script>
```

2. Next, create a new shapefile layer and add it to the map. Pass the layer the path to the zipped shapefile:

```
var shpfile = new L.Shapefile('council.zip');
shpfile.addTo(map);
```

Performing the preceding steps will add the shapefile to the map, but you will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data, followed by the options. The options are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer:

```
var shpfile = new L.Shapefile('council.zip', {
  onEachFeature: function (feature, layer) {
    layer.bindPopup("<a href='" + feature.properties.WEBPAGE + "'>Page</a><br>" +
      "<a href='" + feature.properties.PICTURE + "'>Image</a>");
  }
});
```

In the preceding code, you pass council.zip to the shapefile, and for options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display, using the format feature.properties.NAME-OF-PROPERTY.
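Note that the snippets in this article assume a Leaflet map object already exists on the page. In case it helps to see that assumed setup, here is a minimal sketch; the tile URL, div id, and view coordinates are illustrative, not part of the original article:

```
// A minimal map setup assumed by the snippets in this article.
// The page needs a <div id="map"></div> with a height set in CSS.
var map = L.map('map').setView([35.0844, -106.6504], 11); // Albuquerque

// A public OpenStreetMap tile layer as the base map.
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);
```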
To find the names of the properties in a shapefile, you can open the .dbf file and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing their contents. If you do not know the names of the properties for a given shapefile, the following example shows you how to get them and then display them, with their values, in a pop up:

```
var shpfile = new L.Shapefile('council.zip', {
  onEachFeature: function (feature, layer) {
    var holder = [];
    for (var key in feature.properties) {
      holder.push(key + ": " + feature.properties[key] + "<br>");
    }
    var popupContent = holder.join("");
    layer.bindPopup(popupContent);
  }
});
shpfile.addTo(map);
```

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the object, grabbing each key and concatenating the key name with the value and a line break. You push each line into the array and then join all of the elements into a single string. When you use the .join() method, it will separate each element of the array in the new string with a comma; you can pass empty quotes to remove the comma. Lastly, you bind the pop up with the string as the content and then add the shapefile to the map.

The shapefile also takes a style option. You can pass any of the path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and makes it slightly transparent:

```
var shpfile = new L.Shapefile('council.zip', {
  style: function (feature) {
    return { color: "black", fillColor: "red", fillOpacity: 0.75 };
  }
});
```

Summary

In this article, we learned how shapefiles can be added to a geographical map and how pop ups are added to the features. In the full book, you will also learn how to connect to an ESRI server that has an exposed REST service.

Resources for Article:

Further resources on this subject:
Getting started with Leaflet [Article]
Using JavaScript Effects with Joomla! [Article]
Quick start [Article]

Bitbucket to no longer support Mercurial, users must migrate to Git by May 2020

Fatema Patrawala
21 Aug 2019
6 min read
Yesterday marked the end of an era for Mercurial users, as Bitbucket announced that it will no longer support Mercurial repositories after May 2020. Bitbucket, owned by Atlassian, is a web-based version control repository hosting service for source code and development projects. It has supported Mercurial since its beginning in 2008, and Git since October 2011. Now, after almost ten years of sharing its journey with Mercurial, the Bitbucket team has decided to remove Mercurial support from Bitbucket Cloud and its API.

The official announcement reads, "Mercurial features and repositories will be officially removed from Bitbucket and its API on June 1, 2020."

The Bitbucket team also communicated the timeline for sunsetting the Mercurial functionality. After February 1, 2020, users will no longer be able to create new Mercurial repositories. After June 1, 2020, users will not be able to use Mercurial features in Bitbucket or via its API, and all Mercurial repositories will be removed. All current Mercurial functionality in Bitbucket will remain available through May 31, 2020.

The team said the decision was not an easy one for them and that Mercurial held a special place in their heart. But according to a Stack Overflow Developer Survey, almost 90% of developers use Git, while Mercurial is the least popular version control system, with only about 3% developer adoption. Apart from this, Mercurial usage on Bitbucket has seen a steady decline, and the percentage of new Bitbucket users choosing Mercurial has fallen to less than 1%. Hence the decision to remove Mercurial repos.

How can users migrate and export their Mercurial repos?

The Bitbucket team recommends that users migrate their existing Mercurial repos to Git. They have also extended support for migration and kept the available options open for discussion in a dedicated Community thread, where users can discuss conversion tools and migration tips, and offer troubleshooting help. If users prefer to continue using Mercurial, there are a number of free and paid Mercurial hosting services available. The Bitbucket team has also created a Git tutorial that covers everything from the basics of creating pull requests to rebasing and Git hooks.

Community shows anger and sadness over the decision to discontinue Mercurial support

There is outrage among Mercurial users, who are extremely unhappy with this decision by Bitbucket. They have expressed their anger not on one platform but on multiple forums and community discussions. Users feel that Bitbucket's decision to stop offering Mercurial support is bad, but the decision to also delete the repos is evil. On Hacker News, users speculated that this decision was driven by marketing rather than by technically superior architecture and ease of use. They feel GitHub has successfully marketed Git, and that's how the two have become synonymous for the developer community. One of them comments:

"It's very sad to see bitbucket dropping mercurial support. Now only Facebook and volunteers are keeping mercurial alive. Sometimes technically better architecture and user interface lose to a non user friendly hard solutions due to inertia of mass adoption. So a lesson in Software development is similar to betamax and VHS, so marketing is still a winner over technically superior architecture and ease of use. GitHub successfully marketed git, so git and GitHub are synonymous for most developers.
Now majority of open source projects are reliant on a single proprietary solution Github by Microsoft, for managing code and project. Can understand the difficulty of bitbucket, when Python language itself moved out of mercurial due to the same inertia. Hopefully gitlab can come out with mercurial support to migrate projects using it from bitbucket."

Another user comments that Mercurial support was the only reason for him to use Bitbucket when GitHub is miles ahead of it, and that now that it is dropping Mercurial too, Bitbucket will end soon. The comment reads:

"Mercurial support was the one reason for me to still use Bitbucket: there is no other Bitbucket feature I can think of that Github doesn't already have, while Github's community is miles ahead since everyone and their dog is already there. More importantly, Bitbucket leaves the migration to you (if I read the article correctly). Once I download my repo and convert it to git, why would I stay with the company that just made me go through an annoying (and often painful) process, when I can migrate to Github with the exact same command? And why isn't there a "migrate this repo to git" button right there? I want to believe that Bitbucket has smart people and that this choice is a good one. But I'm with you there - to me, this definitely looks like Bitbucket will die."

On Reddit, programmers see this as a big change, since Bitbucket is the major Mercurial hosting provider, and feel that it was announced at pretty short notice, leaving too little time for migration. Beyond the developer community forums, users have expressed displeasure on the Atlassian community blog as well. A team of scientists commented:

"Let's get this straight : Bitbucket (offering hosting support for Mercurial projects) was acquired by Atlassian in September 2010. Nine years later Atlassian decides to drop Mercurial support and delete all Mercurial repositories. Atlassian, I hate you :-) The image you have for me is that of a harmful predator. We are a team of scientists working in a university. We don't have computer scientists, we managed to use a version control simple as Mercurial, and it was a hard work to make all scientists in our team to use a version control system (even as simple as Mercurial). We don't have the time nor the energy to switch to another version control system. But we will, forced and obliged. I really don't want to check out Github or something else to migrate our projects there, but we will, forced and obliged."

Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with 'compromised' valid credentials, leaving behind a ransom note
BitBucket goes down for over an hour

Up and Running with Views

Packt
27 Apr 2016
21 min read
In this article by Gregg Marshall, the author of Mastering Drupal 8 Views, we will be introduced to the world of Views in Drupal. Drupal 8 was released on November 19, 2015, after almost 5 years of development by over 3,000 members of the Drupal community. Drupal 8 is the largest refactoring in the project's history. One of the most important changes in Drupal 8 was the inclusion of the most popular contributed module, Views. Similar to including CCK in Drupal 7, adding Views to Drupal 8 influenced how Drupal operates, as many of the administration pages, such as the content list page, are now Views that can be modified or extended by site builders.

Every site builder needs to master the Views module to really take advantage of Drupal's content structuring capabilities, as it gives site builders the ability to create lists of content formatted in many different ways. A single piece of content can be used for different displays, and all the content in each View is dynamically created when a visitor comes to a page. It was the only contributed module included in the Acquia Site Builder certification examination for Drupal 7.

In this article, we will discuss the following topics:

- Looking at the Views administration page
- Reviewing the general Views module settings
- Modifying one of the views from Drupal core to create a specialized administrative page

(For more resources related to this topic, see here.)

Drupal 8 is here, should I upgrade?

"Jim, this is Lynn, how are things at Fancy Websites?"

"I read that Drupal 8 is being released on November 19. From our conversations this year, I guess that means it is time to upgrade our current Drupal 6 site. Should I upgrade to Drupal 7 or Drupal 8?"

"Lynn, we're really excited that Drupal 8 is finally ready. It is a game changer, and I can name 10 reasons why Drupal 8 is the way to go:

1. Mobile device compatibility is built into Drupal 8's DNA. Analytics show that 32% of your site traffic is coming from buyers using phones, and that's up from only 19% compared to last year.
2. Multilingual is baked in and really works, so we can go ahead and add the Spanish version of the site we have been talking about.
3. There's a new theme engine that will make styling the new site much easier. It's time to update the look of your site; it's looking pretty outdated compared to the competition.
4. Web services is built in. When you're ready to add an app for your customers' phones, Drupal 8 will be ready.
5. There are lots of new fields, so we won't need to add half a dozen contributed modules to let you build your content types.
6. Drupal 8 is built using industry standards. This was a huge change you won't see, but it means that our shop will be able to recruit new developers more easily.
7. The configuration is now stored in code. Finally, we'll have a way for you to develop on your local computer and move your changes to staging and then to production without having to rebuild content types and Views manually over and over.
8. The WYSIWYG editor is built in. The complex setup we went through to get the right buttons and make the output work won't be necessary in Drupal 8.
9. There's a nice tour capability built in so that you can set up custom "how to" demonstrations for your new users. This should free up a lot of your time, which is good given how you are growing.
10. I've saved the best for last. Your favorite module, Views, is now built into core! Between Fields in Drupal 7 and now Views in Drupal 8, you've got the tools to extend your site built right into core.
The bottom line is I can't imagine not going ahead and upgrading to Drupal 8. Views in core is reason enough. Why don't I set up a Drupal 8 installation on your development server so that you can start playing with Drupal 8? We're not doing any development work on your site right now, and we still have staging to test any updates."

"That sounds great, Jim! Let me know when I can log in."

Less than an hour later, the e-mail arrived; the Drupal 8 development site was set up and ready for Lynn to start experimenting. Based on the existing Drupal 6 site, Lynn set up four content types with the same fields she had on the current site. Jim was able to use the built-in migrate module to move some of her data to the new site. Lynn was ready to start exploring Views in Drupal 8.

Looking at the Views administration page

That evening, Lynn logged into the new site. Clicking on the Manage menu item, she then clicked on the Structure submenu item, and at the bottom of the list displayed on the Structure page, she clicked on the Views option. About that time, Jackson came in and settled into his spot near her terminal. "Hi Jackson, ready to explore Views with me?"

Looking at the Views administration page, Lynn noticed there were already a number of Views defined. Scanning the list, she said, "Look Jackson, Drupal 8 uses Views for administration pages. This means we can customize them to fit our way of doing things. I like Drupal 8 already!" Jackson purred. Lynn studied the Views administration page.

As Lynn looked at each view, the listing looked familiar; she had seen the same kind of listing on her Drupal 6 site. Trying the OPERATIONS pull-down menu on the first View, she saw that the options were Edit, Duplicate, Disable, and Delete. "That's pretty clear; I guess Duplicate is the same as Clone on my old version of Views. I can change a View, create a new one using this one as a template, make it temporarily unavailable, or wipe it completely off the face of the earth."

"I wonder what kind of settings there are on the Settings tab of this listing page. Look, Jackson, there's a couple of subtabs hiding on the Settings page." As Lynn didn't want to mess up her new Drupal site, she called Jim. "Hi, Jim. Can you give me a quick rundown on the Views Settings tab?"

"Sure," he replied.

Views settings

"Looking at the Views Settings tab, you'll notice two subtabs, Basic and Advanced. Select the advanced settings tab by clicking on Advanced.

Views advanced settings

Let's look at the Advanced tab first, since you'll probably never use these settings. The first option, Disable views data caching, shouldn't be checked unless you are having issues with Views not updating when the data changes. Even then, you should probably disable caching on a per-View basis using the caching setting in the View's edit page, in the third column, labeled Advanced, near the bottom of the column. Disabling Views' data caching can really slow down the page loads on your site. You might actually use the Advanced settings tab if you need to clear all the Views' caches, which you would do by clicking on the Clear Views' cache button.

The other advanced setting is DEBUGGING, with an Add Views signature to all SQL queries checkbox.
Unless you are using MySQL's logs to debug queries, which only an advanced developer would do, you aren't going to want this overhead added to Views queries, so just leave it unselected.

Views basic settings

Moving to the Basic tab, there are a number of settings that might be handy, and I'd recommend changing some of the defaults. The first option, Always show the master (default) display, might or might not be useful. If you create a new View and don't select either create a page or create a block (or provide a REST export, if this module is enabled), then a default View display is created, called master. If you select either option or both, then page and/or block View displays are created, and generally, you won't see master. It's there; it's just hidden.

Sometimes, it is handy to be able to edit or use the master display. While I don't like creating a lot of displays in each View, sometimes I do create two or three if the content being displayed is very similar. An obvious example is when you want to display the same blog listing as either a page or in a block on other pages. The same teaser information is displayed, just in different ways, so having the two displays in the same View makes sense. Just make sure, when you customize each display, that any changes you make are set to apply only to the current display and not to all displays. Otherwise, you might make changes you hadn't planned on in the other displays. Most of the time, you will see a pull-down menu that defaults to All displays, but you can select This page (override) to have the setting change apply only to this display.

Having the master display show lets you create the information that will be the same in all the displays you are creating; then, you can create and customize the different displays. Using our blog example, you may create a master display that has a basic list of titles, with the titles linking to the full blog post. Then, you can create a blog display page, and using the This page (override) option, you can add summaries, add more links, and set the results to 10 per page. Using the master display, you can go back and add a display block that shows only the last five blog posts without any pager, again applying each setting only to the block display. You might then go back to the master display and create a second block that uses the tags to select five related blog posts, again making sure that the changes are applied to the current block and not all displays. Finally, when you want to change something that will affect all the displays, make the change on the master display, and this time, use the All displays option to make sure the other displays are updated. In our blog example, you might decide to change the CSS class used to display the titles to apply formatting from the theme; you probably want this to look the same in every possible display of the blog posts.

The next basic setting for Views is Allow embedded displays. You will not enable this option; it is for developers who will use Views-generated content in their custom code. However, if you see it enabled, don't disable it; doing so would likely break something on your site that uses this feature. The last setting before the LIVE PREVIEW SETTINGS field set is Label for "Any" value on non-required single-select exposed filters, which lets you pick either <Any> or -Any- as the format for exposed filters that allow a user to ignore the filter.
Live Preview Settings

There are several settings in the LIVE PREVIEW SETTINGS field set I like to enable, because they make debugging your Views easier. If the LIVE PREVIEW SETTINGS field set is closed (that is, the options are not showing), click on the title next to the arrow, and it will open.

I generally enable the Automatically update preview on changes option. This way, any change I make to the View while I edit it shows the results that would occur after each change. Seeing things change right away gives me a clue whether a change will have an effect I'm not expecting. A lot of Views options can be tricky to understand, so a bit of trial and error is often required. Hence, expect to make a change and not see what you expect; just change the setting back, rethink the problem, and try again. Almost always, you'll get the answer eventually. If you have a View that is really complex and very slow, you can always disable the live preview while you edit the View by toggling the Auto preview option in the grey Preview bar just under all the View's settings.

The next two options control whether Views will display the SQL query generated by the Views options you selected in the edit screen. I like to display the SQL query, so I will select the Above the preview option under Show SQL query and then select the Show the SQL query checkbox that follows it. If you don't check the Show the SQL query option, it doesn't matter what you select for above or below the preview; if you expect to see the SQL queries and don't, it is likely that you set one option and not the other. Showing the SQL query can be confusing at first, but after a while, you'll find it handy for figuring out what is going on, especially if you have relationships (or should have relationships and don't realize it). And, of course, if you can't read the query, you can always e-mail me for a translation to English.

The next option, Show performance statistics, is handy when trying to figure out why some Views-generated page is loading slowly. But usually, this isn't an issue you'd be thinking of, so I'd leave it off. You want to focus on getting the right information to display exactly the way you want, without thinking about performance. If we later decide it's too slow, the developer we'll assign to it will use this information and turn the option on in development. The same is true of Show other queries run during render during live preview. This information is handy for figuring out performance issues and, occasionally, a display formatting issue during theming, but it isn't something you as a nonprogrammer should be worried about. Seeing all the extra queries can be confusing and intimidating, yet it doesn't really offer you any help in creating a View.

"Oh, don't forget to click on Save configuration if you change any settings. I don't know how many times I've forgotten to save a configuration change in Drupal and then wondered why my change hasn't stuck. Does this help?"

"Thanks Jim, that is great. I owe you a coffee next time we get together." Hanging up the phone, Lynn said, "What do you think, Jackson? Let's start off by creating a property maintenance page for our salespeople to use. I think I'll get a quick win by modifying one of Drupal's core views."

Adapting an existing View

Lynn would use her knowledge from using Views on her existing Drupal site, so she could move quickly.
The existing content page provided by Views is general purpose and offers lots of options, and not all of these options are appropriate for all content editors.

Lynn started creating her property maintenance page by going to the Views listing page (Manage | Structure | Views) and selecting Duplicate from the OPERATIONS pull-down menu on the right-hand side of the content view's row. On the next screen, she named the view Property Maintenance and clicked on the Duplicate button. When the View edit screen appeared, she was ready to adapt it to her needs.

First, she selected the Page display, assuming the Always show the master (default) display setting was already selected; otherwise, the Page display would be selected by default, as it is the only display in this View. Remember that any change made in the View edit page isn't saved until you click on the Save button. Also, unsaved changes won't show up when the page/block is displayed. If you make a change, look at it using another browser or tab, and don't see the change reflected, it is likely that you didn't save the change you just made.

Editing the Property Maintenance view

Starting with the left-hand column of the View edit screen, Lynn changed the title by clicking on the Content link next to the Title label. She changed the title to Property Maintenance. Moving down the column, Lynn decided that the table display and settings were okay on the original screen and skipped them.

Under the FIELDS section, Lynn decided to delete the Content: Node operations bulk form, Content: Type (Content Type), and (author) User: Name (Author) fields/columns, as they weren't useful to the real estate salespeople who would be using this page. To do this, she clicked on Content: Node operations bulk form and then on the Remove link at the bottom of the Configure field modal that appeared. She repeated the removal for the Content: Type (Content Type) and (author) User: Name (Author) fields. Lynn noted that the username field appeared to be the only field referencing the author entity, so she could delete the relationship later.

Moving on to FILTER CRITERIA, Lynn was a bit confused by the first two filters. When she clicked on Content: Published status or admin user, the description said "Filters out unpublished content if the current user cannot view it". "This seems reasonable, let's keep this filter," she thought, and she clicked on Cancel. Next was Content: Publishing status (grouped), an exposed filter that lets the user filter by either published or unpublished. This seemed useful, so Lynn kept it and clicked on Cancel. The next filter, Content: Type (exposed), is necessary but shouldn't be selectable by the user, so Lynn clicked on it to edit the filter, unselected the Expose this filter to visitors option, and selected just the Property content type, making the filter select only content that is a property. The next filter, Content: Title (exposed), is handy, so Lynn left it as is. The final filter, Content: Translation language (exposed), isn't needed, as Lynn's site isn't multilingual, so Lynn deleted the filter.

Moving on to the center column of the View edit page, under the PAGE SETTINGS heading, Lynn changed the path for the View to /admin/property-maintenance by clicking on the existing /admin/content/node path, making the change, and clicking on the Apply button.
Next in this column was the menu setting. Lynn didn't want the property maintenance page to be part of the administration content page, so she clicked on Tab: Content and changed the menu type to Normal menu entry. This changed the fields displayed on the right-hand side of the modal, so Lynn changed the Menu link title to Property Maintenance, left the description blank, and left Show as expanded unselected. In the Parent pull-down menu, she selected the <Tools> menu. Tools is the default Drupal menu for site tools that is only shown to authorized users who are logged into the site and can view the page linked to, which real estate salespeople will be able to do. She left the weight at -10, planning on reorganizing this menu when she has most of it configured. As this was the last option, she clicked on Apply to exit the modal.

The last setting in the PAGE SETTINGS section is Access. Lynn knew she needed to change the required permission, as she didn't plan on giving real estate salespeople access to the main content page, but she wasn't sure which permission to give them. Looking through the permissions page (the People | Permissions tab), Lynn didn't see any permission that made sense for who should be able to see this maintenance page. So, she clicked on the Permission link in the center column of the View edit page and changed the Access value from Permission to Role. When she clicked on the Apply (all displays) button, she could select the role(s) that would be able to see this page. She selected the Administrator, Real Estate Salesperson, and Office Administrator roles.

One way to test access while you develop is to use a second browser and log in as the other kind of user. A common mistake in Drupal is to see content while logged in as an administrator that can't be seen by other users. This can also be done using a second tab opened in "incognito" mode, but I find it easier to use a different browser (for example, Chrome and Firefox). You can even have three browsers open to the same page to test a third kind of user.

Continuing down the column, Lynn decided she didn't need a header or footer on this administration page, at least for now, but she did want to change the NO RESULTS BEHAVIOR message. Drupal has a text message defined, so she clicked on the Global: Unfiltered text (Global: Unfiltered text) link, changed the Content field to "No properties meeting your filter criteria are available.", and clicked on the Apply (all displays) button. The final section, PAGER, seemed fine, so Lynn skipped over it and moved to the third column of the View edit page, ADVANCED SETTINGS.

As Lynn had changed the setting to always show the advanced settings, she noticed that there was a relationship for author. As she had deleted the author name from the display, there wasn't any reason to keep the relationship, because she wasn't using any of the author's details. She clicked on the author link and then on the Remove link at the bottom of the modal. Reviewing the results of the live preview, Lynn was satisfied and clicked on the Save button to save her modified view.

There is a maxim in computers: Save Early, Save Often. As you develop or modify your View, when you reach a point where your progress so far is okay, click on the Save button. Then, if you make a terrible mistake in the next change, you can click on the Cancel button and then click on Edit to resume from where you last saved.
Before saving, the View edit screen showed the resulting Property Maintenance View with all the changes applied.

Debugging – Live Preview is your friend

Assuming you enabled Live Preview in your Views settings earlier in this article, as you are building your View, Views will show what will be displayed. Formatting and some JavaScript displays, such as Google mapping, can't be displayed in Live Preview, but for debugging, you generally don't need them. Many Views challenges come down to getting the data that you want to display, or getting that data displayed the way you want.

Many Views are created using the fields content display. Often, you will see fields that you don't want displayed when reviewing Live Preview, because you didn't check the Exclude from display option in the field configuration. Or, you will select a field from the Add fields list that isn't actually the field that displays the data you want - for instance, do you want article tags or article tags (field_tags: delta)? Sometimes you have to just try one and see what happens. If it isn't the right option, delete the field and try another. Experience will guide you as you use Views, but even the most experienced site builders wonder what some field or field option does in the context of the View they are building. Remember to save the View before you experiment with this next idea; then, if it doesn't work out, you can just click on Cancel and not lose all the previous work you put in.

If you disabled Live Preview, hopefully you have decided to go back and enable it; seeing the output and looking at the generated SQL queries is really very useful in trying to figure out what might be going wrong.

"Okay, Jackson, I see that a lot of what I knew from the previous versions of Views applies to the version in Drupal 8. Now that I've quickly gone through the edit screen to modify a core View, let's get serious and really learn the ins and outs of this version of Views."

Summary

In this article, we covered the Views administration page, where you can add, delete, edit, and duplicate views. Then, we reviewed all the general Views module settings. Finally, we modified a core View, quickly going through several configuration options. If you have used Views in older versions of Drupal, you should feel comfortable. If this is your first introduction to Views, don't panic that we glossed over a lot or if you felt lost.

Resources for Article:

Further resources on this subject:
Working with Drupal Audio in Flash (part 2) [article]
Modular Programming in ECMAScript 6 [article]
Using NoSQL Databases [article]

4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more

Bhagyashree R
26 Nov 2019
8 min read
At ReactiveConf 2019, Richard Feldman, author of Elm in Action and creator of elm-css, made four predictions about what the future of web development will look like by the end of 2020 and 2025. ReactiveConf 2019 was a three-day functional programming event that ran from October 30 to November 1 in Prague. The event hosted a number of great talks sharing the latest global trends in web and mobile development. Among the topics covered this year were PWAs, optimization, security, visualization, accessibility, and diversity.

Predicting the future of the web is about safer bets, not trends

Feldman started out by asking a question that developers often come across: which technology stack should you choose for your next project? Previously, the common advice was to go for technologies that are "boring" or mature instead of the latest and shiniest ones. Going by this advice, Feldman and his team chose the technology that had the biggest library ecosystem, was the most mature part of the LAMP stack, and was adopted by many successful companies: Perl. Since then, however, Perl has gradually lost its popularity. The lesson Feldman learned was that "any technology that we choose, no matter how popular, how mainstream, how much traction it got today, you are still making a bet." He says that predicting how the future of current technologies will look, and following that prediction, is safer than blindly accepting what everyone else is doing. After setting up the premise, Feldman moved on to sharing his predictions.

Prediction 1: "TypeScript takes over the JS world"

Back in 2012, Anders Hejlsberg, the original designer of C#, Delphi, and Turbo Pascal, came up with another programming language called TypeScript. This language was introduced as a "superset" of JavaScript that helps developers build JavaScript apps that scale. Some of the positives this language brought to JavaScript development were excellent tooling enabled by static typing, self-documenting code, continuous feedback from autocomplete, and more. Since its introduction, TypeScript has seen huge adoption. Almost all the big frontend frameworks, such as React, Angular, and Vue, have extensive TypeScript support. More and more JavaScript developers and framework authors are taking advantage of the excellent tooling and other benefits it provides. Its latest release, TypeScript 3.7, includes much-awaited features like assertion functions, recursive type aliases, top-level await, nullish coalescing, and optional chaining.

Further learning: If you are interested in building with TypeScript and its latest features, check out the book Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges. It covers the basics through to more advanced concepts, explaining many design patterns, techniques, frameworks, libraries, and tools along the way. You will learn a ton about modern web frameworks like Angular, Vue.js, and React, and about front-end tooling such as Node.js, npm, yarn, Webpack, Parcel, and Jest.

Despite its popularity, not everyone is using TypeScript. Along with verbose code, it is "unsound" by design and gives a false sense of security in some instances, Feldman shared. So, there are people who like TypeScript and there are people who don't.
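To make the TypeScript 3.7 features mentioned above concrete, here is a minimal sketch (the interface, function names, and values are illustrative, not taken from the talk) showing optional chaining, nullish coalescing, and an assertion function:

```
interface User {
  name: string;
  address?: { city?: string };
}

// Optional chaining (?.) short-circuits to undefined when address is missing;
// nullish coalescing (??) supplies a fallback only for null or undefined.
function getCity(user: User): string {
  return user.address?.city ?? "Unknown";
}

// Assertion function: after it returns, the compiler narrows `value` to string.
function assertIsString(value: unknown): asserts value is string {
  if (typeof value !== "string") {
    throw new Error("Expected a string");
  }
}

const city = getCity({ name: "Ada" }); // "Unknown" - no address present
assertIsString(city);
console.log(city.toUpperCase());       // safe: `city` is typed as string here
```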
The most important factor in predicting its future is how it is affecting the teams actually using it. Feldman said, "I hear a lot of teams saying we are trying Typescript, we have used Typescript, or we are using TypeScript. I hear almost no teams saying we tried Typescript and then went back to JavaScript."

Feldman predicted that by the end of 2020, TypeScript will be the most common choice for new commercial JS projects, and that by the end of 2025, there will be more people writing TypeScript on a daily basis than people writing vanilla JavaScript.

Prediction 2: "WebAssembly is going to expand the web app pie"

First announced in 2015, WebAssembly is Assembly for the browser, with a compact binary format that runs at near-native execution speed. It is also a compilation target for other high-level languages, including C/C++ and Rust. Its "closer to the metal" nature enables a number of computationally intensive use cases on the web, including games, media editing, speech synthesis, and client-side computer vision, among others. (To start your WebAssembly journey, the book Hands-On Game Development with WebAssembly by Rick Battagline introduces web and game devs to WebAssembly by walking through the development of a retro arcade game.)

WebAssembly is designed to work alongside JavaScript, which means you can call WebAssembly modules from JavaScript code. Though it can be used to improve the performance of JavaScript apps and libraries, Feldman doubts that this will be the major way developers use it in the future. This is because the existing performance of JavaScript is generally accepted, and promising some percentage of improvement in speed is not going to be a game-changer for WebAssembly. Instead, Feldman believes that WebAssembly will enable browsers to compete with app stores and installers. Getting users to install an app can be a significant obstacle to adoption. WebAssembly can help distribute native code without code signing, app stores, or development kits, and the web as a delivery platform provides deep linking and other sharing capabilities. He explained this through the example of Figma, a collaborative interface design tool built in C++, which users can access just by going to a URL. However, distributing applications built in Rust, C++, or Go on the web does not mean the end of HTML, CSS, and JavaScript. WebAssembly will simply expand what he calls the "web app pie."

Feldman predicted that by the end of 2020, WebAssembly will not have made much difference to the makeup of the web, but that by the end of 2025, we will start to see a niche of heavyweight web apps that are essentially native apps distributed through the browser.
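As a rough sketch of what "calling WebAssembly from JavaScript" looks like in practice (the module URL and its exported add function are hypothetical; any compiled .wasm module with matching exports would do), a page can load and call a module like this:

```
// Runs in a module context (top-level await). "/example.wasm" is a
// placeholder for any compiled module that exports an add(a, b) function.
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("/example.wasm"),
  {} // import object: host functions/memory the module needs; none here
);

const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // 5, computed inside the WebAssembly module
```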
Prediction 3: "npm lasts, surviving further problems"

In recent years, developers have witnessed and survived quite a few npm disasters. In 2016, a developer unpublished more than 250 npm-managed modules, which affected Node, Babel, and thousands of other projects. Then in 2018 came the event-stream case, in which an ill-intentioned user took ownership of the widely used package through social engineering and infected it with a malicious package. Another problem with npm is that it can allow the execution of arbitrary code from thousands and thousands of packages through the "postinstall" hook in package.json. Feldman recommends disabling "postinstall" and "preinstall" scripts with the following command:

```
npm config set ignore-scripts true
```

We are also seeing some alternatives to npm. Feldman mentioned Entropic, a federated package registry with a new CLI, introduced by the former CTO of npm, C J Silverio. Feldman believes that despite these alternatives, and despite financial, security, or other problems, developers will continue to use npm because of its strong network effects.

Drawing from these events, Feldman predicted that by the end of 2020 we can expect one more security incident, and that by the end of 2025 we might see at least one malicious npm package infecting many developers' machines.

Prediction 4: "JS alternatives stay niche, but age well"

When it comes to JavaScript alternatives, we have two options: JS dialects and non-JS dialects. Some of the JS dialects are TypeScript, Dart, and CoffeeScript, among others. Non-JS dialects include ClojureScript, ReasonML, and Elm, which provide a different experience than writing JavaScript.

Representing the Elm core team at the event, Feldman listed a few reasons why developers should try Elm. It renders faster and generates smaller builds than most top JS frameworks, and it almost never crashes. It has its own package ecosystem and is often praised for its very detailed error messages. After sharing the benefits of Elm, Feldman concluded that JavaScript alternatives will stay niche, but age well. This essentially means that people who have chosen these alternatives and are happy with them will continue to use them regardless of the popularity of TypeScript.

By the end of 2020, compile-to-JS languages will continue to grow, but not as fast as TypeScript. By the end of 2025, non-JavaScript dialects will have aged well, although at that time TypeScript will still be more popular.

Want to add TypeScript to your skillset? Check out the book Learn TypeScript 3 by Building Web Applications by Sebastien Dubois and Alexis Georges, a comprehensive guide that teaches how to wisely use the latest features in TypeScript 3. You will learn how to build web applications with Angular, Vue.js, and React, and how to use modern front-end development tooling such as Node.js, npm, yarn, Webpack, Parcel, and Jest.

Microsoft releases TypeScript 3.7 with much-awaited features like Optional Chaining, Assertion functions and more
Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices
An introduction to TypeScript types for ASP.NET core [Tutorial]

Will Grant’s 10 commandments for effective UX Design

Will Grant
11 Mar 2019
8 min read
Somewhere along the journey of web maturity, we forgot something important: user experience is not art. It's the opposite of art. UX design should perform one primary function: serving users. Your UX design should look great, but not at the expense of how the product works. This is an extract from 101 UX Principles by Will Grant.

#1 Empathy and objectivity are the primary skills of a UX professional

Empathy and objectivity are the primary skills you must possess to be good at UX. This is not to undermine those who have spent many years studying and working in the UX field - their insights and experience are valuable - but study and practice alone are not enough. You need empathy to understand your users' needs, goals, and frustrations. You need objectivity to look at your product with fresh eyes, spot the flaws, and fix them. You can learn everything else.

Read More: Soft skills every data scientist should teach their child

#2 Don't use more than two typefaces

Too often, designers add too many typefaces to their products. You should aim to use two typefaces at most: one for headings and titles, and another for body copy that is intended to be read. Using too many typefaces creates too much visual 'noise' and increases the effort the user has to put into understanding the view in front of them. What's more, many custom-designed brand typefaces are made with punchy visual impact in mind, not readability. Use weights and italics within a font family for emphasis, rather than switching to another family. Typically, this means using your corporate brand font for headings, while leaving the controls, dialogs, and in-app copy (which need to be clearly legible) in a more proven, readable typeface.

#3 Make your buttons look like buttons

There are parts of your UI that can be interacted with, but your user doesn't know which parts and doesn't want to spend time learning. Flat design is bad - it's really terrible for usability. It's style over substance, and it forces your users to think more about every interaction they make with your product. Stop making it hard for your customers to find the buttons! By drawing on real-world examples, we can make UI buttons that are obvious and instantly familiar. By using real-life inspiration to create affordances, a new user can identify the controls right away. Create the visual cues your user needs to know instantly that they're looking at a button that can be tapped or clicked.

#4 Make 'blank slates' more than just empty views

The default behavior of many apps is to simply show an empty view where the content would be. For a new user, this is a pretty poor experience and a massive missed opportunity to give them some extra orientation and guidance. The blank slate is only shown once, before the user has generated any content. This makes it an ideal way of orienting people to the functions of your product while staying out of the way of more established users, who will hopefully 'know the ropes' a little better. For that reason, offering users a useful blank slate should be considered mandatory for UX designers.

#5 Hide 'advanced' settings from most users

There's no need to include every possible option on your menu when you can tuck advanced settings away. Group settings together, but separate the more obscure ones into their own 'power user' section.
These should also be grouped into sections if there are a lot of them (don't just throw all the advanced items in at random). Hiding advanced settings not only reduces the number of items a user has to mentally juggle, it also makes the app appear less daunting. By picking good defaults, you can ensure that the vast majority of users will never need to alter advanced settings. For the ones who do, an advanced menu section is a pretty well-used pattern.

#6 Use device-native input features where possible

If you're using a smartphone or tablet to dial a telephone number, the device's built-in 'phone' app offers a large numeric keypad and won't force you to use a fiddly QWERTY keyboard for numeric entry. Sadly, too often we ask users to use the wrong input features in our products. By leveraging what's already there, we can turn painful form-entry experiences into effortless interactions. No matter how good you are, you can't justify spending the time and money that other companies have spent on making usable system controls. Even if you get it right, it's still yet another UI for your user to learn, when there's a perfectly good one already built into their device. Use that one.

#7 Always give icons a text label

Icons are used and misused so relentlessly, across so many products, that you can't rely on any single icon to convey a definitive meaning. For example, if you're offering a 'history' feature, there's a wide range of pictograms to choose from: clocks, arrows, clocks within arrows, hourglasses, and parchment scrolls. Because this can confuse the user, you need to add a text label to make clear what the icon means in this context within your product. Often, a designer will decide to sacrifice the icon label on mobile responsive views. Don't do this. Mobile users still need the label for context. The icon and the label then work in tandem to provide context and instruction, and offer a recall to the user, whether they're new to your product or use it every day.

#8 Decide if an interaction should be obvious, easy or possible

To help decide where (and how prominently) a control or interaction should be placed, it's useful to classify interactions into one of three types:

- Obvious interactions: the core function of the app, for example, the shutter button on a camera app or the 'new event' button on a calendar app.
- Easy interactions: for example, switching between the front-facing and rear-facing lens in a camera app, or editing an existing event in a calendar app.
- Possible interactions: rarely used and often advanced features. For example, it is possible to adjust the white balance or auto-focus in a camera app, or to make an event recurring in a calendar app.

#9 Don't join the dark side

So-called 'dark patterns' are UI or UX patterns designed to trick the user into doing what the corporation or brand wants them to do. These are, in a way, exactly the same scams used by old-time fraudsters and rogue traders, now transplanted to the web and updated for the post-internet age.
Common examples include:

- Shopping carts that add extra 'add-on' items (like insurance, protection policies, and so on) to your cart before you check out, hoping that you won't remove them
- Search results that begin their list with the item they'd like to sell you instead of the best result
- Ads that don't look like ads, so you accidentally tap them
- Changing a user's settings: edit your private profile and, if you don't explicitly make it private again, the company will switch it back to public
- Unsubscribe 'confirmation screens', where you have to uncheck a ton of checkboxes just right to actually unsubscribe

In some fields, medicine for example, professionals have a code of conduct and ethics that forms the core of the work they do. Building software does not have such a code of conduct, but maybe it should.

#10 Test with real users

There's a myth that user testing is expensive and time-consuming, but the reality is that even very small test groups can provide fascinating insights. The nature of such tests is qualitative and doesn't lend itself well to quantitative analysis, so you can learn a lot from working with a sample set of fewer than 10 users.

Read More: A UX strategy is worthless without a solid usability test plan

You need to test with real users: not your colleagues, not your boss, and not your partner. You need to test with a diverse mix of people, from the widest section of society you can get access to. User testing is an essential step to understanding not just your product but also the users you're testing: what their goals really are, how they want to achieve them, and where your product delivers or falls short.

Summary

In the web development world, UX and UI professionals keep making UX mistakes, trying to reinvent the wheel, and forgetting to put themselves in the place of a user. Following these 10 commandments and applying them to your software design will create more usable and successful products that look great without hindering functionality.

Is your web design responsive?
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability

Writing SOLID JavaScript code with TypeScript

Packt
15 Sep 2015
12 min read
In this article by Remo H. Jansen, author of the book Learning TypeScript, we learn that in the early days of software development, developers used to write code in procedural programming languages. In procedural programming languages, programs follow a top-to-bottom approach and the logic is wrapped in functions. New styles of computer programming, like modular programming and structured programming, emerged when developers realized that procedural programs could not provide them with the desired level of abstraction, maintainability, and reusability. The development community created a series of recommended practices and design patterns to improve the level of abstraction and reusability of procedural programming languages, but some of these guidelines required a certain level of expertise. To make it easier to adhere to these guidelines, a new style of computer programming known as object-oriented programming (OOP) was created.

Developers quickly noticed some common OOP mistakes and came up with five rules that every OOP developer should follow to create a system that is easy to maintain and extend over time. These five rules are known as the SOLID principles. SOLID is an acronym introduced by Michael Feathers, which stands for the following principles:

- Single responsibility principle (SRP): A software component (function, class, or module) should focus on one unique task (have only one responsibility).
- Open/closed principle (OCP): Software entities should be designed with application growth (new code) in mind (be open to extension), but application growth should require as few changes to existing code as possible (be closed for modification).
- Liskov substitution principle (LSP): We should be able to replace a class in a program with another class, as long as both classes implement the same interface. After replacing the class, no other changes should be required, and the program should continue to work as it did originally.
- Interface segregation principle (ISP): We should split very large (general-purpose) interfaces into smaller, more specific ones (many client-specific interfaces), so that clients only have to know about the methods that are of interest to them.
- Dependency inversion principle (DIP): Entities should depend on abstractions (interfaces) as opposed to depending on concretions (classes).

JavaScript does not support interfaces, and most developers find its class support (prototypes) unintuitive. This may lead us to think that writing JavaScript code that adheres to the SOLID principles is not possible. However, with TypeScript we can write truly SOLID JavaScript. In this article we will learn how to write TypeScript code that adheres to the SOLID principles, so that our applications are easy to maintain and extend over time. Let's start by taking a look at interfaces and classes in TypeScript.

Interfaces

The feature we will miss the most when developing large-scale web applications with JavaScript is probably interfaces. Following the SOLID principles can help us improve the quality of our code, and writing good code is a must when working on a large project.
The problem is that if we attempt to follow the SOLID principles with JavaScript, we will soon realize that without interfaces we will never be able to write truly OOP code that adheres to the SOLID principles. Fortunately for us, TypeScript features interfaces. Wikipedia's definition of interfaces in OOP is:

"In object-oriented languages, the term interface is often used to define an abstract type that contains no data or code, but defines behaviors as method signatures."

Implementing an interface can be understood as signing a contract. The interface is a contract, and when we sign it (implement it) we must follow its rules. The interface rules are the signatures of the methods and properties, and we must implement them. Usually in OOP languages, a class can extend another class and implement one or more interfaces, while an interface can extend one or more interfaces but cannot extend a class. In TypeScript, interfaces don't strictly follow this behavior. The two main differences are that in TypeScript:

- An interface can extend another interface or class.
- An interface can define data and behavior, as opposed to only behavior.

An interface in TypeScript can be declared using the interface keyword:

    interface IPerson {
        greet(): void;
    }
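To illustrate the first difference, here is a minimal sketch (ours, not from the original article) of an interface extending a class; the Point and Point3D names are only illustrative:

    class Point {
        constructor(public x: number, public y: number) {}
    }

    // Point3D inherits the x and y members declared by the Point class
    // and adds one more property of its own.
    interface Point3D extends Point {
        z: number;
    }

    // Any object with x, y, and z satisfies the Point3D contract.
    var p: Point3D = { x: 0, y: 1, z: 2 };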
Classes

The support of classes is another essential feature needed to write code that adheres to the SOLID principles. We can create classes in JavaScript using prototypes, but it is not as trivial as it is in other OOP languages like Java or C#. The ECMAScript 6 (ES6) specification of JavaScript introduces native support for the class keyword, but unfortunately ES6 is not compatible with the many old browsers that are still around. However, TypeScript features classes and allows us to use them today, because we can indicate to the compiler which version of JavaScript we would like to target (including ES3, ES5, and ES6). Let's start by declaring a simple class:

    class Person implements IPerson {
        public name : string;
        public surname : string;
        public email : string;
        constructor(name : string, surname : string, email : string) {
            this.email = email;
            this.name = name;
            this.surname = surname;
        }
        greet() {
            alert("Hi!");
        }
    }

    var me : Person = new Person("Remo", "Jansen", "remo.jansen@wolksoftware.com");

We use classes to represent the type of an object or entity. A class is composed of a name, attributes, and methods. The class above is named Person and contains three attributes or properties (name, surname, and email) and two methods (constructor and greet). Class attributes are used to describe the object's characteristics, while class methods are used to describe its behavior. The class above uses the implements keyword to implement the IPerson interface; all the methods (greet) declared by the IPerson interface must be implemented by the Person class. A constructor is a special method used by the new keyword to create instances (also known as objects) of our class. We have declared a variable named me, which holds an instance of the class Person. The new keyword uses the Person class's constructor to return an object whose type is Person.

Single Responsibility Principle

This principle states that a software component (usually a class) should have only one responsibility. The Person class above represents a person, including all its characteristics (attributes) and behaviors (methods). Now, let's add some email validation logic to showcase the advantages of the SRP:

    class Person {
        public name : string;
        public surname : string;
        public email : string;
        constructor(name : string, surname : string, email : string) {
            this.surname = surname;
            this.name = name;
            if (this.validateEmail(email)) {
                this.email = email;
            } else {
                throw new Error("Invalid email!");
            }
        }
        validateEmail(email : string) {
            var re = /\S+@\S+\.\S+/;
            return re.test(email);
        }
        greet() {
            alert("Hi! I'm " + this.name + ". You can reach me at " + this.email);
        }
    }

When an object doesn't follow the SRP and knows too much (has too many properties) or does too much (has too many methods), we say that the object is a God object. The preceding Person class is a God object, because we have added a method named validateEmail that is not really related to the Person class's behavior. Deciding which attributes and methods should or should not be part of a class is a relatively subjective decision. If we spend some time analyzing our options, we should be able to find a way to improve the design of our classes. We can refactor the Person class by declaring an Email class, which is responsible for e-mail validation, and using it as an attribute in the Person class:

    class Email {
        public email : string;
        constructor(email : string) {
            if (this.validateEmail(email)) {
                this.email = email;
            } else {
                throw new Error("Invalid email!");
            }
        }
        validateEmail(email : string) {
            var re = /\S+@\S+\.\S+/;
            return re.test(email);
        }
    }

Now that we have an Email class, we can remove the responsibility of validating e-mails from the Person class and update its email attribute to use the type Email instead of string:

    class Person {
        public name : string;
        public surname : string;
        public email : Email;
        constructor(name : string, surname : string, email : Email) {
            this.email = email;
            this.name = name;
            this.surname = surname;
        }
        greet() {
            alert("Hi!");
        }
    }

Making sure that a class has a single responsibility makes it easier to see what it does and how we can extend or improve it. We can further improve our Person and Email classes by increasing the level of abstraction of our classes. For example, when we use the Email class we don't really need to be aware of the existence of the validateEmail method, so this method could be private or internal (invisible from outside the Email class). As a result, the Email class would be much simpler to understand. When we increase the level of abstraction of an object, we can say that we are encapsulating that object. Encapsulation is also known as information hiding. For example, the Email class allows us to use e-mails without having to worry about validation, because the class deals with it for us. We can make this clearer by using access modifiers (public or private) to flag as private all the class attributes and methods that we want to abstract away from users of the Email class:

    class Email {
        private email : string;
        constructor(email : string) {
            if (this.validateEmail(email)) {
                this.email = email;
            } else {
                throw new Error("Invalid email!");
            }
        }
        private validateEmail(email : string) {
            var re = /\S+@\S+\.\S+/;
            return re.test(email);
        }
        get():string {
            return this.email;
        }
    }

We can then simply use the Email class without explicitly performing any kind of validation:

    var email = new Email("remo.jansen@wolksoftware.com");
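Open/Closed Principle

This extract does not include the book's own example for the open/closed principle listed earlier, but the idea can be sketched briefly; the following snippet is ours, and the DiscountPolicy names are only illustrative. Instead of editing a class whenever a new case appears, we depend on an abstraction and add new implementations of it:

    interface DiscountPolicy {
        apply(price : number) : number;
    }

    class NoDiscount implements DiscountPolicy {
        apply(price : number) { return price; }
    }

    class SeasonalDiscount implements DiscountPolicy {
        apply(price : number) { return price * 0.8; }
    }

    // Checkout is closed for modification: supporting a new discount means
    // writing a new DiscountPolicy implementation, not editing this class.
    class Checkout {
        constructor(private policy : DiscountPolicy) {}
        total(price : number) { return this.policy.apply(price); }
    }

    var checkout = new Checkout(new SeasonalDiscount());
    checkout.total(100); // returns 80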
Liskov Substitution Principle

The Liskov Substitution Principle (LSP) states: "Subtypes must be substitutable for their base types." Let's take a look at an example to understand what this means. We are going to declare a class whose responsibility is to persist some objects into some kind of storage. We will start by declaring the following interface:

    interface IPersistanceService {
        save(entity : any) : number;
    }

After declaring the IPersistanceService interface, we can implement it. We will use cookies as the storage for the application's data:

    class CookiePersitanceService implements IPersistanceService {
        save(entity : any) : number {
            var id = Math.floor((Math.random() * 100) + 1);
            // Cookie persistance logic...
            return id;
        }
    }

We will continue by declaring a class named FavouritesController, which has a dependency on the IPersistanceService interface:

    class FavouritesController {
        private _persistanceService : IPersistanceService;
        constructor(persistanceService : IPersistanceService) {
            this._persistanceService = persistanceService;
        }
        public saveAsFavourite(articleId : number) {
            return this._persistanceService.save(articleId);
        }
    }

We can finally create an instance of FavouritesController and pass an instance of CookiePersitanceService via its constructor:

    var favController = new FavouritesController(new CookiePersitanceService());

The LSP allows us to replace a dependency with another implementation, as long as both implementations are based on the same base type. For example, we might decide to stop using cookies as storage and use the HTML5 local storage API instead, without having to worry about the FavouritesController code being affected by this change:

    class LocalStoragePersitanceService implements IPersistanceService {
        save(entity : any) : number {
            var id = Math.floor((Math.random() * 100) + 1);
            // Local storage persistance logic...
            return id;
        }
    }

We can then swap it in without having to make any changes to the FavouritesController class:

    var favController = new FavouritesController(new LocalStoragePersitanceService());

Interface Segregation Principle

In the previous example, our interface was IPersistanceService, and it was implemented by the classes LocalStoragePersitanceService and CookiePersitanceService. The interface was consumed by the class FavouritesController, so we say that this class is a client of the IPersistanceService API. The Interface Segregation Principle (ISP) states that no client should be forced to depend on methods it does not use. To adhere to the ISP, we need to keep in mind that when we declare the API of our application's components (how two or more software components cooperate and exchange information with each other), declaring many client-specific interfaces is better than declaring one general-purpose interface. Let's take a look at an example. If we are designing an API to control all the elements in a vehicle (engine, radio, heating, navigation, lights, and so on), we could have one general-purpose interface, which allows controlling every single element of the vehicle:

    interface IVehicle {
        getSpeed() : number;
        getVehicleType() : string;
        isTaxPayed() : boolean;
        isLightsOn() : boolean;
        isLightsOff() : boolean;
        startEngine() : void;
        acelerate() : number;
        stopEngine() : void;
        startRadio() : void;
        playCd() : void;
        stopRadio() : void;
    }

If a class has a dependency on (is a client of) the IVehicle interface but only wants to use the radio methods, we would be facing a violation of the ISP because, as we have already learned, no client should be forced to depend on methods it does not use.
The solution is to split the IVehicle interface into many client-specific interfaces, so our class can adhere to the ISP by depending only on IRadio:

    interface IVehicle {
        getSpeed() : number;
        getVehicleType() : string;
        isTaxPayed() : boolean;
    }

    interface ILights {
        isLightsOn() : boolean;
        isLightsOff() : boolean;
    }

    interface IRadio {
        startRadio() : void;
        playCd() : void;
        stopRadio() : void;
    }

    interface IEngine {
        startEngine() : void;
        acelerate() : number;
        stopEngine() : void;
    }

Dependency Inversion Principle

The Dependency Inversion (DI) principle states that we should "depend upon abstractions; do not depend upon concretions". In the previous section, we implemented FavouritesController, and we were able to replace one implementation of IPersistanceService with another without having to make any additional change to FavouritesController. This was possible because we followed the DI principle: FavouritesController has a dependency on the IPersistanceService interface (an abstraction) rather than on the LocalStoragePersitanceService or CookiePersitanceService classes (concretions). The DI principle also allows us to use an inversion of control (IoC) container. An IoC container is a tool used to reduce the coupling between the components of an application. Refer to Inversion of Control Containers and the Dependency Injection pattern by Martin Fowler at http://martinfowler.com/articles/injection.html if you want to learn more about IoC.
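To make the idea of an IoC container a little more concrete, here is a toy sketch (ours, not Fowler's or the book's, and assuming an ES6 target for Map) in which a container is just a registry of factories, and clients ask for abstractions by key:

    class Container {
        private factories = new Map<string, () => any>();

        register(key : string, factory : () => any) : void {
            this.factories.set(key, factory);
        }

        resolve<T>(key : string) : T {
            var factory = this.factories.get(key);
            if (!factory) {
                throw new Error("Nothing registered for key: " + key);
            }
            return factory() as T;
        }
    }

    // All the wiring lives in one place; FavouritesController itself
    // never names a concrete persistance class.
    var container = new Container();
    container.register("persistance", () => new LocalStoragePersitanceService());
    var favController = new FavouritesController(
        container.resolve<IPersistanceService>("persistance"));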
Summary

In this article, we looked at classes, interfaces, and the SOLID principles.

Resources for Article:

Further resources on this subject:
- Welcome to JavaScript in the full stack [article]
- Introduction to Spring Web Application in No Time [article]
- Introduction to TypeScript [article]

How to build and deploy Microservices using Payara Micro

Gebin George
28 Mar 2018
9 min read
Payara Micro offers a new way to run Java EE or microservice applications. It is based on the Web profile of GlassFish and bundles a few additional APIs. The distribution is designed with modern containerized environments in mind. Payara Micro is available to download as a standalone executable JAR, as well as a Docker image. It's an open source, MicroProfile-compatible runtime. Today, we will learn to use Payara Micro to build and deploy microservices. Here's a list of APIs that are supported in Payara Micro:

- Servlets, JSTL, EL, and JSPs
- WebSockets
- JSF
- JAX-RS
- EJB Lite
- JTA
- JPA
- Bean Validation
- CDI
- Interceptors
- JBatch
- Concurrency
- JCache

We will explore how to build our services using Payara Micro in the next section.

Building services with Payara Micro

Let's start building parts of our Issue Management System (IMS), which is going to be a one-stop destination for collaboration among teams. As the name implies, this system will be used for managing issues that are raised as tickets and get assigned to users for resolution. To begin the project, we will identify our microservice candidates based on the business model of IMS. Here, let's define three functional services, which will be hosted in their own independent Git repositories:

- ims-micro-users
- ims-micro-tasks
- ims-micro-notify

You might wonder, why these three, and why separate repositories? We could create much more fine-grained services, and perhaps it wouldn't be wrong to do so. The answer lies in understanding the following points:

- Isolating what varies: We need to be able to independently develop and deploy each unit. Changes to one business capability or domain shouldn't require changes in other services more often than desired.
- Organisation or team structure: If you define teams by business capability, then they can work independently of others and release features with greater agility. The tasks team should be able to evolve independently of the teams handling users or notifications. The functional boundaries should allow independent version and release cycle management.
- Transactional boundaries for consistency: Distributed transactions are not easy, so creating services that are too fine-grained for related features leads to more complexity than desired. You would need to become familiar with concepts like eventual consistency, and these are not easy to achieve in practice.
- Source repository per service: Setting up a single repository that hosts all the services is ideal when the same team works on all of them and the project is relatively small. But we are building our fictional IMS, which is a large, complex system with many moving parts. Separate teams would get tightly coupled by sharing a repository. Moreover, versioning and tagging of releases would be yet another problem to solve.

The projects are created as standard Java EE projects, packaged as Skinny WARs, that will be deployed using the Payara Micro server. Payara Micro allows us to delay the decision between using a Fat JAR or a Skinny WAR, which gives us flexibility in picking the deployment choice at a later stage.
As Maven is a widely adopted build tool among developers, we will use it to create our example projects, using the following commands:

    mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-users -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

    mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-tasks -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

    mvn archetype:generate -DgroupId=org.jee8ng -DartifactId=ims-micro-notify -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

Once the structure is generated, update the properties and dependencies sections of pom.xml with the following contents, for all three projects:

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <failOnMissingWebXml>false</failOnMissingWebXml>
    </properties>

    <dependencies>
        <dependency>
            <groupId>javax</groupId>
            <artifactId>javaee-api</artifactId>
            <version>8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

Next, create a beans.xml file under the WEB-INF folder for all three projects:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
           http://xmlns.jcp.org/xml/ns/javaee/beans_2_0.xsd"
           bean-discovery-mode="all">
    </beans>

You can delete the index.jsp and web.xml files, as we won't be needing them. The same project structure used for ims-micro-users is used for ims-micro-tasks and ims-micro-notify. The package names for the users, tasks, and notify services will be as follows:

- org.jee8ng.ims.users (inside ims-micro-users)
- org.jee8ng.ims.tasks (inside ims-micro-tasks)
- org.jee8ng.ims.notify (inside ims-micro-notify)

Each of the above will in turn have sub-packages called boundary, control, and entity. The structure follows the Boundary-Control-Entity (BCE), or Entity-Control-Boundary (ECB), pattern. The JaxrsActivator shown below is required to enable the JAX-RS API and thus needs to be placed in each of the projects:

    import javax.ws.rs.ApplicationPath;
    import javax.ws.rs.core.Application;

    @ApplicationPath("resources")
    public class JaxrsActivator extends Application {}

All three projects will have REST endpoints that we can invoke over HTTP. When doing RESTful API design, a popular convention is to use plural names for resources, especially if the resource could represent a collection. For example:

- /users
- /tasks

The resource class names in the projects use the plural form, as that's consistent with the resource URL naming used. This avoids confusion such as a resource URL being called a users resource while the class is named UserResource. Given that this is an opinionated approach, feel free to use singular class names if desired. Here's the relevant code for the ims-micro-users, ims-micro-tasks, and ims-micro-notify projects respectively.
Under ims-micro-users, define the UsersResource endpoint:

    package org.jee8ng.ims.users.boundary;

    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    @Path("users")
    public class UsersResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response get() {
            return Response.ok("user works").build();
        }
    }

Under ims-micro-tasks, define the TasksResource endpoint:

    package org.jee8ng.ims.tasks.boundary;

    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    @Path("tasks")
    public class TasksResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response get() {
            return Response.ok("task works").build();
        }
    }

Under ims-micro-notify, define the NotificationsResource endpoint:

    package org.jee8ng.ims.notify.boundary;

    import javax.ws.rs.*;
    import javax.ws.rs.core.*;

    @Path("notifications")
    public class NotificationsResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response get() {
            return Response.ok("notification works").build();
        }
    }

Once you build all three projects using mvn clean install, you will get your Skinny WAR files generated in the target directory, and these can be deployed on the Payara Micro server.

Running services with Payara Micro

Download the Payara Micro server, if you haven't already, from this link: https://www.payara.fish/downloads. The micro server will have the name payara-micro-xxx.jar, where xxx is the version number, which might be different when you download the file. Here's how you can start Payara Micro with our services deployed locally. When doing so, we need to ensure that the instances start on different ports, to avoid any port conflicts:

    > java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --port 8081
    > java -jar payara-micro-xxx.jar --deploy ims-micro-tasks/target/ims-micro-tasks.war --port 8082
    > java -jar payara-micro-xxx.jar --deploy ims-micro-notify/target/ims-micro-notify.war --port 8083

This will start three instances of Payara Micro running on the specified ports, which makes our applications available under these URLs:

- http://localhost:8081/ims-micro-users/resources/users/
- http://localhost:8082/ims-micro-tasks/resources/tasks/
- http://localhost:8083/ims-micro-notify/resources/notifications/

Payara Micro can be started on a non-default port by using the --port parameter, as we did earlier. This is useful when running multiple instances on the same machine. Another option is to use the --autoBindHttp parameter, which will attempt to bind on 8080 as the default port and, if that port is unavailable, will try the next port up, repeating until it finds an available one.

Uber JAR option: there's one more feature that Payara Micro provides. We can generate an Uber JAR as well, which is the Fat JAR approach that we learnt about in the Fat JAR section. To package our ims-micro-users project as an Uber JAR, we can run the following command:

    java -jar payara-micro-xxx.jar --deploy ims-micro-users/target/ims-micro-users.war --outputUberJar users.jar

This will generate the users.jar file in the directory where you run this command. The size of this JAR will naturally be larger than our WAR file, since it also bundles the Payara Micro runtime. Here's how you can start the application using the generated JAR:

    java -jar users.jar

The server parameters that we used earlier can be passed to this runnable JAR file too. Apart from the two choices we have seen for running our microservice projects, there's a third option as well.
Payara Micro provides an API-based approach, which can be used to programmatically start the embedded server. We will expand upon these three services as we progress further into the realm of cloud-based Java EE. We saw how to leverage the power of Payara Micro to run Java EE or microservice applications. You read an excerpt from the book Java EE 8 and Angular, written by Prashant Padmanabhan. The book helps you build high-performing enterprise applications using Java EE powered by Angular at the frontend.

How to integrate a Medium editor in Angular 8

Guest Contributor
05 Sep 2019
5 min read
In the world of text editing, a new era of WYSIWYG (What You See Is What You Get) has arrived. We all know how important styling and formatting have become to a website, but most of the time it is tough to pick a simple, easy-to-use, and powerful editor. The good days are coming back with the new Medium Editor! Medium Editor is an independent JavaScript library that provides a floating toolbar which pops up when you select a piece of content on your page, inspired by the elegance of Medium.com. You can turn every field, from a message on your contact form to a whole article on the back-end, into professionally styled text containing quote blocks, headings, and hyperlinks, or style just a few selected words. You can also incorporate the editor into Angular 8 for ease of updating and editing your content. Angular 8 has released its latest features in beta 6, with attractive new functionality for testing your software and fixing bugs. One of them is Bazel, the open-source version of Google's internal build tool Blaze, which is capable of performing incremental builds and tests. Let us check how you can integrate a Medium editor into the Angular 8 platform.

Also Read: Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Steps to create an editor using Angular 8

Step 1: First things first, create a project in Angular (for example, with ng new). You can also make use of Bootstrap to make it look good, by adding its CDN links in index.html. The CLI will generate an Angular starter application once it has finished installing all the dependencies.

Step 2: Install the medium-editor npm package (npm install medium-editor --save), and then include the library's CSS and JS files in the angular.json file.

Step 3: Create a component with a name of your choice; here we will name it create.

Step 4: Open the newly created component's HTML and add a div, giving it a template reference name. Wrap it in a few Bootstrap classes just to give it some basic styling.

Step 5: In your component class, declare an editor variable and use @ViewChild to access the template reference.

Step 6: Then, we make use of an Angular lifecycle hook, ngAfterViewInit, to instantiate the editor. At this point you may get an error such as "MediumEditor is not defined", so we have to declare it at the top of the file. After making these changes, you will have a little Medium editor to use for yourself. Write anything, then select the content you have written to see the magic.

Step 7: After this, you may require some more options in your editor toolbar. To get them, pass a configuration object to the MediumEditor constructor. With these changes, you will be able to see a load of available options.

Step 8: Now that you have an editor, you can easily get the data from it. If someone writes a post, you need the HTML of that post. Partition the screen into two parts: one half holds the editor, and the other half shows a preview of the post. In the second half of the screen, you bind the editor's content to the preview's innerHTML, as sketched below.
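Since the original snippets are not reproduced here, the following is a minimal sketch of the component described in steps 3 to 8. The selector, template reference name, toolbar buttons, and Bootstrap classes are illustrative assumptions, and it assumes the medium-editor package's CSS and JS files have been wired up in angular.json as described in step 2:

    import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';

    // The medium-editor script is loaded globally via angular.json (step 2),
    // so we declare the constructor to satisfy the compiler (step 6).
    declare var MediumEditor: any;

    @Component({
      selector: 'app-create',
      template: `
        <div class="row">
          <!-- One half holds the editor (step 8)... -->
          <div class="col-6">
            <div #editable class="editable"></div>
          </div>
          <!-- ...and the other half previews the generated HTML. -->
          <div class="col-6" [innerHTML]="content"></div>
        </div>
      `,
    })
    export class CreateComponent implements AfterViewInit {
      @ViewChild('editable', { static: false }) editable: ElementRef;
      editor: any;
      content = '';

      ngAfterViewInit() {
        // Pass a configuration object to customise the toolbar (step 7).
        this.editor = new MediumEditor(this.editable.nativeElement, {
          toolbar: { buttons: ['bold', 'italic', 'underline', 'h2', 'quote', 'anchor'] },
        });
        // Mirror the editor's HTML into the preview pane (step 8).
        this.editor.subscribe('editableInput', () => {
          this.content = this.editable.nativeElement.innerHTML;
        });
      }
    }

Note that Angular sanitizes anything bound to [innerHTML], which is usually what you want for user-generated content.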
Wrap Up

Every system has its pros and cons, and Angular is no exception. Angular offers clean code development along with a high-performance framework that manages routing, provides seamless updates using the Command Line Interface, and retrieves the state of location services. You can also debug templates in Angular 8, and it supports multiple applications in one domain. On the downside, Angular can be confusing for newcomers, as there is no single accurate manual with complete documentation of the framework; it also has a smaller developer community, and the scope for debugging routing is limited. Still, Angular 8 is user-friendly across all versions of the major operating systems. So, here we come to the end of the article. We hope you have gained some insight into how to integrate the latest Medium editor into Angular 8. Do give it a try! Till then, keep learning!

Author Bio

Dave Jarvis is working as a Business Development Executive at eTatvaSoft.com, an enterprise-level mobile & web application development company. He aims to sharpen his analytical skills, deepen his data understanding, and broaden his business knowledge in these years of his career. Click here to find more information about the company. Follow him on Twitter.

Other interesting news in Web development

- Google Chrome 76 now supports native lazy-loading
- Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
- #Reactgate forces React leaders to confront the community's toxic culture head on

Applying Spring Security using JSON Web Token (JWT)

Vijin Boricha
10 Apr 2018
9 min read
Today, we will learn about Spring Security and how it can be applied in various forms using powerful libraries like JSON Web Token (JWT). Spring Security is a powerful authentication and authorization framework which helps us provide a secure application. By using Spring Security, we can keep all of our REST APIs secured and accessible only by authenticated and authorized calls.

Authentication and authorization

Let's look at an example to explain this. Assume you have a library with many books. Authentication provides a key to enter the library; however, authorization gives you permission to take a book. Without a key, you can't even enter the library, and even with a key to the library, you will be allowed to take only a few books.

JSON Web Token (JWT)

Spring Security can be applied in many forms, including XML configurations and powerful libraries such as JSON Web Token. As most companies use JWT in their security, we will focus more on JWT-based security than on simple Spring Security, which can be configured in XML. JWT tokens are URL-safe and web browser-compatible, especially for Single Sign-On (SSO) contexts. A JWT has three parts:

- Header
- Payload
- Signature

The header part decides which algorithm should be used to generate the token. While authenticating, the client has to save the JWT, which is returned by the server. Unlike traditional session creation approaches, this process doesn't need to store any cookies on the client side. JWT authentication is stateless, as the client state is never saved on the server.

JWT dependency

To use JWT in our application, we need a Maven dependency. The following dependency should be added to the pom.xml file. You can get the Maven dependency from https://mvnrepository.com/artifact/javax.xml.bind. We have used version 2.3.0 of the dependency in our application:

    <dependency>
        <groupId>javax.xml.bind</groupId>
        <artifactId>jaxb-api</artifactId>
        <version>2.3.0</version>
    </dependency>

Note: As Java 9 doesn't include DatatypeConverter in its default bundle, we need to add the preceding configuration to work with DatatypeConverter. We will cover DatatypeConverter in the following section.

Creating a JSON Web Token

To create a token, we have added an abstract method called createToken to our SecurityService interface. This interface tells the implementing class that it has to provide a complete implementation of createToken. In the createToken method, we use only the subject and expiry time, as these two options are what matter when creating a token. First, we create the abstract method in the SecurityService interface; the concrete class (whoever implements the SecurityService interface) has to implement the method:

    public interface SecurityService {
        String createToken(String subject, long ttlMillis);
        // other methods
    }

In the preceding code, we defined the method for token creation in the interface. SecurityServiceImpl is the concrete class that implements the abstract method of the SecurityService interface by applying the business logic.
The following code shows how the JWT is created using the subject and expiry time:

    private static final String secretKey = "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX";

    @Override
    public String createToken(String subject, long ttlMillis) {
        if (ttlMillis <= 0) {
            throw new RuntimeException("Expiry time must be greater than Zero :[" + ttlMillis + "] ");
        }
        // The JWT signature algorithm we will be using to sign the token
        SignatureAlgorithm signatureAlgorithm = SignatureAlgorithm.HS256;
        byte[] apiKeySecretBytes = DatatypeConverter.parseBase64Binary(secretKey);
        Key signingKey = new SecretKeySpec(apiKeySecretBytes, signatureAlgorithm.getJcaName());
        JwtBuilder builder = Jwts.builder()
            .setSubject(subject)
            .signWith(signatureAlgorithm, signingKey);
        long nowMillis = System.currentTimeMillis();
        builder.setExpiration(new Date(nowMillis + ttlMillis));
        return builder.compact();
    }

The preceding code creates the token for the subject. Here, we have hardcoded the secret key "4C8kum4LxyKWYLM78sKdXrzbBjDCFyfX" to simplify the token creation process. If needed, we can keep the secret key in the properties file to avoid hardcoding it in the Java code. First, we verify whether the time is greater than zero; if not, we throw an exception right away. We are using the HS256 algorithm (HMAC with SHA-256), as it is used in most applications.

Note: Secure Hash Algorithm (SHA) is a cryptographic hash function. A cryptographic hash acts as a text-form fingerprint of a data file. The SHA-256 algorithm generates an almost-unique, fixed-size 256-bit hash. SHA-256 is one of the more reliable hash functions.

We convert the string key to a byte array and then pass it to the Java class SecretKeySpec to get a signingKey, which will be used in the token builder. Also, while creating the signing key, we use the JCA name of our signature algorithm.

Note: Java Cryptography Architecture (JCA) was introduced by Java to support modern cryptography techniques.

We use the JwtBuilder class to create the token and set its expiration time. The following code defines the token creation and the expiry time setting:

    JwtBuilder builder = Jwts.builder()
        .setSubject(subject)
        .signWith(signatureAlgorithm, signingKey);
    long nowMillis = System.currentTimeMillis();
    builder.setExpiration(new Date(nowMillis + ttlMillis));

We have to pass the time in milliseconds when calling this method, as setExpiration takes only milliseconds. Finally, we have to call the createToken method in our HomeController. Before calling the method, we have to autowire the SecurityService as follows:

    @Autowired
    SecurityService securityService;

The createToken call is coded as follows. We take the subject as the parameter, and to simplify the process we have hardcoded the expiry time as 2 * 1000 * 60 (two minutes). HomeController.java:

    @Autowired
    SecurityService securityService;

    @ResponseBody
    @RequestMapping("/security/generate/token")
    public Map<String, Object> generateToken(@RequestParam(value = "subject") String subject) {
        String token = securityService.createToken(subject, (2 * 1000 * 60));
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("result", token);
        return map;
    }

Generating a token

We can test the token by calling the API in a browser or any REST client. A sample API call for creating a token is as follows:

    http://localhost:8080/security/generate/token?subject=one

Here we have used one as the subject. We can see the token in the following result; this is how a token will be generated for any subject we pass to the API:

    {
        result: "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiIG4-R2bRmBOsjomujP0MxZqdawrB8TO3P4"
    }

Note: A JWT is a string with three parts, each separated by a dot (.). Each section is base-64 encoded. The first section is the header, which gives a clue about the algorithm used to sign the JWT. The second section is the body, and the final section is the signature.
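Independently of Spring, the three-part structure is easy to see by splitting a token on the dots and base64-decoding the first two sections. The following TypeScript sketch (ours, not from the book, and assuming a Node.js runtime for Buffer) inspects a token; note that decoding is not verification, which requires the secret key:

    // Decode (not verify!) a JWT to inspect its header and payload.
    function decodeJwt(token: string): { header: unknown; payload: unknown } {
        const [header, payload] = token.split('.');
        // JWT uses base64url encoding: restore the URL-safe characters first.
        const decode = (part: string) =>
            JSON.parse(Buffer.from(part.replace(/-/g, '+').replace(/_/g, '/'), 'base64').toString('utf8'));
        return { header: decode(header), payload: decode(payload) };
    }

    // For the sample token above, this prints the header {"alg":"HS256"}
    // and the payload {"sub":"one","exp":1509936681}.
    console.log(decodeJwt('eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiIG4-R2bRmBOsjomujP0MxZqdawrB8TO3P4'));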
This token will be used for user authentication-like purposes. Sample API for creating a token is as follows: http://localhost:8080/security/generate/token?subject=one Here we have used one as a subject. We can see the token in the following result. This is how the token will be generated for all the subjects we pass to the API: { result: "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiIG4- R2bRmBOsjomujP0MxZqdawrB8TO3P4" } Note: JWT is a string that has three parts, each separated by a dot (.). Each section is base-64 encoded. The first section is the header, which gives a clue about the algorithm used to sign the JWT. The second section is the body, and the final section is the signature. Getting a subject from a Jason Web Token So far, we have created a JWT token. Here, we are going to decode the token and get the subject from it. In a future section, we will talk about how to decode and get the subject from the token. As usual, we have to define the method to get the subject. We will define the getSubject method in SecurityService. Here, we will create an abstract method called getSubject in the SecurityService interface. Later, we will implement this method in our concrete class: String getSubject(String token); In our concrete class, we will implement the getSubject method and add our code in the SecurityServiceImpl class. We can use the following code to get the subject from the token: @Override public String getSubject(String token) { Claims claims = Jwts.parser() .setSigningKey(DatatypeConverter.parseBase64Binary(secretKey)) .parseClaimsJws(token).getBody(); return claims.getSubject(); } In the preceding method, we use the Jwts.parser to get the claims. We set a signing key by converting the secret key to binary and then passing it to a parser. Once we get the Claims, we can simply get the subject by calling getSubject. Finally, we can call the method in our controller and pass the generated token to get the subject. You can check the following code, where the controller is calling the getSubject method and returning the subject in the HomeController.java file: @ResponseBody @RequestMapping("/security/get/subject") public Map<String, Object> getSubject(@RequestParam(value="token") String token){ String subject = securityService.getSubject(token); Map<String, Object> map = new LinkedHashMap<>(); map.put("result", subject); return map; } Getting a subject from a token Previously, we created the code to get the token. Here we will test the method we created previously by calling the get subject API. By calling the REST API, we will get the subject that we passed earlier. Sample API: http://localhost:8080/security/get/subject?token=eyJhbGciOiJIUzI1NiJ9.eyJzd WIiOiJvbmUiLCJleHAiOjE1MDk5MzY2ODF9.GknKcywiI-G4- R2bRmBOsjomujP0MxZqdawrB8TO3P4 Since we used one as the subject when creating the token by calling the generateToken method, we will get "one" in the getSubject method: { result: "one" } Note: Usually, we attach the token in the headers; however, to avoid complexity, we have provided the result. Also, we have passed the token as a parameter to get the subject. You may not need to do it the same way in a real application. This is only for demo purposes. This article is an excerpt from the book Building RESTful Web Services with Spring 5 - Second Edition, written by Raja CSP Raman. This book involves techniques to deal with security in Spring and shows how to implement unit test and integration test strategies. 
You may also like How to develop RESTful web services in Spring, another tutorial from this book. Check out other posts on Spring Security:

- Spring Security 3: Tips and Tricks
- Opening up to OpenID with Spring Security
- Migration to Spring Security 3

Adding Media to Our Site

Packt
21 Jun 2016
19 min read
In this article by Neeraj Kumar et al., authors of the book Drupal 8 Development Beginner's Guide - Second Edition, we see that a text-only site is not going to hold the interest of visitors; a site needs some pizzazz and some spice! One way to add pizzazz to your site is by adding multimedia content, such as images, video, audio, and so on. But we don't just want to add a few images here and there; we want an immersive and compelling multimedia experience that is easy to manage, configure, and extend. The File entity (https://drupal.org/project/file_entity) module for Drupal 8 will enable us to manage files very easily. In this article, we will discover how to integrate the File entity module to add images to our d8dev site, and will explore compelling ways to present images to users. This includes a look at integrating a lightbox-type UI element for displaying File-entity-managed images, and learning how we can create custom image styles through the UI and through code. The following topics will be covered in this article:

- The File entity module for Drupal 8
- Adding a Recipe image field to your content types
- Code example: image styles for Drupal 8
- Displaying recipe images in a lightbox popup
- Working with Drupal issue queues

(For more resources related to this topic, see here.)

Introduction to the File entity module

As per the module page at https://www.drupal.org/project/file_entity:

"File entity provides interfaces for managing files. It also extends the core file entity, allowing files to be fieldable, grouped into types, viewed (using display modes) and formatted using field formatters. File entity integrates with a number of modules, exposing files to Views, Entity API, Token and more."

In our case, we need this module to easily edit image properties such as Title text and Alt text; these properties will be used as captions in the Colorbox popup.

Working with dev versions of modules

There are times when you come across a module that introduces some major new features and is fairly stable, but not quite ready for use on a live/production website, and is therefore available only as a dev version. This is a perfect opportunity to provide a valuable contribution to the Drupal community. Just by installing and using a dev version of a module (in your local development environment, of course), you are providing valuable testing for the module maintainers. Of course, you should enter an issue in the project's issue queue if you discover any bugs or would like to request any additional features. Also, using a dev version of a module presents you with the opportunity to take on some custom Drupal development. However, it is important to remember that a module is released as a dev version for a reason, and it is most likely not stable enough to be deployed on a public-facing site. Our use of the File entity module in this article is a good example of working with the dev version of a module. One thing to note: Drush will download official and dev module releases, but at this point in time there is no official port of the File entity module for Drupal 8, so we will use the unofficial one, which lives on GitHub (https://github.com/drupal-media/file_entity). In the next step, we will download the dev release from GitHub.
Time for action – installing a dev version of the File entity module

In Drupal, we usually use Drush to download and enable any module or theme, but there is no official port of the File entity module for Drupal 8 yet, so we will use the unofficial one, which lives on GitHub at https://github.com/drupal-media/file_entity:

1. Open the Terminal (Mac OS X) or Command Prompt (Windows) application, and go to the root directory of your d8dev site.
2. Go inside the modules folder and download the File entity module from GitHub, using the git command:

    $ git clone https://github.com/drupal-media/file_entity

Another way is to download a .zip file from https://github.com/drupal-media/file_entity and extract it in the modules folder.

3. Next, on the Extend page (admin/modules), enable the File entity module.

What just happened? We enabled the File entity module, and learned how to download and install a module from GitHub.

A new recipe for our site

In this article, we are going to create a new recipe: Thai Basil Chicken. If you would like more real content to use as an example, feel free to try the recipe out!

- Name: Thai Basil Chicken
- Description: A spicy, flavorful version of one of my favorite Thai dishes
- RecipeYield: Four servings
- PrepTime: 25 minutes
- CookTime: 20 minutes
- Ingredients: One pound boneless chicken breasts; two tablespoons of olive oil; four garlic cloves, minced; three tablespoons of soy sauce; two tablespoons of fish sauce; two large sweet onions, sliced; five cloves of garlic; one yellow bell pepper; one green bell pepper; four to eight Thai peppers (depending on the level of hotness you want); one-third cup of dark brown sugar dissolved in one cup of hot water; one cup of fresh basil leaves; two cups of Jasmine rice
- Instructions: Prepare the Jasmine rice according to the directions. Heat the olive oil in a large frying pan over medium heat for two minutes. Add the chicken to the pan and then pour on the soy sauce. Cook the chicken until there is no visible pinkness, approximately 8 to 10 minutes. Reduce the heat to medium low. Add the garlic and fish sauce, and simmer for 3 minutes. Next, add the Thai chilies, onion, and bell pepper and stir to combine. Simmer for 2 minutes. Add the brown sugar and water mixture. Stir to mix, and then cover. Simmer for 5 minutes. Uncover, add basil, and stir to combine. Serve over rice.

Time for action – adding a Recipe images field to our Recipe content type

We will use the Manage fields administrative page to add a Media field to our d8dev Recipe content type:

1. Open up the d8dev site in your favorite browser, click on the Structure link in the Admin toolbar, and then click on the Content types link.
2. Next, on the Content types administrative page, click on the Manage fields link for your Recipe content type.
3. Now, on the Manage fields administrative page, click on the Add field link.
4. On the next screen, select Image from the Add a new field dropdown and set the Label to Recipe images. Click on the Save field settings button.
5. Next, on the Field settings page, select Unlimited as the allowed number of values. Click on the Save field settings button.
6. On the next screen, leave all settings as they are and click on the Save settings button.
7. Next, on the Manage form display page, select the Editable file widget for the Recipe images field and click on the Save button.
8. Now, on the Manage display page, for the Recipe images field, select Hidden as the label. Click on the settings icon, select Medium (220x220) as the image style, and click on the Update button.
9. At the bottom, click on the Save button.

Let's add some Recipe images to a recipe:

1. Click on the Content link in the menu bar, and then click on Add content and Recipe.
2. On the next screen, fill in the title as Thai Basil Chicken, and fill in the other fields with the recipe details given above.
3. Now, scroll down to the new Recipe images field that you have added. Click on the Add a new file button, or drag and drop the images that you want to upload. Then click on the Save and Publish button.

Reload your Thai Basil Chicken recipe page, and you will see that all the images are stacked on top of each other. So, we will add the following CSS just under the style for field--name-field-recipe-images and field--type-recipe-images in the /modules/d8dev/styles/d8dev.css file, to lay out the Recipe images in more of a grid:

    .node .field--type-recipe-images {
        float: none !important;
    }
    .field--name-field-recipe-images .field__item {
        display: inline-flex;
        padding: 6px;
    }

Now we will load this d8dev.css file to apply the grid style. In Drupal 8, loading a CSS file follows a process:

1. Save the CSS to a file.
2. Define a library, which can contain the CSS file.
3. Attach the library to a render array in a hook.

We have already saved a CSS file called d8dev.css under the styles folder; now we will define a library. To define one or more (asset) libraries, add a *.libraries.yml file to your module folder. Our module is named d8dev, so the filename should be d8dev.libraries.yml. Each library in the file is an entry detailing CSS, like this:

    d8dev:
      version: 1.x
      css:
        theme:
          styles/d8dev.css: {}

Now, we define the hook_page_attachments() function to load the CSS file. Add the following code inside the d8dev.module file. Use this hook when you want to conditionally add attachments to a page:

    /**
     * Implements hook_page_attachments().
     */
    function d8dev_page_attachments(array &$attachments) {
      $attachments['#attached']['library'][] = 'd8dev/d8dev';
    }

Now, we will need to clear the cache for our d8dev site by going to Configuration, clicking on the Performance link, and then clicking on the Clear all caches button. Reload your Thai Basil Chicken recipe page, and the images should now be laid out in a grid.

What just happened? We added and configured a media-based field for our Recipe content type. We updated the d8dev module with custom CSS code to lay out the Recipe images in more of a grid format, and we also looked at how to attach a CSS file through a module.

Creating a custom image style

Before we configure the Colorbox feature, we are going to create a custom image style to use in the Colorbox content preview settings. Image styles for Drupal 8 are part of the core Image module. The core Image module provides three default image styles (thumbnail, medium, and large), as seen on the Image style configuration page. Now, we are going to add a fourth, custom image style: one that will resize our images to somewhere between the 100x75 thumbnail style and the 220x165 medium style. We will walk through the process of creating an image style through the Image style administrative page, and also the process of programmatically creating an image style.
Time for action – adding a custom image style through the image style administrative page First, we will use the Image style administrative page (admin/config/media/image-styles) to create a custom image style: Open the d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the Add style link. Next, enter small for the Image style name of your custom image style, and click on the Create new style button. Now, we will add the one and only effect for our custom image style by selecting Scale from the EFFECT options and then clicking on the Add button. On the Add Scale effect page, enter 160 for the width and 120 for the height. Leave the Allow Upscaling checkbox unchecked, and click on the Add effect button. Finally, just click on the Update style button on the Edit small style administrative page, and we are done. We now have a new custom small image style that we will be able to use to resize images for our site. What just happened? We learned how easy it is to add a custom image style with the administrative UI. Now, we are going to see how to add a custom image style by writing some code. The advantage of having code-based custom image styles is that it will allow us to utilize a source code repository, such as Git, to manage and deploy our custom image styles between different environments. For example, it would allow us to use Git to promote image styles from our development environment to a live production website. Otherwise, the manual configuration that we just did would have to be repeated for every environment. Time for action – creating a programmatic custom image style Now, we will see how we can add a custom image style with code: The first thing we need to do is delete the small image style that we just created. So, open your d8dev site in your favorite browser, click on the Configuration link in the Admin toolbar, and then click on the Image styles link under the Media section. Once the Image styles administrative page has loaded, click on the delete link for the small image style that we just added. Next, on the Optionally select a style before deleting small page, leave the default value for the Replacement style select list as No replacement, just delete, and click on the Delete button. In Drupal 8, image styles have been converted from an array to an object that extends ConfigEntity. All image styles provided by modules need to be defined as YAML configuration files in the config/install folder of each module. Suppose our module is located at modules/d8dev. Create a file called modules/d8dev/config/install/image.style.small.yml with the following content: uuid: b97a0bd7-4833-4d4a-ae05-5d4da0503041 langcode: en status: true dependencies: { } name: small label: small effects: c76016aa-3c8b-495a-9e31-4923f1e4be54: uuid: c76016aa-3c8b-495a-9e31-4923f1e4be54 id: image_scale weight: 1 data: width: 160 height: 120 upscale: false We need to use a UUID generator to assign unique IDs to image style effects. Do not copy/paste UUIDs from other pieces of code or from other image styles! The name of our custom style, small, is provided as both the name and the label. For each effect that we want to add to our image style, we specify the effect plugin as the id key, and then pass in its settings under the data key.
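As an aside, the same config entity can also be created in PHP, for example from a custom module's install hook. This is a sketch of our own and not a step in this recipe:

<?php
use Drupal\image\Entity\ImageStyle;

// Create the 'small' style and attach the same scale effect programmatically.
$style = ImageStyle::create(['name' => 'small', 'label' => 'small']);
$style->addImageEffect([
  'id' => 'image_scale',
  'weight' => 1,
  'data' => ['width' => 160, 'height' => 120, 'upscale' => FALSE],
]);
$style->save();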
Returning to the YAML definition: in the case of the image_scale effect that we are using here, we pass in the width, height, and upscale settings. Finally, the value for the weight key allows us to specify the order the effects should be processed in, and although it is not very useful when there is only one effect, it becomes important when there are multiple effects. Now, we will need to uninstall and install our d8dev module by going to the Extend page. On the next screen click on the Uninstall tab, check the d8dev checkbox and click on the Uninstall button. Now, click on the List tab, check d8dev, and click on the Install button. Then, go back to the Image styles administrative page and you will see our programmatically created small image style. What just happened? We created a custom image style with some custom code. We then configured our Recipe content type to use our custom image style for images added to the Recipe images field. Integrating the Colorbox and File entity modules The File entity module provides interfaces for managing files. For images, we will be able to edit Title text, Alt text, and Filenames easily. However, the images are taking up quite a bit of room. Let's create a pop-up lightbox gallery and show images in a popup. When someone clicks on an image, a lightbox will pop up and allow the user to cycle through larger versions of all associated images. Time for action – installing the Colorbox module Before we can display Recipe images in a Colorbox, we need to download and enable the module: Open the Mac OS X Terminal or Windows Command Prompt, and change to the d8dev directory. Next, use Drush to download and enable the current dev release of the Colorbox module (http://drupal.org/project/colorbox): $ drush dl colorbox-8.x-1.x-dev Project colorbox (8.x-1.x-dev) downloaded to /var/www/html/d8dev/modules/colorbox. [success] $ drush en colorbox The following extensions will be enabled: colorbox Do you really want to continue? (y/n): y colorbox was enabled successfully. [ok] The Colorbox module depends on the Colorbox jQuery plugin available at https://github.com/jackmoore/colorbox. The Colorbox module includes a Drush task that will download the required jQuery plugin into the /libraries directory: $ drush colorbox-plugin Colorbox plugin has been installed in libraries [success] Next, we will look into the Colorbox display formatter. Click on the Structure link in the Admin toolbar, then click on the Content types link, and finally click on the manage display link for your Recipe content type under the Operations dropdown. Next, click on the FORMAT select list for the Recipe images field, and you will see an option for Colorbox. Select Colorbox, and you will see the settings change. Then, click on the settings icon. Now, you will see the settings for Colorbox. Select small as the Content image style and small as the Content image style for first image in the dropdowns, and use the default settings for the other options. Click on the Update button and then on the Save button at the bottom. Reload our Thai Basil Chicken recipe page, and you should see something similar to the following (with the new image style, small): Now, click on any image and you will see the image loaded in the Colorbox popup. We have seen how Colorbox handles images, but Colorbox also supports videos. Another way to add some spice to our site is by adding videos, and there are several modules available that make videos work with Colorbox.
The Video Embed Field module creates a simple field type that allows you to embed videos from YouTube and Vimeo and show their thumbnail previews simply by entering the video's URL. So you can try this module to add some pizzazz to your site! What just happened? We installed the Colorbox module and enabled it for the Recipe images field on our custom Recipe content type. Now, we can easily add images to our d8dev content with the Colorbox pop-up feature. Working with Drupal issue queues Drupal has its own issue queues for working with a team of developers around the world. If you need help with a specific project, whether core, a module, or a theme, you should go to the issue queue, where the maintainers, users, and followers of the module/theme communicate. The issue page provides a filter option, where you can search for specific issues based on Project, Assigned, Submitted by, Followers, Status, Priority, Category, and so on. We can find issues at https://www.drupal.org/project/issues/colorbox. Here, replace colorbox with the specific module name. For more information, see https://www.drupal.org/issue-queue. In our case, we have one issue with the Colorbox module. Captions are working for the Automatic and Content title properties, but are not working for the Alt text and Title text properties. To check this issue, go to Structure | Content types and click on Manage display. On the next screen, click on the settings icon for the Recipe images field. Now select Title text or Alt text for the Caption option and click on the Update button. Finally, click on the Save button. Reload the Thai Basil Chicken recipe page, and click on any image. The image opens in a popup, but we cannot see a caption for it. Make sure you have the Title text and Alt text properties updated for the Recipe images field for the Thai Basil Chicken recipe. Time for action – creating an issue for the Colorbox module Now, before we go and try to figure out how to fix this functionality for the Colorbox module, let's create an issue: On https://www.drupal.org/project/issues/colorbox, click on the Create a new issue link. On the next screen we will see a form. We will fill in all the required fields: Title, Category as Bug report, Version as 8.x-1.x-dev, Component as Code, and the Issue summary field. Once I submitted my form, an issue was created at https://www.drupal.org/node/2645160. You should see an issue on Drupal (https://www.drupal.org/) like this: Next, the maintainers of the Colorbox module will look into this issue and reply accordingly. Actually, @frjo replied saying "I have never used that module but if someone who does sends in a patch I will take a look at it." He is a contributor to this module, so we will wait for some time and see whether someone can fix this issue by providing a patch or replying with useful comments. If someone does provide a patch, we will have to apply it to the Colorbox module. Instructions for applying patches are available at https://www.drupal.org/patch/apply. What just happened? We understood and created an issue in the Colorbox module's issue queue. We also looked at what the required fields are and how to fill them in when creating an issue in a Drupal module's issue queue. Summary In this article, we looked at a way to use our d8dev site with multimedia, creating image styles using some custom code, and learned some new ways of interacting with the Drupal developer community. We also worked with the Colorbox module to add images to our d8dev content with the Colorbox pop-up feature.
Lastly, we updated our custom module to load custom CSS files. Resources for Article: Further resources on this subject: Installing Drupal 8 [article] Drupal 7 Social Networking: Managing Users and Profiles [article] Drupal 8 and Configuration Management [article]

Using Native SDKs and Libraries in React Native

Emilio Rodriguez
07 Apr 2016
6 min read
When building an app in React Native we may end up needing to use third-party SDKs or libraries. Most of the time, these are only available in their native version, and, therefore, only accessible as Objective-C or Swift libraries in the case of iOS apps or as Java classes for Android apps. Only in a few cases are these libraries written in JavaScript, and even then, they may need pieces of functionality not available in React Native such as DOM access or Node.js-specific functionality. In my experience, this is one of the main reasons driving developers and IT decision makers in general to run away from React Native when considering a mobile development framework for their production apps. The creators of React Native were fully aware of this potential pitfall and left a door open in the framework to make sure integrating third-party software was not only possible but also quick, powerful, and doable by any non-iOS/Android native developer (i.e. most of the React Native developers). As a JavaScript developer, having to write Objective-C or Java code may not be very appealing in the beginning, but once you realize the whole process of integrating a native SDK can take as little as eight lines of code split in two files (one header file and one implementation file), the fear quickly fades away and the feeling of being able to perform even the most complex task in a mobile app starts to take over. Suddenly, the whole power of iOS and Android can be at any React developer's disposal. To better illustrate how to integrate a third-party SDK we will use one of the easiest payment providers to integrate: Paymill. If we take a look at their site, we notice that only iOS and Android SDKs are available for mobile payments. That should leave out every app written in React Native if it wasn't for the ability of this framework to communicate with native modules. For the sake of convenience I will focus this article on the iOS module. Step 1: Create two native files for our bridge. We need to create an Objective-C class, which will serve as a bridge between our React code and Paymill's native SDK. Normally, an Objective-C class is made out of two files, a .m and a .h, holding the module implementation and the header for this module respectively. To create the .h file we can right-click on our project's main folder in XCode > New File > Header file. In our case, I will call this file PaymillBridge.h. For React Native to communicate with our bridge, we need to make it implement the RCTBridgeModule protocol included in React Native. To do so, we only have to make sure our .h file looks like this: // PaymillBridge.h #import "RCTBridgeModule.h" @interface PaymillBridge : NSObject <RCTBridgeModule> @end We can follow a similar process to create the .m file: Right-click our project's main folder in XCode > New File > Objective-C file. The module implementation file should include the RCT_EXPORT_MODULE macro (also provided in any React Native project): // PaymillBridge.m @implementation PaymillBridge RCT_EXPORT_MODULE(); @end A macro is just a predefined piece of functionality that can be imported just by calling it. This will make sure React is aware of this module and will make it available for importing in your app. Now we need to expose the method we need in order to use Paymill's services from our JavaScript code. For this example we will be using Paymill's method to generate a token representing a credit card based on a public key and some credit card details: generateTokenWithPublicKey.
To do so, we need to use another macro provided by React Native: RCT_EXPORT_METHOD. // PaymillBridge.m @implementation PaymillBridge RCT_EXPORT_MODULE(); RCT_EXPORT_METHOD(generateTokenWithPublicKey: (NSString *)publicKey cardDetails:(NSDictionary *)cardDetails callback:(RCTResponseSenderBlock)callback) { //… Implement the call as described in the SDK’s documentation … callback(@[[NSNull null], token]); } @end In this step we will have to write some Objective-C, but most likely it will be a very simple piece of code using the examples stated in the SDK's documentation. One interesting point is how to send data from the native SDK to our React code. To do so you need to pass a callback, as I did in the last parameter of our exported method. Callbacks in React Native's bridges have to be defined as RCTResponseSenderBlock. Once we do this, we can call this callback passing an array of parameters, which will be sent as parameters for our JavaScript function in React Native (in our case we decided to pass two parameters back: an error set to null following the error-handling conventions of Node.js, and the token generated by Paymill natively). Step 2: Call our bridge from our React Native code. Once the module is properly set up, React Native makes it available in our app just by importing it from our JavaScript code: // PaymentComponent.js var Paymill = require('react-native').NativeModules.PaymillBridge; Paymill.generateTokenWithPublicKey( '56s4ad6a5s4sd5a6', cardDetails, function(error, token){ console.log(token); }); NativeModules holds the list of modules we created implementing the RCTBridgeModule. React Native makes them available by the name we chose for our Objective-C class (PaymillBridge in our example). Then, we can call any exported native method as a normal JavaScript method from our React Native component or library. Going Even Further That should do it for any basic SDK, but React Native gives developers a lot more control over how to communicate with native modules. For example, we may want to force the module to be run in the main thread. For that we just need to add an extra method to our native module implementation: // PaymillBridge.m @implementation PaymillBridge //... - (dispatch_queue_t)methodQueue { return dispatch_get_main_queue(); } Just by adding this method to our PaymillBridge.m, React Native will force all the functionality related to this module to be run on the main thread, which will be needed when running main-thread-only iOS APIs. And there is more: promises, exporting constants, sending events to JavaScript, etc. More complex functionality can be found in the official documentation of React Native; the topics covered in this article, however, should solve 80 percent of the cases when implementing most third-party SDKs. About the Author Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.


Tangled Web? Not At All!

Packt
22 Jun 2017
20 min read
In this article by Clif Flynt, the author of the book Linux Shell Scripting Cookbook - Third Edition, we look at a collection of shell-scripting recipes that talk to services on the Internet. This article is intended to help readers understand how to interact with the Web using shell scripts to automate tasks such as collecting and parsing data from web pages. This is discussed using POST and GET requests to web pages, and by writing clients for web services. (For more resources related to this topic, see here.) In this article, we will cover the following recipes: Downloading a web page as plain text Parsing data from a website Image crawler and downloader Web photo album generator Twitter command-line client Tracking changes to a website Posting to a web page and reading response Downloading a video from the Internet The Web has become the face of technology and the central access point for data processing. The primary interface to the web is via a browser that's designed for interactive use. That's great for searching and reading articles on the web, but you can also do a lot to automate your interactions with shell scripts. For instance, instead of checking a website daily to see if your favorite blogger has added a new blog, you can automate the check and be informed when there's new information. Similarly, Twitter is the current hot technology for getting up-to-the-minute information. But if I subscribe to my local newspaper's Twitter account because I want the local news, Twitter will send me all news, including high-school sports that I don't care about. With a shell script, I can grab the tweets and customize my filters to match my desires, not rely on their filters. Downloading a web page as plain text Web pages are simply text with HTML tags, JavaScript and CSS. The HTML tags define the content of a web page, which we can parse for specific content. Bash scripts can parse web pages. An HTML file can be viewed in a web browser to see it properly formatted. Parsing a text document is simpler than parsing HTML data because we aren't required to strip off the HTML tags. Lynx is a command-line web browser that can download a web page as plain text. Getting Ready Lynx is not installed in all distributions, but is available via the package manager. # yum install lynx or # apt-get install lynx How to do it... Let's download the web page view, in ASCII character representation, into a text file by using the -dump flag with the lynx command: $ lynx URL -dump > webpage_as_text.txt This command will list all the hyperlinks <a href="link"> separately under a heading References, as the footer of the text output. This lets us parse links separately with regular expressions. For example: $ lynx -dump http://google.com > plain_text_page.txt You can see the plain-text version by using the cat command: $ cat plain_text_page.txt Search [1]Images [2]Maps [3]Play [4]YouTube [5]News [6]Gmail [7]Drive [8]More » [9]Web History | [10]Settings | [11]Sign in [12]St. Patrick's Day 2017 _______________________________________________________ Google Search I'm Feeling Lucky [13]Advanced search [14]Language tools [15]Advertising Programs [16]Business Solutions [17]+Google [18]About Google © 2017 - [19]Privacy - [20]Terms References Parsing data from a website The lynx, sed, and awk commands can be used to mine data from websites. How to do it...
Let's go through the commands used to parse details of actresses from the website: $ lynx -dump -nolist http://www.johntorres.net/BoxOfficefemaleList.html | grep -o "Rank-.*" | sed -e 's/ *Rank-\([0-9]*\) *\(.*\)/\1\t\2/' | sort -nk 1 > actresslist.txt The output is: # Only 3 entries shown. All others omitted due to space limits 1 Keira Knightley 2 Natalie Portman 3 Monica Bellucci How it works... Lynx is a command-line web browser—it can dump a text version of a website as we would see in a web browser, instead of returning the raw HTML as wget or cURL do. This saves the step of removing HTML tags. The -nolist option shows the links without numbers. Parsing and formatting the lines that contain Rank is done with sed: sed -e 's/ *Rank-\([0-9]*\) *\(.*\)/\1\t\2/' These lines are then sorted according to the ranks. See also The Downloading a web page as plain text recipe in this article explains the lynx command. Image crawler and downloader Image crawlers download all the images that appear in a web page. Instead of going through the HTML page by hand to pick the images, we can use a script to identify the images and download them automatically. How to do it... This Bash script will identify and download the images from a web page: #!/bin/bash #Desc: Images downloader #Filename: img_downloader.sh if [ $# -ne 3 ]; then echo "Usage: $0 URL -d DIRECTORY" exit -1 fi while [ -n "$1" ] do case $1 in -d) shift; directory=$1; shift ;; *) url=$1; shift;; esac done mkdir -p $directory; baseurl=$(echo $url | egrep -o "https?://[a-z.-]+") echo Downloading $url curl -s $url | egrep -o "<img src=[^>]*>" | sed 's/<img src="\([^"]*\)".*/\1/g' | sed "s,^/,$baseurl/," > /tmp/$$.list cd $directory; while read filename; do echo Downloading $filename curl -s -O "$filename" done < /tmp/$$.list An example usage is: $ ./img_downloader.sh http://www.flickr.com/search/?q=linux -d images How it works... The image downloader script reads an HTML page, strips out all tags except <img>, parses src="URL" from the <img> tags, and downloads them to the specified directory. This script accepts a web page URL and the destination directory as command-line arguments. The [ $# -ne 3 ] statement checks whether the total number of arguments to the script is three, otherwise it exits and returns a usage example. Otherwise, this code parses the URL and destination directory: while [ -n "$1" ] do case $1 in -d) shift; directory=$1; shift ;; *) url=${url:-$1}; shift;; esac done The while loop runs until all the arguments are processed. The shift command shifts arguments to the left so that $1 will take the next argument's value; that is, $2, and so on. Hence, we can evaluate all arguments through $1 itself. The case statement checks the first argument ($1). If that matches -d, the next argument must be a directory name, so the arguments are shifted and the directory name is saved. If the argument is any other string it is a URL. The advantage of parsing arguments in this way is that we can place the -d argument anywhere in the command line: $ ./img_downloader.sh -d DIR URL Or: $ ./img_downloader.sh URL -d DIR The egrep -o "<img src=[^>]*>" code will print only the matching strings, which are the <img> tags including their attributes. The phrase [^>]* matches all the characters except the closing >, that is, <img src="image.jpg">. sed 's/<img src="\([^"]*\)".*/\1/g' extracts the URL from the string src="url". There are two types of image source paths—relative and absolute. Absolute paths contain full URLs that start with http:// or https://.
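To see the tag-extraction pipeline in isolation, here is a quick test you can run in a shell; the sample tag is invented for illustration:

$ echo '<img src="/images/cat.jpg" alt="cat">' | egrep -o '<img src=[^>]*>' | sed 's/<img src="\([^"]*\)".*/\1/g'
/images/cat.jpg

Note that the extracted path here is relative, which is exactly the case the next step of the script has to handle.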
Relative URLs, by contrast, start with / or the image name itself. An example of an absolute URL is http://example.com/image.jpg. An example of a relative URL is /image.jpg. For relative URLs, the starting / should be replaced with the base URL to transform it to http://example.com/image.jpg. The script initializes the baseurl by extracting it from the initial URL with the command: baseurl=$(echo $url | egrep -o "https?://[a-z.-]+") The output of the previously described sed command is piped into another sed command to replace a leading / with the baseurl, and the results are saved in a file named for the script's PID: /tmp/$$.list. sed "s,^/,$baseurl/," > /tmp/$$.list The final while loop iterates through each line of the list and uses curl to download the images. The --silent (-s) argument is used with curl to avoid extra progress messages from being printed on the screen. Web photo album generator Web developers frequently create photo albums of full-sized and thumbnail images. When a thumbnail is clicked, a large version of the picture is displayed. This requires resizing and placing many images. These actions can be automated with a simple bash script. The script creates thumbnails, places them in the proper directories, and generates the code fragment for the <img> tags automatically. Getting ready This script uses a for loop to iterate over every image in the current directory. The usual Bash utilities such as cat and convert (from the ImageMagick package) are used. These will generate an HTML album, using all the images, in index.html. How to do it... This Bash script will generate an HTML album page: #!/bin/bash #Filename: generate_album.sh #Description: Create a photo album using images in current directory echo "Creating album.." mkdir -p thumbs cat <<EOF1 > index.html <html> <head> <style> body { width:470px; margin:auto; border: 1px dashed grey; padding:10px; } img { margin:5px; border: 1px solid black; } </style> </head> <body> <center><h1> #Album title </h1></center> <p> EOF1 for img in *.jpg; do convert "$img" -resize "100x" "thumbs/$img" echo "<a href=\"$img\">" >> index.html echo "<img src=\"thumbs/$img\" title=\"$img\" /></a>" >> index.html done cat <<EOF2 >> index.html </p> </body> </html> EOF2 echo Album generated to index.html Run the script as follows: $ ./generate_album.sh Creating album.. Album generated to index.html How it works... The initial part of the script is used to write the header part of the HTML page. The following script redirects all the contents up to EOF1 to index.html: cat <<EOF1 > index.html contents... EOF1 The header includes the HTML and CSS styling. for img in *.jpg; do iterates over the file names and evaluates the body of the loop. convert "$img" -resize "100x" "thumbs/$img" creates images of 100 px width as thumbnails.
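As a quick aside on the convert geometry argument (this is standard ImageMagick behavior, though it is worth checking against your installed version):

$ convert photo.jpg -resize "100x" out.jpg      # width 100, height scaled to keep the aspect ratio
$ convert photo.jpg -resize "100x100" out.jpg   # fit within a 100x100 box, aspect ratio kept
$ convert photo.jpg -resize "100x100!" out.jpg  # force exactly 100x100, ignoring the aspect ratio

The recipe uses "100x" so every thumbnail has the same width while keeping its proportions.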
The following statements generate the required <a> and <img> tags and append them to index.html: echo "<a href=\"$img\">" >> index.html echo "<img src=\"thumbs/$img\" title=\"$img\" /></a>" >> index.html Finally, the footer HTML tags are appended with cat as done in the first part of the script. Twitter command-line client Twitter is the hottest micro-blogging platform, as well as the latest buzz of online social media. We can use the Twitter API to read tweets on our timeline from the command line! Let's see how to do it. Getting ready Recently, Twitter stopped allowing people to log in by using plain HTTP Authentication, so we must use OAuth to authenticate ourselves. Perform the following steps: Download the bash-oauth library from https://github.com/livibetter/bash-oauth/archive/master.zip, and unzip it to any directory. Go to that directory and then, inside the subdirectory bash-oauth-master, run make install-all as root. Go to https://apps.twitter.com/ and register a new app. This will make it possible to use OAuth. After registering the new app, go to your app's settings and change Access type to Read and Write. Now, go to the Details section of the app and note two things—Consumer Key and Consumer Secret, so that you can substitute these in the script we are going to write. Great, now let's write the script that uses this. How to do it... This Bash script uses the OAuth library to read tweets or send your own updates. #!/bin/bash #Filename: twitter.sh #Description: Basic twitter client oauth_consumer_key=YOUR_CONSUMER_KEY oauth_consumer_secret=YOUR_CONSUMER_SECRET config_file=~/.$oauth_consumer_key-$oauth_consumer_secret-rc if [[ "$1" != "read" ]] && [[ "$1" != "tweet" ]]; then echo -e "Usage: $0 tweet status_message\n OR\n $0 read\n" exit -1; fi #source /usr/local/bin/TwitterOAuth.sh source bash-oauth-master/TwitterOAuth.sh TO_init if [ ! -e $config_file ]; then TO_access_token_helper if (( $? == 0 )); then echo oauth_token=${TO_ret[0]} > $config_file echo oauth_token_secret=${TO_ret[1]} >> $config_file fi fi source $config_file if [[ "$1" = "read" ]]; then TO_statuses_home_timeline '' 'YOUR_TWEET_NAME' '10' echo $TO_ret | sed 's/,"/\n/g' | sed 's/":/~/' | awk -F~ '{if ($1 == "text") {txt=$2;} else if ($1 == "screen_name") printf("From: %s\n Tweet: %s\n\n", $2, txt);}' | tr '"' ' ' elif [[ "$1" = "tweet" ]]; then shift TO_statuses_update '' "$@" echo 'Tweeted :)' fi Run the script as follows: $ ./twitter.sh read Please go to the following link to get the PIN: https://api.twitter.com/oauth/authorize?oauth_token=LONG_TOKEN_STRING PIN: PIN_FROM_WEBSITE Now you can create, edit and present Slides offline. - by A Googler $ ./twitter.sh tweet "I am reading Packt Shell Scripting Cookbook" Tweeted :) $ ./twitter.sh read | head -2 From: Clif Flynt Tweet: I am reading Packt Shell Scripting Cookbook How it works... First of all, we use the source command to include the TwitterOAuth.sh library, so we can use its functions to access Twitter. The TO_init function initializes the library. Every app needs to get an OAuth token and token secret the first time it is used. If these are not present, we use the library function TO_access_token_helper to acquire them. Once we have the tokens, we save them to a config file so we can simply source it the next time the script is run. The library function TO_statuses_home_timeline fetches the tweets from Twitter.
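If the jq JSON processor happens to be installed (it is not used in this recipe, so treat this as an optional alternative), the same extraction can be written more robustly than with sed and awk; the field names assume the standard timeline schema, where the sender lives in a nested user object:

echo "$TO_ret" | jq -r '.[] | "From: \(.user.screen_name)\n Tweet: \(.text)\n"'

The recipe itself sticks to sed and awk because they are available on any system.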
The fetched data is returned as a single long string in JSON format, which starts like this: [{"created_at":"Thu Nov 10 14:45:20 +0000 2016","id":7...9,"id_str":"7...9","text":"Dining... Each tweet starts with the created_at tag and includes a text and a screen_name tag. The script will extract the text and screen name data and display only those fields. The script assigns the long string to the variable TO_ret. The JSON format uses quoted strings for the key and may or may not quote the value. The key/value pairs are separated by commas, and the key and value are separated by a colon :. The first sed replaces each ," character set with a newline, making each key/value pair a separate line. These lines are piped to another sed command to replace each occurrence of ": with a tilde ~, which creates a line like screen_name~"Clif_Flynt" The final awk script reads each line. The -F~ option splits the line into fields at the tilde, so $1 is the key and $2 is the value. The if command checks for text or screen_name. The text is first in the tweet, but it's easier to read if we report the sender first, so the script saves a text return until it sees a screen_name, then prints the current value of $2 and the saved value of the text. The TO_statuses_update library function generates a tweet. The empty first parameter defines our message as being in the default format, and the message is a part of the second parameter. Tracking changes to a website Tracking website changes is useful to both web developers and users. Checking a website manually is impractical, but a change-tracking script can be run at regular intervals. When a change occurs, it generates a notification. Getting ready Tracking changes in terms of Bash scripting means fetching websites at different times and taking the difference by using the diff command. We can use curl and diff to do this. How to do it... This bash script combines different commands, to track changes in a webpage: #!/bin/bash #Filename: track_changes.sh #Desc: Script to track changes to webpage if [ $# -ne 1 ]; then echo -e "Usage: $0 URL\n" exit 1; fi first_time=0 # Not first time if [ ! -e "last.html" ]; then first_time=1 # Set it is first time run fi curl --silent $1 -o recent.html if [ $first_time -ne 1 ]; then changes=$(diff -u last.html recent.html) if [ -n "$changes" ]; then echo -e "Changes:\n" echo "$changes" else echo -e "\nWebsite has no changes" fi else echo "[First run] Archiving.." fi cp recent.html last.html Let's look at the output of the track_changes.sh script on a website you control. First we'll see the output when a web page is unchanged, and then after making changes. Note that you should change MyWebSite.org to your website name. First, run the following command: $ ./track_changes.sh http://www.MyWebSite.org [First run] Archiving.. Second, run the command again. $ ./track_changes.sh http://www.MyWebSite.org Website has no changes Third, run the following command after making changes to the web page: $ ./track_changes.sh http://www.MyWebSite.org Changes: --- last.html 2010-08-01 07:29:15.000000000 +0200 +++ recent.html 2010-08-01 07:29:43.000000000 +0200 @@ -1,3 +1,4 @@ +added line :) data How it works... The script checks whether the script is running for the first time by using [ ! -e "last.html" ];. If last.html doesn't exist, it means that it is the first time and the webpage must be downloaded and saved as last.html. If it is not the first time, it downloads the new copy recent.html and checks the difference with the diff utility.
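Because the script only reports changes when it runs, it pairs naturally with a scheduler. A typical cron entry would look like the following; the path is hypothetical and scheduling is not part of the recipe itself:

0 * * * * /home/clif/bin/track_changes.sh http://www.MyWebSite.org

cron mails any script output to the crontab's owner, so the diff arrives by email for each hour in which a change is found. Returning to how the script works: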
Any changes will be displayed as diff output. Finally, recent.html is copied to last.html. Note that changing the website you're checking will generate a huge diff file the first time you examine it. If you need to track multiple pages, you can create a folder for each website you intend to watch. Posting to a web page and reading the response POST and GET are two types of requests in HTTP to send information to, or retrieve information from, a website. In a GET request, we send parameters (name-value pairs) through the webpage URL itself. The POST command places the key/value pairs in the message body instead of the URL. POST is commonly used when submitting long forms or to conceal the information submitted from a casual glance. Getting ready For this recipe, we will use the sample guestbook website included in the tclhttpd package. You can download tclhttpd from http://sourceforge.net/projects/tclhttpd and then run it on your local system to create a local webserver. The guestbook page requests a name and URL which it adds to a guestbook to show who has visited a site when the user clicks the Add me to your guestbook button. This process can be automated with a single curl (or wget) command. How to do it... Download the tclhttpd package and cd to the bin folder. Start the tclhttpd daemon with this command: tclsh httpd.tcl The format to POST and read the HTML response from a generic website resembles this: $ curl URL -d "postvar=postdata2&postvar2=postdata2" Consider the following example: $ curl http://127.0.0.1:8015/guestbook/newguest.html -d "name=Clif&url=www.noucorp.com&http=www.noucorp.com" curl prints a response page like this: <HTML> <Head> <title>Guestbook Registration Confirmed</title> </Head> <Body BGCOLOR=white TEXT=black> <a href="www.noucorp.com">www.noucorp.com</a> <DL> <DT>Name <DD>Clif <DT>URL <DD> </DL> www.noucorp.com </Body> -d is the argument used for posting. The string argument for -d is similar to the GET request semantics. var=value pairs are to be delimited by &. You can POST the data using wget by using --post-data "string". For example: $ wget http://127.0.0.1:8015/guestbook/newguest.cgi --post-data "name=Clif&url=www.noucorp.com&http=www.noucorp.com" -O output.html Use the same format as cURL for name-value pairs. The text in output.html is the same as that returned by the cURL command. The string to the post arguments (for example, to -d or --post-data) should always be given in quotes. If quotes are not used, & is interpreted by the shell to indicate that this should be a background process. How it works... If you look at the website source (use the View Source option from the web browser), you will see an HTML form defined, similar to the following code: <form action="newguest.cgi" method="post"> <ul> <li> Name: <input type="text" name="name" size="40"> <li> Url: <input type="text" name="url" size="40"> <input type="submit"> </ul> </form> Here, newguest.cgi is the target URL. When the user enters the details and clicks on the Submit button, the name and url inputs are sent to newguest.cgi as a POST request, and the response page is returned to the browser. Downloading a video from the internet There are many reasons for downloading a video. If you are on a metered service, you might want to download videos during off-hours when the rates are cheaper. You might want to watch videos where the bandwidth doesn't support streaming, or you might just want to make certain that you always have that video of cute cats to show your friends.
Getting ready One program for downloading videos is youtube-dl. This is not included in most distributions and the repositories may not be up to date, so it's best to go to the youtube-dl main site: http://yt-dl.org You'll find links and information on that page for downloading and installing youtube-dl. How to do it... Using youtube-dl is easy. Open your browser and find a video you like. Then copy/paste that URL to the youtube-dl command line: $ youtube-dl https://www.youtube.com/watch?v=AJrsl3fHQ74 While youtube-dl is downloading the file it will generate a status line on your terminal. How it works... The youtube-dl program works by sending a GET message to the server, just as a browser would do. It masquerades as a browser so that YouTube or other video providers will download a video as if the device were streaming. The --list-formats (-F) option will list the formats a video is available in, and the --format (-f) option will specify which format to download. This is useful if you want to download a higher-resolution video than your internet connection can reliably stream. Summary In this article we learned how to download and parse website data, send data to forms, and automate website-usage tasks and similar activities. We can automate many activities that we perform interactively through a browser with a few lines of scripting. Resources for Article: Further resources on this subject: Linux Shell Scripting – various recipes to help you [article] Linux Shell Script: Tips and Tricks [article] Linux Shell Script: Monitoring Activities [article]

React Conf 2019: Concurrent Mode preview out, CSS-in-JS, React docs in 40 languages, and more

Bhagyashree R
29 Oct 2019
9 min read
React Conf 2019 wrapped up last week. It was kick-started with a keynote by Tom Occhino and Yuzhi Zheng from the React team, who both talked about Concurrent Mode and Suspense. They were followed by Frank Yan, also from the React team, who explained how they are building the "new Facebook" with React and Relay. One of the major highlights of his talk was the CSS-in-JS library that will be open-sourced once ready. Sophie Alpert, former manager of the React team, gave a talk on building a custom React renderer. To demonstrate that, she implemented a small version of ReactDOM in just 30 minutes. There were many other lightning talks and presentations on translated React, building inclusive apps by improving their accessibility, and much more. React Conf 2019 was a two-day event that took place from Oct 24-25 at Lake Las Vegas, Nevada. This conference brought together front-end and full-stack developers to "share knowledge, skills, to network, and just to have fun." React's long-term goal: "Making it easier to build great user experiences" Tom Occhino, Engineering Director of the React group, took to the stage to talk about the goals for React and the community. He says that React's long-term goal is to make it easier for developers to build great user experiences. "Easier to build" means improving the developer experience. The three factors that contribute to a great developer experience are a low barrier to entry, developer productivity, and the ability to scale. React is constantly working towards improving the developer experience by introducing new features. Two such features are Concurrent Mode and Suspense. Concurrent Mode Concurrent Mode is a set of features to make React apps more responsive by rendering component trees without blocking the main thread. It gives React the ability to interrupt big blocks of low-priority work in order to focus on higher-priority work like responding to user input. This will enable React to work on several state updates concurrently and remove jarring, too-frequent DOM updates. The team also released the first early community preview of Concurrent Mode last week. https://twitter.com/reactjs/status/1187411505001746432 Suspense Suspense was introduced as an improvement to the developer experience when dealing with asynchronous data fetching within React apps. It suspends your component rendering and shows a fallback until some condition is met. Occhino describes Suspense as a "React system for orchestrating asynchronous loading of code, data, and resources." He adds, "Suspense lets the component wait for something before they render. This helps consolidate nested dependencies and nested spinners and things behind the single simple loading experience." Towards the end of his keynote, Occhino also touched upon how the team plans to make the React community more inclusive and diverse. He said, "Over the past 10 years, I have learned that diverse teams build better products and make better decisions. Everyone working on React shares my conviction about this." He adds, "Up until recently we have taken a pretty passive stance to building and shaping the React community. We have a responsibility to you all and I feel like we let many of you down. We are committed to doing better!" As a first step, the team has now replaced the React code of conduct with the contributor covenant.
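To make the Suspense idea concrete, here is a minimal sketch in the style of the experimental API; fetchProfileData and the resource.user.read() call stand in for a Suspense-integrated data source and are our own placeholders, not part of React's stable API:

import React, { Suspense } from 'react';

const resource = fetchProfileData(); // hypothetical Suspense-aware data fetch

function ProfilePage() {
  return (
    // While ProfileDetails waits for its data, React renders the fallback.
    <Suspense fallback={<h1>Loading profile...</h1>}>
      <ProfileDetails />
    </Suspense>
  );
}

function ProfileDetails() {
  const user = resource.user.read(); // suspends until the data resolves
  return <h1>{user.name}</h1>;
}

When read() suspends, the nearest Suspense boundary shows its fallback, which is the "single simple loading experience" Occhino described.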
Read also: #Reactgate forces React leaders to confront community’s toxic culture head on What’s new the React team is working on Yuzhi Zheng, Engineering Manager for the React and Relay teams at Facebook, gave an insight into what projects the core teams are working on. She started off by giving a recap of Hooks, which was one of the most-awaited React features announced at React Conf 2018. "Hooks are designed for the future of React in the way that it naturally encourages code that is compatible with all the plumbing features such as accessibility, server-side rendering, suspense, and concurrent mode. Since its release, the reception of Hooks has been really positive," she shared. If you want to understand the fundamentals of React Hooks and use them for implementing responsive design and more, check out our book, Learning Hooks. Another long-term project that the team is focusing on is providing developers a way to easily build accessibility features in React. Currently, developers can create accessible websites using standard HTML techniques, but that approach does have some limitations. To help build accessibility directly into React, the team is working on two areas: managing focus and input interfaces. For managing focus, the team plans to add primitives that provide "a more structured way of making sure component flows well" for cases like React portals and Suspense fallback, and that are accessible by default. For input interfaces, they plan to add support for rich gestures that work across platforms and are accessible by default. The team is also focusing on improving the initial render times. Server-side rendering helps in reducing the amount of CPU usage on the client for the initial render to some extent, but it does have some limitations. To address these limitations, the team plans to add built-in support for server-side rendering. This will work with lazily loaded components to reduce the bytes needed on the client, support streaming down markup in chunks, and be fully compatible with Concurrent Mode and Suspense. The CSS-in-JS library Frank Yan, Engineering Manager in the React group at Facebook, talked about how the team has rebuilt and redesigned the Facebook website and the key lessons they have learned along the way. The new Facebook website is a single-page app with React organizing the HTML and JavaScript into components from the top down and with GraphQL and Relay colocating the queries declaratively in the components. The only key part that the team did not reorganize was CSS. They instead created a new library, called CSS-in-JS, to embed styles in components. It aims to make the styles easier to read, understand, and update. Its syntax is inspired by React Native and other frameworks. Since it enables you to embed styles inside JavaScript files, you can also use JavaScript tooling like type checkers and linters. React docs translated into 40 languages Nat Alison is a freelance front-end developer who helped the React team coordinate translations of reactjs.org into 40 languages. She shared why and how they were able to translate the docs for this massively popular library. She shared, "More than 80% of the world's population does not know English. If we restrict React, one of the most popular JavaScript frameworks, we restrict who gets to create and shape the web." Providing the officially translated docs will make it easier for several non-English speaking React developers to understand and use it in their projects.
This will also prevent users from creating unofficial translations, which can be incorrect, outdated, or difficult to find. Initially, they thought of integrating a SaaS platform that allows users to submit translations, but this was not a feasible solution. Then they decided to check out the solution used by Vue, which is maintaining separate repositories for each language, forked from the original repo. Similar to Vue, the team also created a bot that periodically checks for changes in the English repo and submits pull requests whenever there is a change. If you want to contribute to translating React docs in your language, check out the IsReactTranslatedYet website. Developing accessible apps Brittany Feenstra, a developer at Formidable, took to the stage to talk about why accessibility is important and how you can approach it. Accessibility, or a11y, is making your apps and websites usable for everyone, including people with any kind of disabilities. There are four types of disabilities that developers need to design for: visual, auditory, motor, and cognitive. Feenstra mentioned that though we are all aware of the importance of accessibility, we often "end up saving it for later" because of tight deadlines. Feenstra, however, compares accessibility with marathons. It is not something that you can achieve in just one sprint, she says. You should instead look at it as a training program that you will follow when participating in a marathon. You need to take a step-by-step approach to make an accessible app. If we do that, "we will be way less fatigued and well-equipped," she adds. Sharing some starting tips, she said that we need to focus on three areas. First, learn to run, or in the accessibility context, understand the HTML semantics, then explore reference patterns, navigation, and focus traps. Second, improve nutritional habits, or in the accessibility context, use environments and tools that help us write sturdier code. She recommends using axe, an accessibility checker for WCAG 2 and Section 508. Also, check out the tools that simulate how people with visual impairments will see your UI, such as NoCoffee and I want to see like the colour blind. She emphasizes linting and testing your code for accessibility with the help of eslint-plugin-jsx-a11y and accessibility assessment automation tools. Third, cross-train and stretch, or in the accessibility context, learn to "interact with the UI in ways that let us understand the update we are making to our code." "React is Fiction" This was a talk by Jenn Creighton, a Frontend Architect at The Wing, who comes from a creative writing background. "Writing React to me felt like coming home. It was really familiar in a way that I could not pinpoint," she said. Then she realized that writing React reminded her of fiction, and merging the two disciplines helped her write better components. Creighton drew similarities between developing in React and creative writing. One of the key principles of creative writing is "Show, don't tell", which advises authors to describe a situation instead of just telling it. This will help engage the readers as they will be able to picture the situation in their heads. According to Creighton, React has a similar principle: "Declarative, not imperative." React is declarative, which allows developers to describe what the final state should be, instead of listing all the steps to reach that state. There were many other exciting talks about progressive web animations, building React-Select, and more.
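As a closing illustration of the "declarative, not imperative" principle Creighton highlighted, compare the two styles side by side; this is a generic sketch of ours, not code from her talk:

// Imperative: spell out every step of building the UI
const button = document.createElement('button');
button.className = 'like-button';
button.textContent = 'Like';
document.body.appendChild(button);

// Declarative (React): describe the final state and let React get there
function LikeButton() {
  return <button className="like-button">Like</button>;
}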
Check out the live streams to watch the full talks: Day1: https://www.youtube.com/watch?v=RCiccdQObpo Day2: https://www.youtube.com/watch?v=JDDxR1a15Yo&t=2376s Ionic React released; Ionic Framework pivots from Angular to a native React version ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more React Native 0.61 introduces Fast Refresh for reliable hot reloading


Anatomy of a WordPress Plugin

Packt
25 Mar 2011
7 min read
WordPress 3 Plugin Development Essentials Create your own powerful, interactive plugins to extend and add features to your WordPress site WordPress is a popular content management system (CMS), most renowned for its use as a blogging / publishing application. According to the usage statistics tracker BuiltWith (http://builtWith.com), WordPress is considered to be the most popular blogging software on the planet—not bad for something that has only been around officially since 2003. Before we develop any substantial plugins of our own, let's take a few moments to look at what other people have done, so we get an idea of what the final product might look like. By this point, you should have a fresh version of WordPress installed and running somewhere for you to play with. It is important that your installation of WordPress is one with which you can tinker. In this article by Brian Bondari and Everett Griffiths, authors of WordPress 3 Plugin Development Essentials, we will purposely break a few things to help see how they work, so please don't try anything in this article on a live production site. Deconstructing an existing plugin: "Hello Dolly" WordPress ships with a simple plugin named "Hello Dolly". Its name is a whimsical take on the programmer's obligatory "Hello, World!", and it is trotted out only for pedantic explanations like the one that follows (unless, of course, you really do want random lyrics by Jerry Herman to grace your administration screens). Activating the plugin Let's activate this plugin so we can have a look at what it does: Browse to your WordPress Dashboard at http://yoursite.com/wp-admin/. Navigate to the Plugins section. Under the Hello Dolly title, click on the Activate link. You should now see a random lyric appear in the top-right portion of the Dashboard. Refresh the page a few times to get the full effect. Examining the hello.php file Now that we've tried out the "Hello Dolly" plugin, let's have a closer look. In your favorite text editor, open up the /wp-content/plugins/hello.php file. Can you identify the following integral parts? The Information Header which describes details about the plugin (author and description). This is contained in a large PHP /* comment */. User-defined functions, such as the hello_dolly() function. The add_action() and/or add_filter() functions, which hook a WordPress event to a user-defined function. It looks pretty simple, right? That's all you need for a plugin: An information header Some user-defined functions add_action() and/or add_filter() functions
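Putting those three pieces together, the smallest possible plugin could look like the following sketch; the file name and the admin notice are our own invention for illustration, not part of Hello Dolly:

<?php
/*
Plugin Name: Minimal Example
Description: The smallest plugin that shows all three required parts.
Version: 0.1
*/

// A user-defined function that prints a message in the admin area.
function minimal_example_notice() {
    echo '<p>Hello from a minimal plugin!</p>';
}

// Hook the function to a WordPress event.
add_action( 'admin_notices', 'minimal_example_notice' );

Save it as /wp-content/plugins/minimal-example.php and it will appear in the Plugins menu, ready to activate.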
Now that we've identified the critical component parts, let's examine them in more detail. Information header Don't just skim this section thinking it's a waste of breath on the self-explanatory header fields. Unlike a normal PHP file in which the comments are purely optional, in WordPress plugin and theme files, the Information Header is required! It is this block of text that causes a file to show up on WordPress' radar so that you can activate it or deactivate it. If your plugin is missing a valid information header, you cannot use it! Exercise—breaking the header To reinforce that the information header is an integral part of a plugin, try the following exercise: In your WordPress Dashboard, ensure that the "Hello Dolly" plugin has been activated. If applicable, use your preferred (s)FTP program to connect to your WordPress installation. Using your text editor, temporarily delete the information header from /wp-content/plugins/hello.php and save the file (you can save the header elsewhere for now). Refresh the Plugins page in your browser. You should get a warning from WordPress stating that the plugin does not have a valid header. After you've seen the tragic consequences, put the header information back into the hello.php file. This should make it abundantly clear to you that the information header is absolutely vital for every WordPress plugin. If your plugin has multiple files, the header should be inside the primary file—in this article we use index.php as our primary file, but many plugins use a file named after the plugin name as their primary file. Location, name, and format The header itself is similar in form and function to other content management systems, such as Drupal's module.info files or Joomla's XML module configurations—it offers a way to store additional information about a plugin in a standardized format. The values can be extended, but the most common header values are listed below: Plugin Name: The displayed name of the plugin Plugin URI: Destination of the "Visit plugin site" link Description: Main block of text describing the plugin Author: Listed below the plugin name Author URI: Together with "Author", this creates a link to the author's site Version: Use this to track your changes over time For more information about header blocks, see the WordPress codex at: http://codex.wordpress.org/File_Header. In order for a PHP file to show up in WordPress' Plugins menu: The file must have a .php extension. The file must contain the information header somewhere in it (preferably at the beginning). The file must be either in the /wp-content/plugins directory, or in a subdirectory of the plugins directory. It cannot be more deeply nested. Understanding the Includes When you activate a plugin, the name of the file containing the information header is stored in the WordPress database. Each time a page is requested, WordPress goes through a laundry list of PHP files it needs to load, so activating a plugin ensures that your own files are on that list. To help illustrate this concept, let's break WordPress again. Exercise – parse errors Try the following exercise: Ensure that the "Hello Dolly" plugin is active. Open the /wp-content/plugins/hello.php file in your text editor. Immediately before the line that contains function hello_dolly_get_lyric, type in some gibberish text, such as "asdfasdf", and save the file. Reload the plugins page in your browser. This should generate a parse error, something like: Parse error: syntax error, unexpected T_FUNCTION in /path/to/wordpress/html/wp-content/plugins/hello.php on line 16 Yikes! Your site is now broken. Why did this happen? We introduced errors into the plugin's main file (hello.php), so including it caused PHP and WordPress to choke. Delete the gibberish line from the hello.php file and save to return the plugin back to normal. The parse error only occurs if there is an error in an active plugin. Deactivated plugins are not included by WordPress and therefore their code is not parsed. You can try the same exercise after deactivating the plugin and you'll notice that WordPress does not raise any errors. Bonus for the curious In case you're wondering exactly where and how WordPress stores the information about activated plugins, have a look in the database.
Using your MySQL client, you can browse the wp_options table or execute the following query: SELECT option_value FROM wp_options WHERE option_name='active_plugins'; The active plugins are stored as a serialized PHP hash, referencing the file containing the header. The following is an example of what the serialized hash might contain if you had activated a plugin named "Bad Example". You can use PHP's unserialize() function to parse the contents of this string into a PHP variable as in the following script: <?php $active_plugin_str = 'a:1:{i:0;s:27:"bad-example/bad-example.php";}'; print_r( unserialize($active_plugin_str) ); ?> And here's its output: Array ( [0] => bad-example/bad-example.php )
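If you would rather let WordPress do the lookup and unserializing for you, core ships an is_plugin_active() helper that wraps the same option; a small sketch, assuming it runs inside a loaded WordPress environment:

<?php
// is_plugin_active() lives in an admin include, so load it when outside wp-admin.
include_once ABSPATH . 'wp-admin/includes/plugin.php';

// Hello Dolly sits directly in wp-content/plugins, so its basename is 'hello.php'.
if ( is_plugin_active( 'hello.php' ) ) {
    echo 'Hello Dolly is active.';
}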