How-To Tutorials - Web Development


Nginx "expires" directive – Emitting Caching Headers

Packt
13 Apr 2016
7 min read
In this article, Alex Kapranoff, the author of the book Nginx Troubleshooting, explains how all browsers (and even many non-browser HTTP clients) support client-side caching. It is a part of the HTTP standard, albeit one of its most complex parts to understand. Web servers obviously do not control client-side caching to the full extent, but they may issue recommendations about what to cache and how, in the form of special HTTP response headers. This topic is thoroughly discussed in many great articles and guides, so we will cover it only briefly, with a lean towards the problems you may face and how to troubleshoot them.

Although browsers have supported caching on their side for at least 20 years, configuring cache headers has always been a little confusing, mostly because there are two sets of headers designed for the same purpose but with different scopes and totally different formats. There is the Expires: header, which was designed as a quick and dirty solution, and the (relatively) new, almost omnipotent Cache-Control: header, which tries to support all the different ways an HTTP cache could work.

This is an example of a modern HTTP request-response pair containing the caching headers. First, the request headers sent from the browser (here Firefox 41, but it does not matter):

User-Agent: "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0"
Accept: "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
Accept-Encoding: "gzip, deflate"
Connection: "keep-alive"
Cache-Control: "max-age=0"

Then, the response headers:

Cache-Control: "max-age=1800"
Content-Encoding: "gzip"
Content-Type: "text/html; charset=UTF-8"
Date: "Sun, 10 Oct 2015 13:42:34 GMT"
Expires: "Sun, 10 Oct 2015 14:12:34 GMT"

The caching headers are the relevant parts here. Note that some directives may be sent by both sides of the conversation. First, the browser sent the Cache-Control: max-age=0 header because the user pressed the F5 key. This is an indication that the user wants to receive a response that is fresh. Normally, the request will not contain this header and will allow any intermediate cache to respond with a stale but still non-expired response.

In this case, the server we talked to responded with a gzipped HTML page encoded in UTF-8 and indicated that the response is okay to use for half an hour. It used both mechanisms available: the modern Cache-Control: max-age=1800 header and the very old Expires: Sun, 10 Oct 2015 14:12:34 GMT header.

The X-Cache: "EXPIRED" header is not a standard HTTP header, but it was also probably (there is no way to know for sure from the outside) emitted by Nginx. It may be an indication that there are, indeed, intermediate caching proxies between the client and the server, and one of them added this header for debugging purposes. The header may also show that the backend software uses some internal caching. Another possible source of this header is a debugging technique used to find problems in the Nginx cache configuration. The idea is to use the cache hit or miss status, which is available in one of the handy internal Nginx variables, as the value of an extra header, and then monitor that status from the client side. This is the code that adds such a header:

add_header X-Cache $upstream_cache_status;

Nginx has a special directive that transparently sets up both of the standard cache control headers; it is named expires.
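Once such a header is emitted, you can watch the cache status from any HTTP client. Here is a minimal sketch in Node.js (the URL is a placeholder; substitute a resource that is actually served through your Nginx cache):

// check-cache-status.js: a rough sketch, not taken from the book.
// It requests the same URL twice and prints the caching headers plus the
// X-Cache value, so you can see MISS/HIT/EXPIRED change between requests.
var https = require('https');

function check(url) {
  https.get(url, function (res) {
    console.log('Cache-Control:', res.headers['cache-control']);
    console.log('Expires:      ', res.headers['expires']);
    console.log('X-Cache:      ', res.headers['x-cache'] || '(header not present)');
    res.resume(); // we only care about the headers, so discard the body
  });
}

check('https://www.example.com/');   // likely a MISS on a cold cache
setTimeout(function () {
  check('https://www.example.com/'); // should be a HIT if caching works
}, 1000);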
This is a piece of the nginx.conf file using the expires directive:

location ~* \.(?:css|js)$ {
    expires 1y;
    add_header Cache-Control "public";
}

First, the pattern uses so-called noncapturing parentheses, a feature that first appeared in Perl regular expressions. The effect of this regexp is the same as that of a simpler \.(css|js)$ pattern, but the regular expression engine is specifically instructed not to create a variable containing the actual string matched inside the parentheses. This is a simple optimization. Then, the expires directive declares that the content of the css and js files will expire after a year of storage. The actual headers as received by the client will look like this:

Server: nginx/1.9.8 (Ubuntu)
Date: Fri, 11 Mar 2016 22:01:04 GMT
Content-Type: text/css
Last-Modified: Thu, 10 Mar 2016 05:45:39 GMT
Expires: Sat, 11 Mar 2017 22:01:04 GMT
Cache-Control: max-age=31536000

The last two lines contain the same information in wildly different forms. The Expires: header is exactly one year after the date in the Date: header, whereas Cache-Control: specifies the age in seconds so that the client does the date arithmetic itself.

The last directive in the provided configuration extract explicitly adds another Cache-Control: header with a value of public. What this means is that the content of the HTTP resource is not access-controlled and therefore may be cached not only for one particular user but also anywhere else. A simple and effective strategy that was used in offices to minimize consumed bandwidth is to have an office-wide caching proxy server. When one user requested a resource from a website on the Internet and that resource had a Cache-Control: public designation, the company cache server would store it to serve to other users on the office network. This may not be as popular today due to cheap bandwidth, but because history has a tendency to repeat itself, you need to know how and why Cache-Control: public works.

The Nginx expires directive is surprisingly expressive. It may take a number of different values:

off: Turns off the Nginx cache header logic. Nothing will be added, and more importantly, existing headers received from upstreams will not be modified.

epoch: An artificial value used to purge a stored resource from all caches by setting the Expires header to "1 January, 1970 00:00:01 GMT".

max: The opposite of the "epoch" value. The Expires header will be equal to "31 December 2037 23:59:59 GMT", and the Cache-Control max-age is set to 10 years. This basically means that the HTTP responses are guaranteed to never change, so clients are free to never request the same thing twice and may use their own stored values.

Specific time: An actual time value means an expiry deadline relative to the time of the respective request, for example, expires 10w;. A negative value for this directive will emit a special header, Cache-Control: no-cache.

"modified" specific time: If you add the keyword "modified" before the time value, then the expiration moment will be computed relative to the modification time of the file that is served.

"@" specific time: A time with an @ prefix specifies an absolute time-of-day expiry. This should be less than 24 hours, for example, expires @17h;.

Many web applications choose to emit the caching headers themselves, and this is a good thing. They have more information about which resources change often and which never change.
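As a sketch of what "emitting the caching headers yourself" can look like in application code (this is not from the book; it assumes an Express.js application, and the route, file path, and max-age are invented for illustration):

var express = require('express');
var app = express();

app.get('/assets/site.css', function (req, res) {
  // One year in seconds, mirroring the "expires 1y;" example above.
  res.set('Cache-Control', 'public, max-age=31536000');
  res.sendFile(__dirname + '/assets/site.css');
});

app.listen(8080, function () {
  console.log('Listening on port 8080');
});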
Tampering with the headers that you receive from the upstream may or may not be something you want to do. Sometimes, adding headers to a response while proxying it may produce a conflicting set of headers and therefore create unpredictable behavior. The static files that you serve with Nginx yourself should have the expires directive in place. However, the general advice about upstreams is to always examine the caching headers you get, and to refrain from overoptimizing by setting up a more aggressive caching policy.

Further resources on this subject:

Nginx service [article]
Fine-tune the NGINX Configuration [article]
Nginx Web Services: Configuration and Implementation [article]


Vue maintainers proposed, listened, and revised the RFC for hooks in Vue API

Bhagyashree R
28 Jun 2019
6 min read
The internet was ablaze when Evan You, creator of Vue, published an RFC to introduce a function-based component API earlier this month. This was followed by a huge discussion in the Vue community on whether such an API is really needed.

https://twitter.com/youyuxi/status/1137567675356291072

This proposal came after Evan You previewed an experimental Hooks API back in November at Vue Conf Toronto 2018.

Why Vue needs a function-based component API

Components help you to abstract your code into smaller pieces. This gives your web applications a better structure, makes the code more readable and understandable, and most importantly enables you to reuse logic across multiple components. According to the RFC, the component API in Vue 2.x has some drawbacks in terms of reusability. The three patterns generally used to achieve reusability in Vue are mixins, higher-order components (HOCs), and renderless components, and each of them comes with its share of drawbacks:

Mixins bring implicit dependencies into code, cause name clashes, and make your code harder to understand.
HOCs can often be verbose, involve lots of passing props and hoisting statics, and can cause name conflicts.
Renderless components require extra stateful component instances that come at the cost of performance.

The function-based component API aims to address all these drawbacks. Inspired by React Hooks, its objective is to provide developers a "clean and flexible way" to compose logic and share it between components. The team plans to achieve this by moving the logic code into a "composition function" and returning reactive state. Another motivation behind the proposed change is to provide better built-in TypeScript type inference support, as function-based APIs are naturally type-friendly. Also, code written with function-based APIs compresses better than object- or class-based code.

What do Vue developers think about this RFC?

The Vue community was a little taken aback by this proposal, which would essentially change the way they write Vue. They were concerned that it would take away the most desirable property of Vue, which is its simplicity. Vue's existing API made it easy to understand and get started with, and bringing a function-based API to Vue would complicate things in exchange for few advantages. Some argued that this change would make Vue just another React.

"Like a lot of others here, I chose Vue vs React for the simplicity and readability of code. The class-based API was easy to understand and pick up. If I wanted React, I would have just chosen React from the beginning. I get that there are some technical advantages to doing this, but Vue 3 is starting to really turn me off of staying with Vue going forward," a developer shared on a Reddit thread.

Developers were concerned that the time they have invested in learning Vue would go to waste as everything was about to change. A Vue developer commented on Reddit, "You learn to do something one way and then they change it up on you. Might as well just switch to react at this point." Many compared this scenario to Angular 1 to 2 or Python 2 to 3 and suggested switching to Svelte to avoid the mess.

Some, however, liked the syntax and are looking forward to playing around with the API. A developer shared, "But I read through, checked out the new (simpler) example, read Evan's arguments about logical task grouping, and on a second read with a more open mind, I actually kind of like the new syntax and am now looking forward to trying it out.
I'm glad they agreed to keep the object syntax around though."

How the Vue team responded

When the RFC was first published, it implied that the current API would be deprecated in a future major release. There was also a lot of confusion around the "compatibility" and "stable" builds. Many developers felt that the RFC was already "set in stone" from the way it was communicated; they felt that the core team had already decided to bring this API to Vue without community consultation. So, one of the reasons behind the confusion was how the change was communicated. The team acknowledged this and asked the community for suggestions on how to improve their communication.

https://twitter.com/N_Tepluhina/status/1142715703558103040

The core team clarified that the update will be additive and that there are no plans to remove the Object API in a future major release. Evan You, the creator of Vue, said in a thread, "feel free to stay with the current API for as long as you wish. As long as the community feels there's a need for the old API to stay, it will stay. The only one that can make the decision to switch to the new API is yourself."

He also addressed the concerns on a Hacker News thread:

There is a lot of FUD in this thread so we need to clarify a bit:
- This API is purely additive to 2.x and doesn't break anything.
- 3.0 will have a standard build which adds this API on top of 2.x API, and an opt-in "lean build" which drops a number of 2.x APIs for a smaller and faster runtime.
- This is an open RFC, which means it's not set in stone. The whole point of having an RFC is so that users can voice their opinions. It's not like we are shipping this tomorrow.

After listening to the various perspectives shared by developers, the core team revised the RFC accordingly, finally putting everybody at ease. Guillaume Chau, a member of the Vue core team, put out a clear and concise plan of action on Twitter, to which people responded positively. The plan reassured users that the Object API will not be deprecated until the community stops using it, and that the proposed API will first be offered as a standalone plugin for Vue 2.x.

https://twitter.com/Akryum/status/1143114880960126976

Some developers have also started to try out the new API:

https://twitter.com/igor_randj/status/1143302939496370177
https://twitter.com/cmsalvado/status/1143230023089786880

Closing thoughts

Open source programmers put their time and effort into building software that helps the community, and an RFC (request for comments) is a way for the community to get involved in building high-quality software at scale. Through an RFC, you can share constructive feedback on why a change is or is not necessary, and all of this can be done in a respectful way. This is a good example of how an RFC should really work: publishing an RFC, discussing it with the community, listening to the community, and deciding collectively what to do next. Despite some hiccups in communication, the Vue core team did a good job of engaging with the community to develop the roadmap for the function-based component API in Vue.

Read the RFC for function-based component API for more details.

Vue 2.6 is now out with a new unified syntax for slots, and more
Learning Vue in 2019 with Anthony Gore's developer knowledge map
Evan You shares Vue 3.0 updates at VueConf Toronto 2018
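For readers who have not looked at the proposal itself, here is a rough sketch of the component style it describes. This is not code from the RFC; it uses the names that later shipped in the Composition API (ref, computed, onMounted), since the exact function names changed between revisions of the proposal.

import { ref, computed, onMounted } from 'vue'

export default {
  setup() {
    // Reactive state lives in the composition function instead of data()
    const count = ref(0)
    const double = computed(() => count.value * 2)

    // Logic that previously needed a mixin or HOC can be extracted into a
    // plain function and reused across components
    function increment() {
      count.value++
    }

    onMounted(() => {
      console.log('mounted, count is', count.value)
    })

    // Whatever is returned here becomes available to the template
    return { count, double, increment }
  }
}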


Fine Tune Your Web Application by Profiling and Automation

Packt
07 Jun 2016
17 min read
In this article, James Singleton, author of the book ASP.NET Core 1.0 High Performance, sheds some light on how to improve the performance of your web application by profiling and testing it. We will cover writing automated tests to monitor performance, and adding these to a Continuous Integration (CI) and deployment system so that you are constantly checking for regressions.

Profiling and measurement

It's impossible to overstate how important profiling, measuring, and analyzing reliable evidence is, especially when dealing with web application performance. Maybe you have used Glimpse or MiniProfiler to provide insights into the running of your web application, or perhaps you are familiar with the Visual Studio diagnostics tools and the Application Insights Software Development Kit (SDK).

There's another tool worth mentioning, the Prefix profiler, which you can get at prefix.io. Prefix is a free, web-based ASP.NET profiler that supports ASP.NET Core. However, it doesn't yet support .NET Core (although this is planned), so you'll need to run ASP.NET Core on .NET Framework 4.6 for now. There's a live demo on their website (at demo.prefix.io) if you want to quickly check it out.

You may also want to look at the PerfView performance analysis tool from Microsoft, which is used in the development of .NET Core. You can download PerfView from https://www.microsoft.com/en-us/download/details.aspx?id=28567 as a ZIP file that you can just extract and run. It is useful for analyzing the memory of .NET applications, among other things. You can use PerfView for many debugging activities, for example, to snapshot the heap or force GC runs. We don't have space for a detailed walkthrough here, but the included instructions are good, and there are blogs on MSDN with guides and many video tutorials on Channel 9 at channel9.msdn.com/Series/PerfView-Tutorial if you need more information. Sysinternals tools (technet.microsoft.com/sysinternals) can also be helpful, but as they are not focused on .NET, they are less useful in this context.

While tools such as these are great, what would be even better is building performance monitoring into your development workflow. Automate everything that you can and make performance checks transparent, routine, and run by default. Manual processes are bad because steps can be skipped and errors can easily be made. You wouldn't dream of developing software by e-mailing files around or editing code directly on a production server, so why not automate your performance tests too?

Change control processes exist to ensure consistency and reduce errors. This is why using a Source Control Management (SCM) system, such as git or Team Foundation Server (TFS), is essential. It's also extremely useful to have a build server and perform Continuous Integration (CI) or even fully automated deployments. If the code that is deployed in production differs from what you have on your local workstation, then you have very little chance of success. This is one of the reasons why SQL Stored Procedures (SPs/sprocs) are difficult to work with, at least without rigorous version control. It's far too easy to modify an old version of an SP on a development database, accidentally revert a bug fix, and end up with a regression. If you must use sprocs, then you will need a versioning system such as ReadyRoll (which Redgate has now acquired).
If you practice Continuous Delivery (CD), then you'll have a build server, such as JetBrains TeamCity, ThoughtWorks GoCD, or CruiseControl.NET, or a cloud service, such as AppVeyor. Perhaps you even automate your deployments using a tool such as Octopus Deploy, and have your own internal NuGet feeds using software such as The Motley Fool's Klondike or a cloud service such as MyGet (which also supports npm, bower, and VSIX packages). Bypassing processes and doing things manually will cause problems, even if you follow a script. If it can be automated, then it probably should be, and this includes testing.

Automated testing

As previously mentioned, the key to improving almost everything is automation. Tests that are only run manually on developer workstations add very little value. It should of course be possible to run the tests on desktops, but this shouldn't be the official result, because there's no guarantee that they will pass on a server (where correct functioning matters more). Although automation usually occurs on servers, it can be useful to automate tests running on developer workstations too. One way of doing this in Visual Studio is to use a plugin such as NCrunch. This runs your tests as you work, which can be very useful if you practice Test-Driven Development (TDD) and write your tests before your implementations. You can read more about NCrunch and see the pricing at ncrunch.net, or there's a similar open source project at continuoustests.com.

One way of enforcing testing is to use gated check-ins in TFS, but this can be a little draconian, and if you use an SCM like git, then it's easier to work on branches and simply block merges until all of the tests pass. You want to encourage developers to check in early and often, because this makes merges easier. Therefore, it's a bad idea to have features in progress sitting on workstations for a long time (generally no longer than a day).

Continuous integration

CI systems automatically build and test all of your branches, and they feed this information back to your version control system. For example, using the GitHub API, you can block the merging of pull requests until the build server has reported success for the merge result. Both Bitbucket and GitLab offer free CI systems called pipelines, so you may not need any extra systems in addition to the one you use for source control, because everything is in one place. GitLab also offers an integrated Docker container registry, and there is an open source version that you can install locally. Docker is well supported by .NET Core and the new version of Visual Studio. You can do something similar with Visual Studio Team Services for CI builds and unit testing. Visual Studio also has git services built into it.

This process works well for unit testing because unit tests must be quick so that you get feedback early. Shortening the iteration cycle is a good way of increasing productivity, and you'll want the lag to be as small as possible. However, running tests on each build isn't suitable for all types of testing, because not all tests can be quick. In this case, you'll need an additional strategy so as not to slow down your feedback loop. There are many unit testing frameworks available for .NET, for example, NUnit, xUnit, and MSTest (Microsoft's unit test framework), along with multiple graphical ways of running tests locally, such as the Visual Studio Test Explorer and the ReSharper plugin.
People have their favorites, but it doesn't really matter what you choose, because most CI systems will support all of them.

Slow testing

Some tests are slow, but even if each test is fast, they can easily add up to a lengthy run if you have a lot of them. This is especially true if they can't be parallelized and need to be run in sequence. Therefore, you should always aim to have each test stand on its own, without any dependencies on others. It's good practice to divide your tests into rings of importance so that you can at least run a subset of the most crucial on every CI build. However, if you have a large test suite or some tests that are unavoidably slow, then you may choose to run these only once a day (perhaps overnight) or every week (maybe over the weekend).

Some testing is simply slow by nature, and performance testing can often fall into this category, for example, load testing or User Interface (UI) testing. These are usually classed as integration testing, rather than unit testing, because they require your code to be deployed to an environment for testing, and the tests can't simply exercise the binaries. To make use of such automated testing, you will need an automated deployment system in addition to your CI system. If you have enough confidence in your test system, then you can even have live deployments happen automatically. This works well if you also use feature switching to control the rollout of new features.

Realistic environments

Using a test environment that is as close to production (or as live-like) as possible is a good step toward ensuring reliable results. You can try to use a smaller set of servers and then scale your results up to get an estimate of live performance, but this assumes that you have an intimate knowledge of how your application scales and which hardware constraints will be the bottlenecks.

A better option is to use your live environment, or rather what will become your production stack. You first create a staging environment that is identical to live, then you deploy your code to it and run your full test suite, including a comprehensive performance test, ensuring that it behaves correctly. Once you are happy, you simply swap staging and production, perhaps using DNS or Azure staging slots. Your old live environment now either becomes your test environment, or, if you use immutable cloud instances, you can simply terminate it and spin up a new staging system. This concept is known as blue-green deployment. You don't necessarily have to move all users across at once in a big bang; you can move a few over first to test whether everything is correct.

Web UI testing tools

One of the most popular web testing tools is Selenium, which allows you to easily write tests and automate web browsers using WebDriver. Selenium is useful for many other tasks apart from testing, and you can read more about it at docs.seleniumhq.org. WebDriver is a protocol for remote-controlling web browsers, and you can read about it at w3c.github.io/webdriver/webdriver-spec.html.

Selenium uses real browsers, the same versions your users will access your web application with. This makes it excellent for getting representative results, but it can cause issues if it runs from the command line in an unattended fashion. For example, you may find your test server's memory full of dead browser processes which have timed out.
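As a concrete illustration of driving a real browser through WebDriver from Node.js, here is a minimal sketch using the selenium-webdriver npm package (the URL is a placeholder, and it assumes a suitable driver such as chromedriver is installed and on the PATH):

var webdriver = require('selenium-webdriver');

// Build a driver for a real Chrome browser; swap 'chrome' for 'firefox', etc.
var driver = new webdriver.Builder()
  .forBrowser('chrome')
  .build();

driver.get('https://staging.example.com/')
  .then(function () {
    return driver.getTitle();
  })
  .then(function (title) {
    // In a real test you would assert on the title instead of logging it.
    console.log('Page title was:', title);
    return driver.quit();
  });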
You may find it easier to use a dedicated headless test browser, which, while not exactly the same as what your users will see, is more suitable for automation. The best approach is of course to use a combination of both, perhaps running headless tests first and then running the same tests on real browsers with WebDriver.

One of the most well-known headless test browsers is PhantomJS. This is based on the WebKit engine, so it should give similar results to Chrome and Safari. PhantomJS is useful for many things apart from testing, such as capturing screenshots, and many different testing frameworks can drive it. As the name suggests, JavaScript can control PhantomJS, and you can read more about it at phantomjs.org.

WebKit is an open source engine for web browsers, which was originally part of the KDE Linux desktop environment. It is mainly used in Apple's Safari browser, but a fork called Blink is used in Google Chrome, Chromium, and Opera. You can read more at webkit.org.

Other automatable testing browsers based on different engines are available, but they have some limitations. For example, SlimerJS (slimerjs.org) is based on the Gecko engine used by Firefox, but it is not fully headless.

You probably want to use a higher-level testing utility rather than scripting browser engines directly. One such utility that provides many useful abstractions is CasperJS (casperjs.org), which supports running on both PhantomJS and SlimerJS. Another library is Capybara, which allows you to easily simulate user interactions in Ruby. It supports Selenium, WebKit, Rack, and PhantomJS (via Poltergeist), although it's more suitable for Rails apps. You can read more at jnicklas.github.io/capybara.

There is also TrifleJS (triflejs.org), which uses the .NET WebBrowser class (the Internet Explorer Trident engine), but this is a work in progress. Additionally, there's Watir (watir.com), which is a set of Ruby libraries that target Internet Explorer and WebDriver. However, neither has been updated in a while, and IE has changed a lot recently. Microsoft Edge (codenamed Spartan) is the new version of IE, and the Trident engine has been forked to EdgeHTML. The JavaScript engine (Chakra) has been open sourced as ChakraCore (github.com/Microsoft/ChakraCore).

It shouldn't matter too much which browser engine you use, and PhantomJS will work fine as a first pass for automated tests. You can always test with real browsers after using a headless one, perhaps with Selenium or with PhantomJS using WebDriver. When we refer to browser engines (WebKit/Blink, Gecko, and Trident/EdgeHTML), we generally mean only the rendering and layout engine, not the JavaScript engine (SFX/Nitro/FTL/B3, V8, SpiderMonkey, and Chakra/ChakraCore).

You'll probably still want to use a utility such as CasperJS to make writing tests easier, and you'll likely need a test framework, such as Jasmine (jasmine.github.io) or QUnit (qunitjs.com), too. You can also use a test runner that supports both Jasmine and QUnit, such as Chutzpah (mmanela.github.io/chutzpah). You can integrate your automated tests with many different CI systems, for example, Jenkins or JetBrains TeamCity. If you prefer a cloud-hosted option, then there's Travis CI (travis-ci.org) and AppVeyor (appveyor.com), which is also suitable for building .NET apps. You may prefer to run your integration and UI tests from your deployment system, for example, to verify a successful deployment in Octopus Deploy.
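To make the CasperJS style mentioned above concrete, here is a small sketch of a headless smoke test (the URL, expected title, and assertion count are invented for illustration; it is not an official example):

casper.test.begin('Home page smoke test', 2, function suite(test) {
  casper.start('https://staging.example.com/', function () {
    // Fails the run if the deployment is broken or the page title changed.
    test.assertHttpStatus(200, 'home page responds with 200');
    test.assertTitle('Example Site', 'home page title is correct');
  });

  casper.run(function () {
    test.done();
  });
});

Running it with casperjs test smoke.js gives an exit code that a CI step can use to pass or fail the build.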
There are also dedicated, cloud-based, web-application UI testing services available, such as BrowserStack (browserstack.com).

Automating UI performance tests

Automated UI tests are clearly great for catching functional regressions, but they are also useful for testing performance. You have programmatic access to the same information provided by the network inspector in the browser developer tools. You can integrate the YSlow (yslow.org) performance analyzer with PhantomJS, enabling your CI system to check for common web performance mistakes on every commit. YSlow came out of Yahoo!, and it provides rules used to identify bad practices which can slow down web applications for users. It's a similar idea to Google's PageSpeed Insights service (which can be automated via its API). However, YSlow is pretty old, and things have moved on in web development recently, for example, HTTP/2. A modern alternative is "the coach" from sitespeed.io, and you can read more at github.com/sitespeedio/coach. You should check out their other open source tools too, such as the dashboard at dashboard.sitespeed.io, which uses Graphite and Grafana.

You can also export the network results (in the industry-standard HAR format) and analyze them however you like, for example, visualizing them graphically in waterfall format, as you might do manually with your browser developer tools. The HTTP Archive (HAR) format is a standard way of representing the content of monitored network data so that it can be exported to other software. You can copy or save as HAR in some browser developer tools by right-clicking on a network request.

DevOps

When using automation and techniques such as feature switching, it is essential to have a good view of your environments so that you know the utilization of all the hardware. Good tooling is important to perform this monitoring, and you want to be able to easily see the vital statistics of every server. This will consist of at least the CPU, memory, and disk space consumption, but it may include more, and you will want alarms set up to alert you if any of these stray outside allowed bands.

The practice of DevOps is the culmination of all of the automation that we covered previously, with development, operations, and quality assurance testing teams all collaborating. The only missing pieces left now are provisioning and configuring infrastructure and then monitoring it while in use. Although DevOps is a culture, there is plenty of tooling that can help.

DevOps tooling

One of the primary themes of DevOps tooling is defining infrastructure as code. The idea is that you shouldn't manually perform a task, such as setting up a server, when you can create software to do it for you. You can then reuse these provisioning scripts, which will not only save you time but will also ensure that all of the machines are consistent and free of mistakes or missed steps.

Provisioning

There are many systems available to commission and configure new machines. Some popular configuration management automation tools are Ansible (ansible.com), Chef (chef.io), and Puppet (puppet.com). Not all of these tools work great on Windows servers, partly because Linux is easier to automate. However, you can run ASP.NET Core on Linux and still develop on Windows using Visual Studio, while testing in a VM. Developing for a VM is a great idea because it solves the problems in setting up environments and issues where it "works on my machine" but not in production. Vagrant (vagrantup.com) is a great command line tool to manage developer VMs.
It allows you to easily create, spin up, and share developer environments. The successor to Vagrant, Otto (ottoproject.io), takes this a step further and abstracts deployment too. Therefore, you can push to multiple cloud providers without worrying about the intricacies of CloudFormation, OpsWorks, or anything else. If you create your infrastructure as code, then your scripts can be versioned and tested, just like your application code. We'll stop before we get too far off-topic, but the point is that if you have reliable environments, which you can easily verify, instantiate, and perform testing on, then CI is a lot easier.

Monitoring

Monitoring is essential, especially for web applications, and there are many tools available to help with it. A popular open source infrastructure monitoring system is Nagios (nagios.org). Another, more modern, open source alerting and metrics tool is Prometheus (prometheus.io). If you use a cloud platform, then there will be monitoring built in, for example, AWS CloudWatch or Azure Diagnostics. There are also cloud services to directly monitor your website, such as Pingdom (pingdom.com), UptimeRobot (uptimerobot.com), Datadog (datadoghq.com), and PagerDuty (pagerduty.com).

You probably already have a system in place to measure availability, but you can also use the same systems to monitor performance. This is not only helpful to ensure a responsive user experience, but it can also provide early warning signs that a failure is imminent. If you are proactive and take preventative action, then you can save yourself a lot of trouble reactively fighting fires. It helps to consider application support requirements at design time. Development, testing, and operations aren't competing disciplines, and you will succeed more often if you work as one team rather than simply throwing an application over the fence and saying it "worked in test, ops problem now".

Summary

In this article, we saw how we can integrate automated testing into a CI system in order to monitor for performance regressions. We also learned some strategies to roll out changes and ensure that tests accurately reflect real life. We also briefly covered some options for DevOps practices and cloud-hosting providers, which together make continuous performance testing much easier.

Further resources on this subject:

Designing your very own ASP.NET MVC Application [article]
Creating a NHibernate session to access database within ASP.NET [article]
Working With ASP.NET DataList Control [article]


Npm Inc. co-founder and Chief data officer quits, leaving the community to question the stability of the JavaScript Registry

Fatema Patrawala
22 Jul 2019
6 min read
On Thursday, The Register reported that Laurie Voss, the co-founder and chief data officer of the JavaScript package registry NPM Inc, has left the company. Voss's last day in office was 1st July, while he officially announced the news on Thursday. Voss joined NPM in January 2014 and decided to leave the company in early May this year.

NPM has seen its share of unrest in the past few months. In March, five NPM employees were fired from the company in an unprofessional and unethical way. Later, three of those employees were revealed to have been involved in unionization and filed complaints against NPM Inc with the National Labor Relations Board (NLRB). Earlier this month, NPM Inc, at the third trial, settled the labor claims brought by these three former staffers through the NLRB. Voss's resignation is the third in line, after Rebecca Turner, former core contributor, who resigned in March, and Kat Marchan, former CLI and community architect, who resigned from NPM early this month.

Voss writes on his blog, "I joined npm in January of 2014 as co-founder, when it was just some ideals and a handful of servers that were down as often as they were up. In the following five and a half years Registry traffic has grown over 26,000%, and worldwide users from about 1 million back then to more than 11 million today. One of our goals when founding npm Inc. was to make it possible for the Registry to run forever, and I believe we have achieved that goal. While I am parting ways with npm, I look forward to seeing my friends and colleagues continue to grow and change the JavaScript ecosystem for the better."

Voss also told The Register that he supported unions: "As far as the labor dispute goes, I will say that I have always supported unions, I think they're great, and at no point in my time at NPM did anybody come to me proposing a union," he said. "If they had, I would have been in favor of it. The whole thing was a total surprise to me."

The Register team spoke to one of the former staffers of NPM, who said employees tend not to talk to management for fear of retaliation, and that Voss seemed uncomfortable defending the company's recent actions and felt powerless to effect change.

In his post, Voss is optimistic about NPM's business areas: "Our paid products, npm Orgs and npm Enterprise, have tens of thousands of happy users and the revenue from those sustains our core operations." However, Business Insider reports that a recent NPM Inc funding round raised only enough to continue operating until early 2020.

https://twitter.com/coderbyheart/status/1152453087745007616

A big question on everyone's mind currently is the stability of the public Node.js registry. Most users in the JavaScript community do not have a fallback in place. While the community views Voss's resignation with appreciation for his accomplishments, some are disappointed that he could not raise his voice against these odds and had to quit.

"Nobody outside of the company, and not everyone within it, fully understands how much Laurie was the brains and the conscience of NPM," Jonathan Cowperthwait, former VP of marketing at NPM Inc, told The Register.

CJ Silverio, a principal engineer at Eaze who served as NPM Inc's CTO, said that it's good that Voss is out, but she wasn't sure whether his absence would matter much to the day-to-day operations of NPM Inc. Silverio was fired from NPM Inc late last year, shortly after CEO Bryan Bogensberger's arrival.
"Bogensberger marginalized him almost immediately to get him out of the way, so the company itself probably won't notice the departure," she said. "What should affect fundraising is the massive brain drain the company has experienced, with the entire CLI team now gone, and the registry team steadily departing. At some point they'll have lost enough institutional knowledge quickly enough that even good new hires will struggle to figure out how to cope."

Silverio also mentioned that she had heard rumors of eliminating the public registry while continuing only with the paid enterprise service, which would be like killing their own competitive advantage. She says that if the public registry disappears, there are alternative projects such as Entropic, the one spearheaded by Silverio and a fellow developer, Chris Dickinson. Entropic is available under an open source Apache 2.0 license. Silverio says, "You can depend on packages from any other Entropic instance, and your home instance will mirror all your dependencies for you so you remain self-sufficient." She added that the software will mirror any packages installed by a legacy package manager, which is to say npm. As a result, the more developers use Entropic, the less they'll need NPM Inc's platform to provide a list of available packages.

Voss feels the scale of npm is three times bigger than that of any other registry, and it boasts an extremely fast growth rate of approximately 8% month on month. "Creating a company to manage an open source commons creates some tensions and challenges is not a perfect solution, but it is better than any other solution I can think of, and none of the alternatives proposed have struck me as better or even close to equally good," he said.

With NPM Inc's sustainability at stake, the JavaScript community on Hacker News discussed alternatives in case the public registry comes to an end. One of the comments read, "If it's true that they want to kill the public registry, that means I may need to seriously investigate Entropic as an alternative. I almost feel like migrating away from the normal registry is an ethical issue now. What percentage of popular packages are available in Entropic? If someone else's repo is not in there, can I add it for them?" Another user responded, "The github registry may be another reasonable alternative... not to mention linking git hashes directly, but that has other issues."

Other than Entropic, another alternative discussed is nixfromnpm, a tool that translates NPM packages into Nix expressions. nixfromnpm is developed by Allen Nelson and two other contributors from Chicago.

Surprise NPM layoffs raise questions about the company culture
Is the Npm 6.9.1 bug a symptom of the organization's cultural problems?
Npm Inc, after a third try, settles former employee claims, who were fired for being pro-union, The Register reports


Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and update that page dynamically based on the user's interaction with the app. These SPAs use AJAX and HTML5 to create fluid and responsive web applications without constant page reloads. In this tutorial, we will show a step-by-step approach to installing the extremely powerful Vue-router plugin to build Single Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Similar to how you add Vue and Vuex to your applications, you can either directly include the library from unpkg, or head to the following URL and download a local copy for yourself: https://unpkg.com/Vue-router. Add the JavaScript to a new HTML document, along with Vue and your application's JavaScript. Create an application container element, your view, as well. In the following example, I have saved the Vue-router JavaScript file as router.js.

Initialize VueRouter in your JavaScript application

We are now ready to add VueRouter and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component, but instead create JavaScript objects with the same properties as we would for a Vue component.

For this first exercise, we are going to create two pages: a Home page and an About page. Found on most websites, these should help give you context as to what is loading where and when. Create two templates in your HTML page for us to use. Don't forget to encapsulate all your content in one "root" element (represented here by the wrapping div tags). You also need to ensure you declare the templates before your application JavaScript is loaded. We've created a Home page template, with the id of homepage, and an About page, containing some placeholder text from lorem ipsum, with the id of about. Create two components in your JavaScript which reference these two templates.

The next step is to give the router a placeholder to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render. It allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder. Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure.

The next stage is to initialize the Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance, similar to how we added Vuex in the previous section. We now need to tell the router about our routes (or paths), and which component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value. This array needs to include an object for each route. (A consolidated sketch of this setup appears at the end of this article.) Each route object contains a path and component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components on a first-come-first-served basis. For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has the beginning slash; this tells the router it is a root page and not a sub-page.

Press save and view your app in the browser. You should be presented with the content of the Home template component. If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component.

Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance. With this, you should now be familiar with Vue-router and how to initialize it for creating new routes and paths for different web pages on your website.

Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website.

What is React.js and how does it work?
Why has Vue.js become so popular?
Building a real-time dashboard with Meteor and Vue.js
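Since the book's original code listings are not reproduced above, here is a rough consolidated sketch of the setup this article describes. It is not the book's exact code: the templates are inlined as strings rather than referenced by id, and the element id, paths, and content are assumptions.

// Components created as plain objects instead of Vue.component()
var Home = { template: '<div><h1>Home</h1></div>' };
var About = { template: '<div><h1>About</h1><p>Lorem ipsum dolor sit amet.</p></div>' };

var router = new VueRouter({
  // mode: 'history', // optional: use the history API for prettier URLs
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About }
  ]
});

new Vue({
  el: '#app', // the #app container should include a <router-view></router-view> element
  router: router
});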


W3C (World Wide Web Consortium) declares WebAssembly 1.0 as an official web standard

Sugandha Lahoti
09 Dec 2019
3 min read
Last Thursday, the World Wide Web Consortium declared WebAssembly 1.0 an official W3C Recommendation. With this announcement, WebAssembly becomes the fourth language to run natively in browsers, following HTML, CSS, and JavaScript.

"The arrival of Web Assembly expands the range of applications that can be achieved by simply using Open Web Platform technologies. In a world where machine learning and Artificial Intelligence become more and more common, it is important to enabling high-performance applications on the Web, without compromising the safety of the users," declared Philippe Le Hégaret, W3C Project Lead, in the official press release.

WebAssembly has been the talk of the town for providing a safe, portable, low-level code format designed for efficient execution and compact representation. According to the W3C consortium, WebAssembly enables the Web platform to perform a more efficient execution of computationally intensive algorithms, which in turn makes it practical to deliver whole new classes of user experience on the Web and elsewhere. Because it is a platform-independent execution environment, it can also be used on any other computer platform.

W3C has published three WebAssembly specifications as W3C Recommendations:

The WebAssembly Core Specification defines a low-level virtual machine that closely mimics the functionality of many microprocessors upon which it is run.
The WebAssembly Web API defines a Promise-based interface for requesting and executing a .wasm resource.
The WebAssembly JavaScript Interface provides a JavaScript API for invoking and passing parameters to WebAssembly functions.

W3C is also working on a range of features for future versions of the standard. These include:

Threading: Threads provide the benefits of shared-memory multi-threading and atomic memory accesses.
Fixed-width SIMD: Vector operations that execute loops in parallel.
Reference types: Allow WebAssembly code to directly reference host objects.
Tail calls: Enable calling functions without using extra stack space.
ECMAScript module integration: Interact with JavaScript by loading WebAssembly executables as ES6 modules.

There are many other longer-term projects that W3C is working on, many of them aimed at improving the usability and availability of WebAssembly, for example, garbage collection, debugging interfaces, and the WebAssembly System Interface (WASI).

In other news, Mozilla recently partnered with Fastly, Intel, and Red Hat to form the Bytecode Alliance to build a secure-by-default future for WebAssembly and to take it beyond the browser.

Introducing Woz, a Progressive WebAssembly Application (PWA + WebAssembly) generator written entirely in Rust.
You can now use WebAssembly from .NET with Wasmtime!
4 predictions by Richard Feldman on the future of the web: TypeScript, WebAssembly, and more.
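As a small illustration of the JavaScript interface mentioned above, here is a sketch of loading and calling a module from JavaScript; the file name and the exported add function are assumptions made for the example, not part of the specification.

// Fetch a .wasm binary, compile and instantiate it, then call an export.
fetch('add.wasm')
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (bytes) { return WebAssembly.instantiate(bytes, {}); })
  .then(function (result) {
    // Assumes the module exports an add(a, b) function.
    console.log(result.instance.exports.add(2, 3)); // 5
  });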

CORS in Node.js

Packt
20 Jun 2017
14 min read
In this article by Randall Goya and Rajesh Gunasundaram, the authors of the book CORS Essentials, we look at Node.js, a cross-platform JavaScript runtime environment that executes JavaScript code on the server side. This enables a unified language across web application development: JavaScript becomes the single language that runs both on the client side and on the server side.

In this article we will learn that:

Node.js is a JavaScript platform for developing server-side web applications.
Node.js can provide the web server for other frameworks, including Express.js, AngularJS, Backbone.js, Ember.js, and others.
Some other JavaScript frameworks, such as ReactJS, Ember.js, and Socket.IO, may also use Node.js as the web server.
Isomorphic JavaScript can add server-side functionality for client-side frameworks.

JavaScript frameworks are evolving rapidly. This article reviews some of the current techniques, and syntax specific to some frameworks. Make sure to check the documentation for the project to discover the latest techniques. Understanding CORS concepts, you may create your own solution, because JavaScript is a loosely structured language. All the examples are based on the fundamentals of CORS: allowed origin(s), methods, and headers such as Content-Type, plus preflight where it is required according to the CORS specification.

JavaScript frameworks are very popular

JavaScript is sometimes called the lingua franca of the Internet, because it is cross-platform and supported by many devices. It is also a loosely structured language, which makes it possible to craft solutions for many types of applications. Sometimes an entire application is built in JavaScript. Frequently, JavaScript provides a client-side front end for applications built with Symfony, Content Management Systems such as Drupal, and other back-end frameworks. Node.js is server-side JavaScript and provides a web server as an alternative to Apache, IIS, Nginx, and other traditional web servers.

Introduction to Node.js

Node.js is an open source, cross-platform library that enables the development of server-side web applications. Applications written in JavaScript on Node.js can run on many operating systems, including OS X, Microsoft Windows, Linux, and many others. Node.js provides non-blocking I/O and an event-driven architecture designed to optimize an application's performance and scalability for real-time web applications.

The biggest difference between PHP and Node.js is that PHP is a blocking language, where commands execute only after the previous command has completed, while Node.js is a non-blocking language, where commands execute in parallel and use callbacks to signal completion. Node.js can move files, payloads from services, and data asynchronously, without waiting for some command to complete, which improves performance. Most JS frameworks that work with Node.js use the concept of routes to manage pages and other parts of the application. Each route may have its own set of configurations; for example, CORS may be enabled only for a specific page or route.

Node.js loads modules for extending functionality via the npm package manager. The developer selects which packages to load with npm, which reduces bloat. The developer community creates a large number of npm packages for specific functions. JXcore is a fork of Node.js targeting mobile devices and IoT (Internet of Things) devices.
JXcore can use both Google V8 and Mozilla SpiderMonkey as its JavaScript engine, and it can run Node applications on iOS devices using Mozilla SpiderMonkey. MEAN is a popular JavaScript software stack with MongoDB (a NoSQL database), Express.js, and AngularJS, all of which run on a Node.js server.

JavaScript frameworks that work with Node.js

Node.js provides a server for other popular JS frameworks, including AngularJS, Express.js, Backbone.js, Socket.IO, and Connect.js. ReactJS was designed to run in the client browser, but it is often combined with a Node.js server. As we shall see in the following descriptions, these frameworks are not necessarily exclusive and are often combined in applications.

Express.js is a Node.js server framework

Express.js is a Node.js web application server framework, designed for building single-page, multi-page, and hybrid web applications. It is considered the "standard" server framework for Node.js. The package is installed with the command npm install express --save.

AngularJS extends static HTML with dynamic views

HTML was designed for static content, not for dynamic views. AngularJS extends HTML syntax with custom tag attributes. It provides model-view-controller (MVC) and model-view-viewmodel (MVVM) architectures in a front-end, client-side framework. AngularJS is often combined with a Node.js server and other JS frameworks. AngularJS runs client-side and Express.js runs on the server; therefore, Express.js is considered more secure for functions such as validating user input, which can be tampered with client-side. AngularJS applications can use the Express.js framework to connect to databases, for example in the MEAN stack.

Connect.js provides middleware for Node.js requests

Connect.js is a JavaScript framework providing middleware to handle requests in Node.js applications. Connect.js provides middleware to handle Express.js and cookie sessions, to provide parsers for the HTML body and cookies, to create vhosts (virtual hosts) and error handlers, and to override methods.

Backbone.js often uses a Node.js server

Backbone.js is a JavaScript framework with a RESTful JSON interface, based on the model-view-presenter (MVP) application design. It is designed for developing single-page web applications and for keeping various parts of web applications (for example, multiple clients and the server) synchronized. Backbone depends on Underscore.js, plus jQuery for use of all the available features. Backbone often uses a Node.js server, for example, to connect to data storage.

ReactJS handles user interfaces

ReactJS is a JavaScript library for creating user interfaces while addressing challenges encountered in developing single-page applications where data changes over time. React handles the user interface in a model-view-controller (MVC) architecture. ReactJS typically runs client-side and can be combined with AngularJS. Although ReactJS was designed to run client-side, it can also be used server-side in conjunction with Node.js. PayPal and Netflix leverage the server-side rendering of ReactJS, known as Isomorphic ReactJS. There are React-based add-ons that take care of the server-side parts of a web application.

Socket.IO uses WebSockets for realtime event-driven applications

Socket.IO is a JavaScript library for event-driven web applications using the WebSocket protocol, with realtime, bi-directional communication between web clients and servers. It has two parts: a client-side library that runs in the browser, and a server-side library for Node.js.
Although it can be used simply as a wrapper for WebSocket, it provides many more features, including broadcasting to multiple sockets, storing data associated with each client, and asynchronous I/O. Socket.IO provides better security than WebSocket alone, since allowed domains must be specified for its server.

Ember.js can use Node.js

Ember is another popular JavaScript framework with routing that uses Handlebars (Mustache-style) templates. It can run on a Node.js server, or also with Express.js. Ember can also be combined with Rack, the Ruby web server interface used by Ruby on Rails (RoR). Ember Data is a library for modeling data in Ember.js applications.

CORS in Express.js

The following code adds the Access-Control-Allow-Origin and Access-Control-Allow-Headers headers globally to all requests on all routes in an Express.js application. A route is a path in the Express.js application, for example /user for a user page. app.all sets the configuration for all routes in the application. Specific HTTP requests such as GET or POST are handled by app.get and app.post.

app.all('*', function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
});

app.get('/', function(req, res, next) {
  // Handle GET for this route
});

app.post('/', function(req, res, next) {
  // Handle the POST for this route
});

For better security, consider limiting the allowed origin to a single domain, or adding some additional code to validate or limit the domain(s) that are allowed. Also, consider sending the headers only for routes that require CORS by replacing app.all with a more specific route and method. The following code only sends the CORS headers on a GET request on the route /user, and only allows requests from http://www.localdomain.com.

app.get('/user', function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "http://www.localdomain.com");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
  next();
});

Since this is JavaScript code, you may dynamically manage the values of routes, methods, and domains via variables, instead of hard-coding the values.

CORS npm for Express.js using Connect.js middleware

Connect.js provides middleware to handle requests in Express.js. You can use the Node Package Manager (npm) to install a package that enables CORS in Express.js with Connect.js:

npm install cors

The package offers flexible options, which should be familiar from the CORS specification, including using credentials and preflight. It provides dynamic ways to validate an origin domain using a function or a regular expression, and handler functions to process preflight.

Configuration options for CORS npm

origin: Configures the Access-Control-Allow-Origin CORS header with a string containing the full URL and protocol making the request, for example http://localdomain.com. Possible values for origin:

- The default value TRUE uses req.header('Origin') to determine the origin, and CORS is enabled. When set to FALSE, CORS is disabled.
- It can be set to a function, with the request origin as the first parameter and a callback function as the second parameter.
- It can be a regular expression, for example /localdomain.com$/, or an array of regular expressions and/or strings to match.

methods: Sets the Access-Control-Allow-Methods CORS header. Possible values for methods:

- A comma-delimited string of HTTP methods, for example 'GET,POST'
- An array of HTTP methods, for example ['GET', 'PUT', 'POST']

allowedHeaders: Sets the Access-Control-Allow-Headers CORS header.
Possible values for allowedHeaders:

- A comma-delimited string of allowed headers, for example 'Content-Type, Authorization'
- An array of allowed headers, for example ['Content-Type', 'Authorization']
- If unspecified, it defaults to the value specified in the request's Access-Control-Request-Headers header

exposedHeaders: Sets the Access-Control-Expose-Headers header. Possible values for exposedHeaders:

- A comma-delimited string of exposed headers, for example 'Content-Range, X-Content-Range'
- An array of exposed headers, for example ['Content-Range', 'X-Content-Range']
- If unspecified, no custom headers are exposed

credentials: Sets the Access-Control-Allow-Credentials CORS header. Possible values for credentials:

- TRUE: the header is sent
- FALSE or unspecified: the header is omitted

maxAge: Sets the Access-Control-Max-Age header. Possible values for maxAge:

- An integer value in seconds, used as the TTL for caching the preflight response
- If unspecified, the preflight response is not cached

preflightContinue: Passes the CORS preflight response to the next handler.

The default configuration without setting any values allows all origins and methods without preflight. Keep in mind that complex CORS requests other than GET, HEAD, POST will fail without preflight, so make sure you enable preflight in the configuration when using them. Without setting any values, the configuration defaults to:

{
  "origin": "*",
  "methods": "GET,HEAD,PUT,PATCH,POST,DELETE",
  "preflightContinue": false
}

Code examples for CORS npm

These examples demonstrate the flexibility of CORS npm for specific configurations. Note that the express and cors packages are always required.

Enable CORS globally for all origins and all routes

The simplest implementation of CORS npm enables CORS for all origins and all requests. The following example enables CORS for an arbitrary route "/product/:id" for a GET request by telling the entire app to use CORS for all routes:

var express = require('express')
  , cors = require('cors')
  , app = express();

app.use(cors()); // this tells the app to use CORS for all requests and all routes

app.get('/product/:id', function(req, res, next){
  res.json({msg: 'CORS is enabled for all origins'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});
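You do not have to jump straight to per-route options to tighten this up. As a small sketch (the domain is a placeholder), the same global app.use(cors()) call accepts an options object, so you can keep one line of configuration but restrict the allowed origin and methods:

var express = require('express')
  , cors = require('cors')
  , app = express();

// Global CORS, but only for one origin and two methods
app.use(cors({
  origin: 'http://localdomain.com',
  methods: 'GET,POST'
}));

app.get('/product/:id', function(req, res, next){
  res.json({msg: 'CORS is enabled only for http://localdomain.com'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

Browsers on any other origin will see an Access-Control-Allow-Origin value that does not match them, so the cross-origin response is blocked.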
Allow CORS for dynamic origins for a specific route

The following example uses corsOptions to check whether the domain making the request is in the whitelist array, using a callback function that passes false to the callback if it does not find a match. This CORS option is passed to the route "/product/:id", which is the only route that has CORS enabled. The allowed origins can be made dynamic by changing the value of the variable "whitelist".

var express = require('express')
  , cors = require('cors')
  , app = express();

// define the whitelisted domains and set the CORS options to check them
var whitelist = ['http://localdomain.com', 'http://localdomain-other.com'];
var corsOptions = {
  origin: function(origin, callback){
    var originWhitelisted = whitelist.indexOf(origin) !== -1;
    callback(null, originWhitelisted);
  }
};

// add the CORS options to a specific route /product/:id for a GET request
app.get('/product/:id', cors(corsOptions), function(req, res, next){
  res.json({msg: 'A whitelisted domain matches and CORS is enabled for route product/:id'});
});

// log that CORS is enabled on the server
app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

You may set different CORS options for specific routes, or sets of routes, by defining the options under unique variable names, for example "corsUserOptions". Pass the specific configuration variable to each route that requires that set of options.

Enabling CORS preflight

CORS requests that use an HTTP method other than GET, HEAD, POST (for example DELETE), or that use custom headers, are considered complex and require a preflight request before proceeding with the CORS request. Enable preflight by adding an OPTIONS handler for the route:

var express = require('express')
  , cors = require('cors')
  , app = express();

// add the OPTIONS handler
app.options('/products/:id', cors()); // options is added to the route /products/:id

// use the OPTIONS handler for the DELETE method on the route /products/:id
app.del('/products/:id', cors(), function(req, res, next){
  res.json({msg: 'CORS is enabled with preflight on the route /products/:id for the DELETE method for all origins!'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

You can enable preflight globally on all routes with the wildcard:

app.options('*', cors());

Configuring CORS asynchronously

One of the reasons to use Node.js frameworks is to take advantage of their asynchronous abilities, handling multiple tasks at the same time. Here we use a callback function, corsDelegateOptions, and add it to the cors parameter passed to the route /products/:id. The callback function can handle multiple requests asynchronously.

var express = require('express')
  , cors = require('cors')
  , app = express();

// define the allowed origins stored in a variable
var whitelist = ['http://example1.com', 'http://example2.com'];

// create the callback function
var corsDelegateOptions = function(req, callback){
  var corsOptions;
  if(whitelist.indexOf(req.header('Origin')) !== -1){
    corsOptions = { origin: true }; // the requesting origin matches and is allowed
  }else{
    corsOptions = { origin: false }; // the requesting origin doesn't match, and CORS is disabled for this request
  }
  callback(null, corsOptions); // the callback expects two parameters: error and options
};

// add the callback function to the cors parameter for the route /products/:id for a GET request
app.get('/products/:id', cors(corsDelegateOptions), function(req, res, next){
  res.json({msg: 'A whitelisted domain matches and CORS is enabled for route /products/:id'});
});

app.listen(80, function(){
  console.log('CORS is enabled on the web server listening on port 80');
});

Summary

We have covered the essentials of applying CORS in Node.js.
Let us have a quick recap of what we have learned:

- Node.js provides a web server built with JavaScript, and can be combined with many other JS frameworks as the application server.
- Although some frameworks have specific syntax for implementing CORS, they all follow the CORS specification by specifying allowed origin(s) and method(s). More robust frameworks allow custom headers such as Content-Type, and preflight when required for complex CORS requests.
- JavaScript frameworks may depend on the jQuery XHR object, which must be configured properly to allow cross-origin requests.
- JavaScript frameworks are evolving rapidly. The examples here may become outdated; always refer to the project documentation for up-to-date information.
- With knowledge of the CORS specification, you may create your own techniques using JavaScript based on these examples, depending on the specific needs of your application.

For more background, see https://en.wikipedia.org/wiki/Node.js

Resources for Article: Further resources on this subject:

- An Introduction to Node.js Design Patterns [article]
- Five common questions for .NET/Java developers learning JavaScript and Node.js [article]
- API with MongoDB and Node.js [article]

Laravel 5.0 Essentials

Packt
12 Aug 2016
9 min read
In this article by Alfred Nutile, from the book Laravel 5.x Cookbook, we will learn the following topics:

- Setting up Travis to auto deploy when all is passing
- Working with your .env file
- Testing your app on Production with Behat

(For more resources related to this topic, see here.)

Setting up Travis to Auto Deploy when all is Passing

Level 0 of any work should be getting a deployment workflow set up. What that means in this case is that a push to GitHub will trigger our Continuous Integration (CI), and then, if the tests are passing, the CI triggers the deployment. In this example I am not going to hit the URL Forge gives you; instead I am going to send an artifact to S3 and then call CodeDeploy to deploy this artifact.

Getting ready…

You really need to see the section before this, otherwise continue knowing this will make no sense.

How to do it…

Install the Travis command-line tool in Homestead as noted in their docs, https://github.com/travis-ci/travis.rb#installation. Make sure to use Ruby 2.x:

sudo apt-get install ruby2.0-dev
sudo gem install travis -v 1.8.2 --no-rdoc --no-ri

Then in the recipe folder I run the command:

> travis setup codedeploy

I answer all the questions, keeping in mind:

- The KEY and SECRET are the ones we made for the IAM user in the section before this.
- The S3 KEY is the filename, not the KEY we used above. So in my case I just use the name of the file, latest.zip, since it sits inside the recipe-artifact bucket.

Finally I open the .travis.yml file, which the above command modifies, and I update the before_deploy section so the zip command ignores my .env file; otherwise it would overwrite the file on the server.

How it works…

Well, if you did the CodeDeploy section before this one, you will know this is not as easy as it looks. After all the previous work we are able to, with the one command travis setup codedeploy, securely punch in all the info needed to get this passing build to deploy. So after PHPUnit reports that things are passing, we are ready. With that said, we had to have a lot of things in place: an S3 bucket to put the artifact in, permission with the KEY and SECRET to access the artifact and CodeDeploy, and a CodeDeploy Group and Application to deploy to. All of this is covered in the previous section. After that it is just the magic of Travis and CodeDeploy working together to make this look so easy.

See also…

- Travis Docs: https://docs.travis-ci.com/user/deployment/codedeploy
- https://github.com/travis-ci/travis.rb
- https://github.com/travis-ci/travis.rb#installation

Working with Your .env File

The workflow around this can be tricky. Going from Local, to TravisCI, to CodeDeploy and then to AWS without storing your info in .env on GitHub can be a challenge. What I will show here are some tools and techniques to do this well.

Getting ready…

A base install is fine; I will use the existing install to show some tricks around this.

How to do it…

Minimize the contents of .env by using conventions as much as possible; for example, config/queue.php lets me define one or more queues, and config/filesystems.php covers file storage. Use config files as much as possible. For example, rather than reading Marvel API settings straight from .env, I add a config/marvel.php file that maps those keys, and my .env can be trimmed down to plain KEY=VALUE pairs. Later on I can call those values with:

Config::get('marvel.MARVEL_API_VERSION')
Config::get('marvel.MARVEL_API_BASE_URL')
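The original recipe shows the .env and config/marvel.php files as screenshots. As a rough sketch of the idea (the key names follow the Config::get() calls above, the values are placeholders, and Laravel's env() helper pulls them from .env), the pair might look something like this:

# .env (excerpt)
MARVEL_API_VERSION=v1
MARVEL_API_BASE_URL=https://gateway.marvel.com

<?php
// config/marvel.php
return [
    'MARVEL_API_VERSION' => env('MARVEL_API_VERSION', 'v1'),
    'MARVEL_API_BASE_URL' => env('MARVEL_API_BASE_URL'),
];

With that in place, only the config file is referenced in application code, and .env stays a short list of environment-specific values.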
Now, to easily send the settings to Staging or Production, I use the EnvDeployer library:

> composer require alfred-nutile-inc/env-deployer:dev-master

Follow the readme.md for that library. Then, as it says in the docs, set up its config file so that it matches the destination IP/URL, username, and path for those services; I end up with this in config/envdeployer.php. Now, the trick to this library is that you enter KEY=VALUE pairs into your .env file stacked on top of each other, pairing each KEY with an @production value for the production environment; my database settings follow the same pattern. So now I can type:

> php artisan envdeployer:push production

This will push your .env over SSH to production and swap out the related @production values for each KEY they are placed above.

How it works…

The first mindset to follow is conventions. Before you put a new KEY=VALUE into the .env file, sit back and figure out defaults and conventions around what you already must have in this file. For example, the must-haves: APP_ENV, and then I always have APP_NAME, so those two together do a lot to name databases, queues, buckets and so on around those existing KEYs. It really does add up. Whether you are working alone or on a team, focus on these conventions, and then use the config/some.php file workflow to set up defaults. Then libraries like the one I used above can push this info around with ease. Kind of like Heroku, you can send these settings up to the servers from the command line as needed.

See also…

- Laravel Validator for the .env file: https://packagist.org/packages/mathiasgrimm/laravel-env-validator
- Laravel 5 Fundamentals: Environments and Configuration: https://laracasts.com/series/laravel-5-fundamentals/episodes/6

Testing Your App on Production with Behat

So your app is now on Production! Start clicking away at hundreds of little and big features so you can make sure everything went okay, or better yet, run Behat! Behat on production? It sounds crazy, but I will cover some tips on how to do this, including how to set up some remote conditions and clean up when you are done.

Getting ready…

Any app will do. In my case I am going to hit production with some tests I made earlier.

How to do it…

Tag a Behat test @smoke, or just a Scenario that you know is safe to run on Production, for example features/home/search.feature. Update behat.yml, adding a profile called production. Then run:

> vendor/bin/behat -shome_ui --tags=@smoke --profile=production

I run an Artisan command to run all of these. You will then see it hit the production URL and only the Scenarios you feel are safe for Behat.

Another method is to log in as a demo user. After logging in as that user you only see data that is related to that user, so you can test the authenticated level of data and interactions. For example, in database/seeds/UserTableSeeder.php, add the demo user to the run method, then update your .env. Now push that .env setting up to Production:

> php artisan envdeployer:push production

Then we update our behat.yml file to run this test even on Production, in features/auth/login.feature. Now we need to commit our work and push to GitHub so TravisCI can deploy the changes. Since this is a seed and not a migration, I need to rerun the seeds on production. Since this is a new site and no one has used it, this is fine, BUT of course this would have been a migration if I had to do this later in the application's life. Now let's run this test from our Vagrant box:

> vendor/bin/behat -slogin_ui --profile=production

But it fails, because I am setting up the start of this test for my local database, not the remote database, in features/bootstrap/LoginPageUIContext.php. So I basically need to create a way to set up the state of the world on the remote server.
To do that, I generate a controller:

> php artisan make:controller SetupBehatController

I update that controller to do the setup, and add the route in app/Http/routes.php. Then I update the Behat test, features/bootstrap/LoginPageUIContext.php. And we should do some cleanup! First, add a new cleanup method to features/bootstrap/LoginPageUIContext.php. Then add the related tag to the Scenarios it applies to in features/auth/login.feature. Then add the controller like before, app/Http/Controllers/CleanupBehatController.php, and its route. Then push, and we are ready to test this user with fresh state and clean up when the tests are done! In this case I could test editing the Profile from one state to another.

How it works…

Not too hard! Now we have a workflow that can save us a ton of clicking around Production after every deployment. To begin with, I add the tag @smoke to tests I consider safe for production. What does safe mean? Basically, read-only tests that I know will not affect the site's data. Using the @smoke tag I have a consistent way to mark Suites or Scenarios as safe to run on Production. But then I take it a step further and create a way to test authenticated state, like making a Favorite or updating a Profile! By using some simple routes and a demo user, I can begin to test many other things on my long list of features I need to consider after every deploy. All of this happens thanks to the configurability of Behat and how it allows me to manage different Profiles and Suites in the behat.yml file! Lastly, I tie into the fact that Behat has hooks. In this case I tie into @AfterScenario by adding that annotation, and I add a tag filter so the hook only runs if the Scenario has that tag. That is it; thanks to Behat, its hooks, and how easy it is to make routes in Laravel, I can take care of a large percentage of what would otherwise be a tedious process after every deployment!

See also…

- Behat docs on hooks: http://docs.behat.org/en/v3.0/guides/3.hooks.html
- Saucelabs, which can be wired in via behat.yml settings so you can test your site on numerous devices: https://saucelabs.com/

Summary

This article gave a quick tour of setting up Travis, working with .env files, and testing on Production with Behat.

Resources for Article: Further resources on this subject:

- CRUD Applications using Laravel 4 [article]
- Laravel Tech Page [article]
- Eloquent… without Laravel! [article]

Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle the display on the front end and permit us to create new page types. Object models handle all required database requests. We will also see that, sometimes, hooks are not enough and can't change the way PrestaShop works. In these cases, we will use overrides, which permit us to alter the default process of PrestaShop without making changes in the core code. If you need to create a complex module, you will need to use front controllers. First of all, using front controllers will permit us to split the code into several classes (and files) instead of coding all your module actions in the same class. Also, unlike hooks (which handle some of the display in the existing PrestaShop pages), they will allow you to create new pages. (For more resources related to this topic, see here.)

Creating the front controller

To make this section easier to understand, we will make an improvement to our current module. Instead of displaying all of the comments (there can be many), we will only display the last three comments and a link that redirects to a page containing all the comments for the product. First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

$comments = Db::getInstance()->executeS('
  SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
  WHERE `id_product` = '.(int)$id_product.'
  ORDER BY `date_add` DESC
  LIMIT 3');

Now, if you go to a product, you should only see the last three comments. We will now create a controller that will display all comments concerning a specific product. Go to your module's root directory and create the following directory path: /controllers/front/. Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class and name it following the [ModuleName][ControllerFilename]ModuleFrontController convention, extending the ModuleFrontController class. So in our case, the file will be as follows:

<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules. Now you just have to set the template file you want to display with the following lines:

class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
  public function initContent()
  {
    parent::initContent();
    $this->setTemplate('list.tpl');
  }
}

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

<h1>{l s='Comments' mod='mymodcomments'}</h1>

Now, you can check the result by loading this link on your shop: /index.php?fc=module&module=mymodcomments&controller=comments. You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines in which module directory the front controller is, and, at last, the controller parameter defines which controller file to load.

Maintaining compatibility with the Friendly URL option

In order to let the visitor access the controller page we created in the preceding section, we will just add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template.
To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This will generate a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, then it will generate a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it will generate a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1). This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that's needed, which will be used by the controller. In our case, we will need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter just in case our controller contains several possible actions. Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method. From my point of view, it is easier to have the content of the parameters close to the link. However, if you want, you can create this array in your module class and assign it to your template in order to have cleaner code: <div class="rte">{assign var=params value=['module_action' => 'list','id_product'=> $smarty.get.id_product]}<a href="{$link->getModuleLink('mymodcomments', 'comments',$params)}">{l s='See all comments' mod='mymodcomments'}</a></div> Now, go to your product page and click on the link; the URL displayed should look something like this: /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 Creating a small action dispatcher In our case, we won't need to have several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So in our case, it will be a method named initList: protected function initList(){$this->setTemplate('list.tpl');} Now in the initContent method, we will create a $actions_list array containing all possible actions and associated callbacks: $actions_list = array('list' => 'initList'); Now, we will retrieve the id_product and module_action parameters in variables. Once complete, we will check whether the id_product parameter is valid and if the action exists by checking in the $actions_list array. If the method exists, we will dynamically call it: if ($id_product > 0 && isset($actions_list[$module_action]))$this->$actions_list[$module_action](); Here's what your code should look like: public function initContent(){parent::initContent();$id_product = (int)Tools::getValue('id_product');$module_action = Tools::getValue('module_action');$actions_list = array('list' => 'initList');if ($id_product > 0 && isset($actions_list[$module_action]))$this->$actions_list[$module_action]();} If you did this correctly nothing should have changed when you refreshed the page on your browser, and the Comments title should still be displayed. Displaying the product name and comments We will now display the product name (to let the visitor know he or she is on the right page) and associated comments. 
First of all, create a public variable, $product, in your controller class, and insert it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method: $this->product = new Product((int)$id_product, false,$this->context->cookie->id_lang); In the initList method, just before setTemplate, we will make a DB request to get all comments associated with the product and then assign the product object and the comments list to Smarty: // Get comments$comments = Db::getInstance()->executeS('SELECT * FROM `'._DB_PREFIX_.'mymod_comment`WHERE `id_product` = '.(int)$this->product->id.'ORDER BY `date_add` DESC');// Assign comments and product object$this->context->smarty->assign('comments', $comments);$this->context->smarty->assign('product', $this->product); Once complete, we will display the product name by changing the h1 title: <h1>{l s='Comments on product' mod='mymodcomments'}"{$product->name}"</h1> If you refresh your page, you should now see the product name displayed. I won't explain this part since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article. Including CSS and JS media in the controller As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook in the product page that includes the CSS and JS files. The problem is that we are not on a product page here. So we have to include them on this page. To do so, we will create a method named setMedia in our controller and add CS and JS files (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override: public function setMedia(){// We call the parent methodparent::setMedia();// Save the module path in a variable$this->path = __PS_BASE_URI__.'modules/mymodcomments/';// Include the module CSS and JS files needed$this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');$this->context->controller->addJS($this->path.'views/js/starrating.js');$this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');$this->context->controller->addJS($this->path.'views/js/mymodcomments.js');} If you refresh your browser, the comments should now appear well formatted. In an attempt to improve the display, we will just add the date of the comment beside the author's name. Just replace <p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p> in your list.tpl template with this line: <div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div> You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how the Smarty method works, such as substr that I used for the date, you can check the official Smarty documentation. Adding a pagination system Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case. 
First of all, in the initList method, we need to set a number of comments per page and know how many comments are associated with the product: // Get number of comments$nb_comments = Db::getInstance()->getValue('SELECT COUNT(`id_product`)FROM `'._DB_PREFIX_.'mymod_comment`WHERE `id_product` = '.(int)$this->product->id);// Init$nb_per_page = 10; By default, I have set the number per page to 10, but you can set the number you want. The value is stored in a variable to easily change the number, if needed. Now we just have to calculate how many pages there will be : $nb_pages = ceil($nb_comments / $nb_per_page); Also, set the page the visitor is on: $page = 1;if (Tools::getValue('page') != '')$page = (int)$_GET['page']; Now that we have this data, we can generate the SQL limit and use it in the comment's DB request in such a way so as to display the 10 comments corresponding to the page the visitor is on: $limit_start = ($page - 1) * $nb_per_page;$limit_end = $nb_per_page;$comments = Db::getInstance()->executeS('SELECT * FROM `'._DB_PREFIX_.'mymod_comment`WHERE `id_product` = '.(int)$this->product->id.'ORDER BY `date_add` DESCLIMIT '.(int)$limit_start.','.(int)$limit_end); If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty: $this->context->smarty->assign('page', $page);$this->context->smarty->assign('nb_pages', $nb_pages); Then in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional parameter page: <ul class="pagination">{for $count=1 to $nb_pages}{assign var=params value=['module_action' => 'list','id_product' => $smarty.get.id_product,'page' => $count]}<li><a href="{$link->getModuleLink('mymodcomments', 'comments',$params)}"><span>{$count}</span></a></li>{/for}</ul> To make the pagination clearer for the visitor, we can use the native CSS class to indicate the page the visitor is on: {if $page ne $count}<li><a href="{$link->getModuleLink('mymodcomments', 'comments',$params)}"><span>{$count}</span></a></li>{else}<li class="active current"><span><span>{$count}</span></span></li>{/if} Your pagination should now be fully working. Creating routes for a module's controller At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1. The result is nice, but it is not really a Friendly URL yet. ISO code at the beginning of URLs appears only if you enabled several languages; so if you have only one language enabled, the ISO code will not appear in the URL in your case. Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook. 
In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes: // Register hooksif (!$this->registerHook('displayProductTabContent') ||!$this->registerHook('displayBackOfficeHeader') ||!$this->registerHook('ModuleRoutes'))return false; Don't forget; you will have to uninstall/install your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually. Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first: public function hookModuleRoutes(){return array('module-mymodcomments-comments' => array('controller' => 'comments','rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}','keywords' => array('id_product' => array('regexp' => '[d]+','param' => 'id_product'),'page' => array('regexp' => '[d]+','param' => 'page'),'module_action' => array('regexp' => '[w]+','param' => 'module_action'),),'params' => array('fc' => 'module','module' => 'mymodcomments','controller' => 'comments')));} The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]. So in our case, the key will be module-mymodcomments-comments. In the array, you have to set the following: The controller; in our case, it is comments. The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page} The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set Regexp, which will permit to retrieve it from the URL (basically, [d]+ for the integer values and '[w]+' for string values) and the parameter name. The parameters associated with the route. In the case of a module's front controller, it will always be the same three parameters: the fc parameter set with the fix value module, the module parameter set with the module name, and the controller parameter set with the filename of the module's controller. Very importantNow PrestaShop is waiting for a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template: {assign var=params value=[ 'module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => 1 ]} Once complete, if you go to a product page, the target of the See all comments link should now be: /en/product-comments/list/1/page/1 It's really better, but we can improve it a little more by setting the name of the product in the URL. 
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty: $product = new Product((int)$id_product, false,$this->context->cookie->id_lang);$this->context->smarty->assign('product', $product); This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method: (do not forget to add it in the list.tpl template too!): {assign var=params value=['module_action' => 'list','product_rewrite' => $product->link_rewrite,'id_product' => $smarty.get.id_product,'page' => 1]} We can now update the rule of the route with the product's link_rewrite variable: 'product-comments{/:module_action}{/:product_rewrite} {/:id_product}/page{/:page}' Do not forget to add the product_rewrite string in the keywords array of the route: 'product_rewrite' => array('regexp' => '[w-_]+','param' => 'product_rewrite'), If you refresh your browser, the link should look like this now: /en/product-comments/list/tshirt-doctor-who/1/page/1 Nice, isn't it? Installing overrides with modules As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default process of PrestaShop. We could add code to core classes; however, it is not recommended, as all those core changes will be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage. Creating the override class Installing new object models and controller overrides on PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then, you just have to place your override files respecting the path of the original file that you want to override. When you install the module, PrestaShop will automatically move the override to the overrides directory of PrestaShop. In our case, we will override the getProducts method of the /classes/Search.php class to display the grade and the number of comments on the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with: <?phpclass Search extends SearchCore{public static function find($id_lang, $expr, $page_number = 1,$page_size = 1, $order_by = 'position', $order_way = 'desc',$ajax = false, $use_cookie = true, Context $context = null){}} In this method, first of all, we will call the parent method to get the products list and return it: // Call parent method$find = parent::find($id_lang, $expr, $page_number, $page_size,$order_by, $order_way, $ajax, $use_cookie, $context);// Return productsreturn $find; We want to display the information (grade and number of comments) to the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way this method works. 
We also have to check whether the mymodcomments module has been installed (if the override is being used, the module is most likely to be installed, but as I said, it's just for security): if (isset($find['result']) && !empty($find['result']) &&Module::isInstalled('mymodcomments')){} If we enter these conditions, we will list the product identifier returned by the find parent method: // List id product$products = $find['result'];$id_product_list = array();foreach ($products as $p)$id_product_list[] = (int)$p['id_product']; Next, we will retrieve the grade average and number of comments for the products in the list: // Get grade average and nb comments for products in list$grades_comments = Db::getInstance()->executeS('SELECT `id_product`, AVG(`grade`) as grade_avg,count(`id_mymod_comment`) as nb_commentsFROM `'._DB_PREFIX_.'mymod_comment`WHERE `id_product` IN ('.implode(',', $id_product_list).')GROUP BY `id_product`'); Finally, fill in the $products array with the data (grades and comments) corresponding to each product: // Associate grade and nb comments with productforeach ($products as $kp => $p)foreach ($grades_comments as $gc)if ($gc['id_product'] == $p['id_product']){$products[$kp]['mymodcomments']['grade_avg'] =round($gc['grade_avg']);$products[$kp]['mymodcomments']['nb_comments'] =$gc['nb_comments'];}$find['result'] = $products; Now, as we saw at the beginning of this section, the overrides of the module are installed when you install the module. So you will have to uninstall/install your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should be copied in /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class. Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Category class should have changed. Open the cache/class_index.php file and search for 'Search'; the content of this array should now be: 'Search' =>array ( 'path' => 'override/classes /Search.php','type' => 'class',), If it's not the case, it probably means the permissions of this file are wrong and PrestaShop could not regenerate it. To fix this, just delete this file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear. Since you uninstalled/installed the module, all your comments should have been deleted. So take 2 minutes to fill in one or two comments on a product. Then search for this product. As you must have noticed, nothing has changed. Data is assigned to Smarty, but not used by the template yet. To avoid deletion of comments each time you uninstall the module, you should comment the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module. Editing the template file to display grades on products list In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook exists only since PrestaShop 1.6, so it would not work on PrestaShop 1.5. 
Now, we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to manually edit this template if he wants to have this feature. In the product-list.tpl template, just after the short description, check whether the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
  <p>
    <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
    <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
  </p>
{/if}

The products list will now show the grade average and the number of comments under each product's short description.

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could also have added a method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
  public function getComments($limit_start, $limit_end = false)
  {
    $limit = (int)$limit_start;
    if ($limit_end)
      $limit = (int)$limit_start.','.(int)$limit_end;
    $comments = Db::getInstance()->executeS('
      SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
      WHERE `id_product` = '.(int)$this->id.'
      ORDER BY `date_add` DESC
      LIMIT '.$limit);
    return $comments;
  }
}

This way, you can easily access the product comments anywhere in the code with just an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article: Further resources on this subject:

- Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
- Customizing PrestaShop Theme Part 2 [Article]
- Django 1.2 E-commerce: Data Integration [Article]

Unit Testing and End-To-End Testing

Packt
09 Mar 2017
4 min read
In this article by Andrea Passaglia, the author of the book Vue.js 2 Cookbook, we will cover stubbing external API calls with Sinon.JS. (For more resources related to this topic, see here.)

Stub external API calls with Sinon.JS

Normally when you do end-to-end testing and integration testing you would have the backend server running and ready to respond to you. I think there are many situations in which this is not desirable. As a frontend developer, you take every opportunity to blame the backend guys.

Getting started

No particular skills are required to complete this recipe, except that you should install Jasmine as a dependency.

How to do it...

First of all let's install some dependencies, starting with Jasmine, as we are going to use it to run the whole thing. Also install Sinon.JS and axios before continuing; you just need to add the .js files. We are going to build an application that retrieves a post at the click of a button. In the HTML part write the following:

<div id="app">
  <button @click="retrieve">Retrieve Post</button>
  <p v-if="post">{{post}}</p>
</div>

The JavaScript part, instead, is going to look like the following:

const vm = new Vue({
  el: '#app',
  data: {
    post: undefined
  },
  methods: {
    retrieve () {
      axios
        .get('https://jsonplaceholder.typicode.com/posts/1')
        .then(response => {
          console.log('setting post')
          this.post = response.data.body
        })
    }
  }
})

If you launch your application now you should be able to see it working. Now we want to test the application, but we don't want to connect to the real server. That would take additional time and it would not be reliable; instead, we are going to take a sample, correct response from the server and use that instead.

Sinon.JS has the concept of a sandbox. It means that whenever a test starts, some dependencies, like axios, are overwritten. After each test we can discard the sandbox and everything returns to normal. An empty test with Sinon.JS looks like the following (add it after the Vue instance):

describe('my app', () => {
  let sandbox
  beforeEach(() => sandbox = sinon.sandbox.create())
  afterEach(() => sandbox.restore())
})

We want to stub the call to the get function of axios:

describe('my app', () => {
  let sandbox
  beforeEach(() => sandbox = sinon.sandbox.create())
  afterEach(() => sandbox.restore())
  it('should save the returned post body', done => {
    const resolved = new Promise(resolve =>
      resolve({ data: { body: 'Hello World' } })
    )
    sandbox.stub(axios, 'get').returns(resolved)
    ...
    done()
  })
})

We are overwriting axios here. We are saying that the get method should now return the resolved promise:

describe('my app', () => {
  let sandbox
  beforeEach(() => sandbox = sinon.sandbox.create())
  afterEach(() => sandbox.restore())
  it('should save the returned post body', done => {
    const resolved = new Promise(resolve =>
      resolve({ data: { body: 'Hello World' } })
    )
    sandbox.stub(axios, 'get').returns(resolved)
    vm.retrieve()
    resolved.then(() => {
      expect(vm.post).toEqual('Hello World')
      done()
    })
  })
})

Since we are returning a promise (and we need to return a promise because the retrieve method is calling then on it), we need to wait until it resolves. We can launch the page and see that the test passes.
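As an optional extra check that is not part of the original recipe, a Sinon stub also records how it was called, so you can assert that the component requested the expected endpoint. A small sketch, reusing the same sandbox setup:

it('should request the first post from the API', done => {
  const resolved = Promise.resolve({ data: { body: 'Hello World' } })
  // Keep a reference to the stub so we can inspect its calls
  const getStub = sandbox.stub(axios, 'get').returns(resolved)
  vm.retrieve()
  resolved.then(() => {
    expect(getStub.calledOnce).toBe(true)
    expect(getStub.calledWith('https://jsonplaceholder.typicode.com/posts/1')).toBe(true)
    done()
  })
})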
How it works...

In our case we used the sandbox to stub a method of one of our dependencies. This way the get method of axios never gets fired, and we receive an object that is similar to what the backend would give us. Stubbing the API responses isolates you from the backend and its quirks. If something goes wrong on the backend you won't mind, and moreover you can run your tests without relying on the backend running, and running correctly. There are many libraries and techniques to stub API calls in general, not only those related to HTTP. Hopefully this recipe has given you a head start.

Summary

In this article we covered how to stub an external API call with Sinon.JS.

Resources for Article: Further resources on this subject:

- Installing and Using Vue.js [article]
- Introduction to JavaScript [article]
- JavaScript Execution with Selenium [article]

Modern web development: what makes it ‘modern’?

Richard Gall
14 Oct 2019
10 min read
The phrase 'modern web development' is one that I have clung to during years writing copy for Packt books. But what does it really mean? I know it means something because it sounds right - but there’s still a part of me that feels that it’s a bit vague and empty. Although it might sound somewhat fluffy the truth is that there probably is such a thing as modern web development. Insofar as the field has changed significantly in a matter of years, and things are different now from how they were in, say, 2013, modern web development can be easily characterised as all the things that are being done in web development in 2019 that are different from 5-10 years ago. By this I don’t just mean trends like artificial intelligence and mobile development (although those are both important). I’m also talking about the more specific ways in which we actually build web projects. So, let’s take a look at how we got to where we are and the various ways in which ‘modern web development’ is, well, modern. The story of modern web development: how we got here It sounds obvious, but the catalyst for the changes that we currently see in web development today is the rise of mobile. Mobile and the rise of the web applications There are a few different parts to this that have got us to where we are today. In the first instance the growth of mobile in the middle part of this decade (around 2013 or 2014) initiated the trend of mobile-first or responsive web design. Those terms might sound a bit old-hat. If they do, it’s a mark of how quickly the web development world has changed. Primarily, though this was about appearance and UI - making web properties easy to use and navigate on mobile devices, rather than just desktop. Tools like Bootstrap grew quickly, providing an easy and templated way to build mobile-first and responsive websites. But what began as a trend concerned primarily with appearance later shifted as mobile usage grew. This called for a more sophisticated approach as mobile users came to expect richer and faster web experiences, and businesses a new way to monetize these significant changes user behavior. Explore Packt's Bootstrap titles here. Lightweight apps for data-intensive user experiences This is where concepts like the single page web app came to the fore. Lightweight and dynamic, and capable of handling data-intensive tasks and changes in state, single page web apps were unique in that they handled logic in the browser rather than on the server. This was arguably a watershed in changing how we think about web development. It was instrumental in collapsing the well-established distinction between backend and front end. Behind this trend we saw a shift towards new technologies. Node.js quietly emerged on the scene (arguably its only in the last couple of years that its popularity has really exploded), and frameworks like Angular were at the height of their popularity. Find a massive range of Node.js eBooks and videos on the Packt store. Full-stack web development It’s around this time that full stack development started to accelerate as a trend. Look at Google trends. You can see how searches for the phrase have grown since the beginning of 2012: If you look closely, it’s around 2015 that the term undergoes a step change in the level of interest. Undoubtedly one of the reasons for this is that the relationship between client and server was starting to shift. This meant the web developer skill set was starting to change as well. 
As a web developer, you weren’t only concerned with how to build the front end, but also how that front end managed dynamic content and different states. The rise and fall of Angular A good corollary to this tale is the fate of AngularJS. While it rose to the top amidst the chaos and confusion of the mid-teens framework bubble, as the mobile revolution matured in a way that gave way to more sophisticated web applications, the framework soon became too cumbersome. And while Google - the frameworks’ creator - aimed to keep it up to date with Angular 2 and subsequent versions, delays and missteps meant the project lost ground to React. Indeed, this isn’t to say that Angular is dead and buried. There are plenty of reasons to use Angular over React and other JavaScript tools if the use case is right. But it is nevertheless the case that the Angular project no longer defines web development to the extent that it used to. Explore Packt's Angular eBooks and videos. The fact that Ionic, the JavaScript mobile framework, is now backed by Web Components rather than Angular is an important indicator for what modern web development actually looks like - and how it contrasts with what we were doing just a few years ago. The core elements of modern web development in 2019 So, there are a number of core components to modern web development that have evolved out of industry changes over the last decade. Some of these are tools, some are ideas and approaches. All of them are based on the need to manage a balance of complex requirements with performance and simplicity. Web Components Web Components are the most important element if we’re trying to characterise ‘modern’ web development. The principle is straightforward: Web Components provides a set of reusable custom elements. This makes it easier to build web pages and applications without writing additional lines of code that add complexity to your codebase. The main thing to keep in mind here is that Web Components improve encapsulation. This concept, which is really about building in a more modular and loosely coupled manner, is crucial when thinking about what makes modern web development modern. There are three main elements to Web Components: Custom elements, which are a set of JavaScript APIs that you can call and define however you need them to work. The shadow DOM, which acts as a DOM that’s attached to individual elements on your page. This essentially isolates the resources different elements and components need to work on your page which makes it easier to manage from a development perspective, and can unlock better performance for users. HTML Templates, which are bits of HTML that can be reused and called upon only when needed. These elements together paint a picture of modern web development. This is one in which developers are trying to handle more complexity and sophistication while improving their productivity and efficiency. Want to get started with Web Components? You can! Read Getting Started with Web Components. React.js One of the reasons that React managed to usurp Angular is the fact that it does many of the things that Google wanted Angular to do. Perhaps the most significant difference between React and Angular is that React tackles some of the scalability issues presented by Angular’s two-way data binding (which was, for a while, incredibly exciting and innovative) with unidirectional flow. 
There's a lot of discussion around this, but by moving towards a singular model of data flow, applications can handle data on a much larger scale without running into problems. Elsewhere, concepts like the virtual DOM (which is distinct from the shadow DOM) help to improve encapsulation for developers.

Indeed, flexibility is one of the biggest draws of React. To use Angular you need to know TypeScript, for example. And although you can use TypeScript when working with React, it isn't essential. You have options.

Explore Packt's React.js eBooks and videos.

Redux, Flux, and how we think about application state

The growth of React has got web developers thinking more and more about application state. While this isn't something that's new, as applications have become more interactive and complex it has become more important for developers to take the issue of 'statefulness' seriously. Consequently, patterns and libraries such as Flux and Redux have emerged, which act as a store for all the values that comprise an application's state. This article on egghead.io explains why state is important in a clear and concise way:

"For me, the key to understanding state management was when I realised that there is always state… users perform actions, and things change in response to those actions. State management makes the state of your app tangible in the form of a data structure that you can read from and write to. It makes your 'invisible' state clearly visible for you to work with."

Find Redux eBooks and videos from Packt. Or, check out a wide range of Flux titles.

APIs and microservices

One software trend that we haven't mentioned yet, but which nevertheless remains important when thinking about modern web development, is the rise of APIs and microservices. For web developers, the trend is reinforcing the importance of the encapsulation and modularity that things like Web Components and React are designed to help with.

Insofar as microservices simplify the development process while adding architectural complexity, it's not hard to see that web developers are having to think in a more holistic manner about how their applications interact with a variety of services and data sources. Indeed, you could even say that this trend is only extending the growth of the full-stack developer as a job role. If development today is more about pulling together multiple different services and components, rather than different roles building different parts of a monolithic app, it makes sense that the demand for full stack developers is growing.

But there's another, more specific, way in which the microservices trend is influencing modern web development: micro frontends.

Micro frontends

Micro frontends take the concept of microservices and apply it to the frontend. Rather than just building an application in which the front end is powered by microservices (in a way that's common today), you also treat individual constituent parts of the frontend as microservices, and in turn you build teams around each of these parts. So, perhaps one works on search, another on checkout, another on user accounts. This is more of an organizational shift than a technological one. But it again feeds into the idea that modern web development is something modular, broken up and - usually - full-stack.

Conclusion: modern web development is both a set of tools and a way of thinking

The web development toolset has been evolving for more than a decade.
Mobile was the catalyst for significant change, and has helped get us to a world that is modular, lightweight, and highly flexible. Heavyweight frameworks like AngularJS paved the way, but lighter alternatives like React have found real purchase with the wider development community. Of course, it won't always be this way. And although React has dominated developer mindshare for a good three years or so (quite a while in the engineering world), something will certainly replace it at some point.

But however the toolchain evolves, the basic idea that we build better applications and websites when we break things apart will likely stick. Complexity won't decrease. Even if writing code gets easier, understanding how the component parts of an application fit together - from front end elements to API integration - will become crucial. Even if it starts to bend your mind, and pose new problems you hadn't even thought of, it's clear that things are going to remain interesting as far as the future of web development is concerned.
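As promised in the Web Components section above, here is a minimal sketch that shows the three pieces (a custom element, the shadow DOM, and an HTML template) working together. The element name, template id, and styling are assumptions made purely for illustration, not part of any particular framework:

<template id="profile-card-template">
  <style>
    /* Styles inside the shadow root do not leak into the rest of the page */
    .card { border: 1px solid #ccc; padding: 10px; }
  </style>
  <div class="card">
    <h3><slot name="name">Unnamed</slot></h3>
    <p><slot name="role">Role unknown</slot></p>
  </div>
</template>

<profile-card>
  <span slot="name">Jane Doe</span>
  <span slot="role">Front end developer</span>
</profile-card>

<script>
  // Custom element: clones the template into its own shadow root
  class ProfileCard extends HTMLElement {
    constructor() {
      super();
      const template = document.getElementById('profile-card-template');
      this.attachShadow({ mode: 'open' })
          .appendChild(template.content.cloneNode(true));
    }
  }
  customElements.define('profile-card', ProfileCard);
</script>

Because the markup and styles live inside the element's shadow root, the .card rule cannot clash with styles elsewhere on the page, which is the encapsulation benefit described earlier.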

Using Frameworks

Packt
24 Dec 2014
24 min read
In this article by Alex Libby, author of the book Responsive Media in HTML5, we will cover the following topics: Adding responsive media to a CMS Implementing responsive media in frameworks such as Twitter Bootstrap Using the Less CSS preprocessor to create CSS media queries Ready? Let's make a start! (For more resources related to this topic, see here.) Introducing our three examples Throughout this article, we've covered a number of simple, practical techniques to make media responsive within our sites—these are good, but nothing beats seeing these principles used in a real-world context, right? Absolutely! To prove this, we're going to look at three examples throughout this article, using technologies that you are likely to be familiar with: WordPress, Bootstrap, and Less CSS. Each demo will assume a certain level of prior knowledge, so it may be worth reading up a little first. In all three cases, we should see that with little effort, we can easily add responsive media to each one of these technologies. Let's kick off with a look at working with WordPress. Adding responsive media to a CMS We will begin the first of our three examples with a look at using the ever popular WordPress system. Created back in 2003, WordPress has been used to host sites by small independent traders all the way up to Fortune 500 companies—this includes some of the biggest names in business such as eBay, UPS, and Ford. WordPress comes in two flavors; the one we're interested in is the self-install version available at http://www.wordpress.org. This example assumes you have a local installation of WordPress installed and working; if not, then head over to http://codex.wordpress.org/Installing_WordPress and follow the tutorial to get started. We will also need a DOM inspector such as Firebug installed if you don't already have it. It can be downloaded from http://www.getfirebug.com if you need to install it. If you only have access to WordPress.com (the other flavor of WordPress), then some of the tips in this section may not work, due to limitations in that version of WordPress. Okay, assuming we have WordPress set up and running, let's make a start on making uploaded media responsive. Adding responsive media manually It's at this point that you're probably thinking we have to do something complex when working in WordPress, right? Wrong! As long as you use the Twenty Fourteen core theme, the work has already been done for you. For this exercise, and the following sections, I will assume you have installed and/or activated WordPress' Twenty Fourteen theme. Don't believe me? It's easy to verify: try uploading an image to a post or page in WordPress. Resize the browser—you should see the image shrink or grow in size as the browser window changes size. If we take a look at the code elsewhere using Firebug, we can also see the height: auto set against a number of the img tags; this is frequently done for responsive images to ensure they maintain the correct proportions. The responsive style seems to work well in the Twenty Fourteen theme; if you are using an older theme, we can easily apply the same style rule to images stored in WordPress when using that theme. Fixing a responsive issue So far, so good. Now, we have the Twenty Fourteen theme in place, we've uploaded images of various sizes, and we try resizing the browser window ... only to find that the images don't seem to grow in size above a certain point. At least not well—what gives? 
Well, it's a classic trap: we've talked about using percentage values to dynamically resize images, only to find that we've shot ourselves in the foot (proverbially speaking, of course!). The reason? Let's dive in and find out using the following steps: Browse to your WordPress installation and activate Firebug using F12. Switch to the HTML tab and select your preferred image. In Firebug, look for the <header class="entry-header"> line, then look for the following line in the rendered styles on the right-hand side of the window: .site-content .entry-header, .site-content .entry-content,   .site-content .entry-summary, .site-content .entry-meta,   .page-content {    margin: 0 auto; max-width: 474px; } The keen-eyed amongst you should hopefully spot the issue straightaway—we're using percentages to make the sizes dynamic for each image, yet we're constraining its parent container! To fix this, change the highlighted line as indicated: .site-content .entry-header, .site-content .entry-content,   .site-content .entry-summary, .site-content .entry-meta,   .page-content {    margin: 0 auto; max-width: 100%; } To balance the content, we need to make the same change to the comments area. So go ahead and change max-width to 100% as follows: .comments-area { margin: 48px auto; max-width: 100%;   padding: 0 10px; } If we try resizing the browser window now, we should see the image size adjust automatically. At this stage, the change is not permanent. To fix this, we would log in to WordPress' admin area, go to Appearance | Editor and add the adjusted styles at the foot of the Stylesheet (style.css) file. Let's move on. Did anyone notice two rather critical issues with the approach used here? Hopefully, you must have spotted that if a large image is used and then resized to a smaller size, we're still working with large files. The alteration we're making has a big impact on the theme, even though it is only a small change. Even though it proves that we can make images truly responsive, it is the kind of change that we would not necessarily want to make without careful consideration and plenty of testing. We can improve on this. However, making changes directly to the CSS style sheet is not ideal; they could be lost when upgrading to a newer version of the theme. We can improve on this by either using a custom CSS plugin to manage these changes or (better) using a plugin that tells WordPress to swap an existing image for a small one automatically if we resize the window to a smaller size. Using plugins to add responsive images A drawback though, of using a theme such as Twenty Fourteen, is the resizing of images. While we can grow or shrink an image when resizing the browser window, we are still technically altering the size of what could potentially be an unnecessarily large image! This is considered bad practice (and also bad manners!)—browsing on a desktop with a fast Internet connection as it might not have too much of an impact; the same cannot be said for mobile devices, where we have less choice. To overcome this, we need to take a different approach—get WordPress to automatically swap in smaller images when we reach a particular size or breakpoint. Instead of doing this manually using code, we can take advantage of one of the many plugins available that offer responsive capabilities in some format. I feel a demo coming on. Now's a good time to take a look at one such plugin in action: Let's start by downloading our plugin. 
For this exercise, we'll use the PictureFill.WP plugin by Kyle Ricks, which is available at https://wordpress.org/plugins/picturefillwp/. We're going to use the version that uses Picturefill.js version 2. This is available to download from https://github.com/kylereicks/picturefill.js.wp/tree/master. Click on Download ZIP to get the latest version. Log in to the admin area of your WordPress installation and click on Settings and then Media. Make sure your image settings for Thumbnail, Medium, and Large sizes are set to values that work with useful breakpoints in your design. We then need to install the plugin. In the admin area, go to Plugins | Add New to install the plugin and activate it in the normal manner. At this point, we will have installed responsive capabilities in WordPress—everything is managed automatically by the plugin; there is no need to change any settings (except maybe the image sizes we talked about in step 2). Switch back to your WordPress frontend and try resizing the screen to a smaller size. Press F12 to activate Firebug and switch to the HTML tab. Press Ctrl + Shift + C (or Cmd + Shift + C for Mac users) to toggle the element inspector; move your mouse over your resized image. If we've set the right image sizes in WordPress' admin area and the window is resized correctly, we can expect to see something like the following screenshot: To confirm we are indeed using a smaller image, right-click on the image and select View Image Info; it will display something akin to the following screenshot: We should now have a fully functioning plugin within our WordPress installation. A good tip is to test this thoroughly, if only to ensure we've set the right sizes for our breakpoints in WordPress! What happens if WordPress doesn't refresh my thumbnail images properly? This can happen. If you find this happening, get hold of and install the Regenerate Thumbnails plugin to resolve this issue; it's available at https://wordpress.org/plugins/regenerate-thumbnails/. Adding responsive videos using plugins Now that we can add responsive images to WordPress, let's turn our attention to videos. The process of adding them is a little more complex; we need to use code to achieve the best effect. Let's examine our options. If you are hosting your own videos, the simplest way is to add some additional CSS style rules. Although this removes any reliance on JavaScript or jQuery using this method, the result isn't perfect and will need additional styles to handle the repositioning of the play button overlay. Although we are working locally, we should remember the note from earlier in this article; changes to the CSS style sheet may be lost when upgrading. A custom CSS plugin should be used, if possible, to retain any changes. To use a CSS-only solution, it only requires a couple of steps: Browse to your WordPress theme folder and open a copy of styles.css in your text editor of choice. Add the following lines at the end of the file and save it: video { width: 100%; height: 100%; max-width: 100%; } .wp-video { width: 100% !important; } .wp-video-shortcode {width: 100% !important; } Close the file. You now have the basics in place for responsive videos. At this stage, you're probably thinking, "great, my videos are now responsive. I can handle the repositioning of the play button overlay myself, no problem"; sounds about right? Thought so and therein lies the main drawback of this method! Repositioning the overlay shouldn't be too difficult. 
The real problem is in the high costs of hardware and bandwidth that is needed to host videos of any reasonable quality and that even if we were to spend time repositioning the overlay, the high costs would outweigh any benefit of using a CSS-only solution. A far better option is to let a service such as YouTube do all the hard work for you and to simply embed your chosen video directly from YouTube into your pages. The main benefit of this is that YouTube's servers do all the hard work for you. You can take advantage of an increased audience and YouTube will automatically optimize the video for the best resolution possible for the Internet connections being used by your visitors. Although aimed at beginners, wpbeginner.com has a useful article located at http://www.wpbeginner.com/beginners-guide/why-you-should-never-upload-a-video-to-wordpress/, on the pros and cons of why self-hosting videos isn't recommended and that using an external service is preferable. Using plugins to embed videos Embedding videos from an external service into WordPress is ironically far simpler than using the CSS method. There are dozens of plugins available to achieve this, but one of the simplest to use (and my personal favorite) is FluidVids, by Todd Motto, available at http://github.com/toddmotto/fluidvids/. To get it working in WordPress, we need to follow these steps using a video from YouTube as the basis for our example: Browse to your WordPress' theme folder and open a copy of functions.php in your usual text editor. At the bottom, add the following lines: add_action ( 'wp_enqueue_scripts', 'add_fluidvid' );   function add_fluidvid() { wp_enqueue_script( 'fluidvids',     get_stylesheet_directory_uri() .     '/lib/js/fluidvids.js', array(), false, true ); } Save the file, then log in to the admin area of your WordPress installation. Navigate to Posts | Add New to add a post and switch to the Text tab of your Post Editor, then add http://www.youtube.com/watch?v=Vpg9yizPP_g&hd=1 to the editor on the page. Click on Update to save your post, then click on View post to see the video in action. There is no need to further configure WordPress—any video added from services such as YouTube or Vimeo will be automatically set as responsive by the FluidVids plugin. At this point, try resizing the browser window. If all is well, we should see the video shrink or grow in size, depending on how the browser window has been resized: To prove that the code is working, we can take a peek at the compiled results within Firebug. We will see something akin to the following screenshot: For those of us who are not feeling quite so brave (!), there is fortunately a WordPress plugin available that will achieve the same results, without configuration. It's available at https://wordpress.org/plugins/fluidvids/ and can be downloaded and installed using the normal process for WordPress plugins. Let's change track and move onto our next demo. I feel a need to get stuck in some coding, so let's take a look at how we can implement responsive images in frameworks such as Bootstrap. Implementing responsive media in Bootstrap A question—as developers, hands up if you have not heard of Bootstrap? Good—not too many hands going down Why have I asked this question, I hear you say? Easy—it's to illustrate that in popular frameworks (such as Bootstrap), it is easy to add basic responsive capabilities to media, such as images or video. The exact process may differ from framework to framework, but the result is likely to be very similar. 
To see what I mean, let's take a look at using Bootstrap for our second demo, where we'll see just how easy it is to add images and video to our Bootstrap-enabled site. If you would like to explore using some of the free Bootstrap templates that are available, then http://www.startbootstrap.com/ is well worth a visit! Using Bootstrap's CSS classes Making images and videos responsive in Bootstrap uses a slightly different approach to what we've examined so far; this is only because we don't have to define each style property explicitly, but instead simply add the appropriate class to the media HTML for it to render responsively. For the purposes of this demo, we'll use an edited version of the Blog Page example, available at http://www.getbootstrap.com/getting-started/#examples; a copy of the edited version is available on the code download that accompanies this article. Before we begin, go ahead and download a copy of the Bootstrap Example folder that is in the code download. Inside, you'll find the CSS, image and JavaScript files needed, along with our HTML markup file. Now that we have our files, the following is a screenshot of what we're going to achieve over the course of our demo: Let's make a start on our example using the following steps: Open up bootstrap.html and look for the following lines (around lines 34 to 35):    <p class="blog-post-meta">January 1, 2014 by <a href="#">Mark</a></p>      <p>This blog post shows a few different types of content that's supported and styled with Bootstrap.         Basic typography, images, and code are all         supported.</p> Immediately below, add the following code—this contains markup for our embedded video, using Bootstrap's responsive CSS styling: <div class="bs-example"> <div class="embed-responsive embed-responsive-16by9">    <iframe allowfullscreen="" src="http://www.youtube.com/embed/zpOULjyy-n8?rel=0" class="embed-responsive-item"></iframe> </div> </div> With the video now styled, let's go ahead and add in an image—this will go in the About section on the right. Look for these lines, on or around lines 74 and 75:    <h4>About</h4>      <p>Etiam porta <em>sem malesuada magna</em> mollis euismod. Cras mattis consectetur purus sit amet       fermentum. Aenean lacinia bibendum nulla sed       consectetur.</p> Immediately below, add in the following markup for our image: <a href="#" class="thumbnail"> <img src="http://placehold.it/350x150" class="img-responsive"> </a> Save the file and preview the results in a browser. If all is well, we can see our video and image appear, as shown at the start of our demo. At this point, try resizing the browser—you should see the video and placeholder image shrink or grow as the window is resized. However, the great thing about Bootstrap is that the right styles have already been set for each class. All we need to do is apply the correct class to the appropriate media file—.embed-responsive embed-responsive-16by9 for videos or .img-responsive for images—for that image or video to behave responsively within our site. In this example, we used Bootstrap's .img-responsive class in the code; if we have a lot of images, we could consider using img { max-width: 100%; height: auto; } instead. So far, we've worked with two popular examples of frameworks in the form of WordPress and Bootstrap. This is great, but it can mean getting stuck into a lot of CSS styling, particularly if we're working with media queries, as we saw earlier in the article! Can we do anything about this? Absolutely! 
It's time for a brief look at CSS preprocessing and how this can help with adding responsive media to our pages.

Using Less CSS to create responsive content

Working with frameworks often means getting stuck into a lot of CSS styling; this can become awkward to manage if we're not careful! To help with this, and for our third scenario, we're going back to basics to work on an alternative way of rendering CSS using the Less CSS preprocessing language. Why? Well, as a superset (or extension) of CSS, Less allows us to write our styles more efficiently; it then compiles them into valid CSS. The aim of this example is to show that if you're already using Less, then we can still apply the same principles that we've covered throughout this article to make our content responsive. It should be noted that this exercise does assume a certain level of prior experience using Less; if this is the first time, you may like to peruse my article, Learning Less, by Packt Publishing. There will be a few steps involved in making the changes, so the following screenshot gives a heads-up on what it will look like once we've finished. If you think it looks identical to the page we've already built, you would be right. If we play our cards right, there should indeed be no change in appearance; working with Less is all about writing CSS more efficiently. Let's see what is involved:

We'll start by extracting copies of the Less CSS example from the code download that accompanies this article—inside it, we'll find our HTML markup, reset style sheet, images, and video needed for our demo. Save the folder locally to your PC.

Next, add the following styles in a new file, saving it as responsive.less in the css subfolder—we'll start with some of the styling for the base elements, such as the video and banner:

#wrapper { width: 96%; max-width: 45rem; margin: auto; padding: 2%; }
#main { width: 60%; margin-right: 5%; float: left; }
#video-wrapper video { max-width: 100%; }
#banner {
  background-image: url('../img/abstract-banner-large.jpg');
  height: 15.31rem; width: 45.5rem; max-width: 100%;
  float: left; margin-bottom: 15px;
}
#skipTo {
  display: none;
  li { background: #197a8a }
}
p { font-family: "Droid Sans", sans-serif; }
aside { width: 35%; float: right; }
footer {
  border-top: 1px solid #ccc; clear: both;
  height: 30px; padding-top: 5px;
}

We need to add some basic formatting styles for images and links, so go ahead and add the following, immediately below the #skipTo rule:

a { text-decoration: none; text-transform: uppercase }
a, img {
  border: medium none; color: #000;
  font-weight: bold; outline: medium none;
}

Next up comes the navigation for our page. These styles control the main navigation and the Skip To… link that appears when viewed on smaller devices. Go ahead and add these style rules immediately below the rules for a and img:

header {
  font-family: 'Droid Sans', sans-serif;
  h1 {
    height: 70px; float: left; display: block;
    font-weight: 700; font-size: 2rem;
  }
  nav {
    float: right; margin-top: 40px; height: 22px;
    border-radius: 4px;
    li { display: inline; margin-left: 15px; }
    ul { font-weight: 400; font-size: 1.1rem; }
    a {
      padding: 5px 5px 5px 5px;
      &:hover {
        background-color: #27a7bd; color: #fff;
        border-radius: 4px;
      }
    }
  }
}

We need to add the media query that controls the display for smaller devices, so go ahead and add the following to a new file and save it as media.less in the css subfolder.
We'll start with setting the screen size for our media query:

@smallscreen: ~"screen and (max-width: 30rem)";

@media @smallscreen {
  p { font-family: "Droid Sans", sans-serif; }
  #main, aside { margin: 0 0 10px; width: 100%; }
  #banner {
    margin-top: 150px; height: 4.85rem; max-width: 100%;
    background-image: url('../img/abstract-banner-medium.jpg');
    width: 45.5rem;
  }

Next up comes the media query rule that will handle the Skip To… link at the top of our resized window:

  #skipTo {
    display: block; height: 18px;
    a {
      display: block; text-align: center; color: #fff; font-size: 0.8rem;
      &:hover { background-color: #27a7bd; border-radius: 0; height: 20px }
    }
  }

We can't forget the main navigation, so go ahead and add the following block of code immediately below the block for #skipTo:

  header {
    h1 { margin-top: 20px }
    nav {
      float: left; clear: left; margin: 0 0 10px; width: 100%;
      li { margin: 0; background: #efefef; display: block; margin-bottom: 3px; height: 40px; }
      a {
        display: block; padding: 10px; text-align: center; color: #000;
        &:hover { background-color: #27a7bd; border-radius: 0; padding: 10px; height: 20px; }
      }
    }
  }
}

At this point, we should compile the Less style sheet before previewing the results of our work. If we launch responsive.html in a browser, we'll see our mocked-up portfolio page appear as we saw at the beginning of the exercise. If we resize the screen to its minimum width, its responsive design kicks in to reorder and resize elements on screen, as we would expect to see. Okay, so we now have a responsive page that uses Less CSS for styling; it still seems like a lot of code, right?

Working through the code in detail

Although this seems like a lot of code for a simple page, the principles we've used are in fact very simple and are the ones we already used earlier in the article. Not convinced? Well, let's look at it in more detail—the focus of this article is on responsive images and video, so we'll start with video. Open the responsive.css style sheet and look for the #video-wrapper video rule:

#video-wrapper video { max-width: 100%; }

Notice how it's set to a max-width value of 100%? Granted, we don't want to resize a large video to a really small size—we would use a media query to replace it with a smaller version. But, for most purposes, max-width should be sufficient.

Now, for the image, this is a little more complicated. Let's start with the code from responsive.less:

#banner {
  background-image: url('../img/abstract-banner-large.jpg');
  height: 15.31rem; width: 45.5rem; max-width: 100%;
  float: left; margin-bottom: 15px;
}

Here, we used the max-width value again. In both instances, we can style the element directly, unlike videos, where we have to add a container in order to style it. The theme continues in the media query setup in media.less:

@smallscreen: ~"screen and (max-width: 30rem)";
@media @smallscreen {
  ...
  #banner {
    margin-top: 150px;
    background-image: url('../img/abstract-banner-medium.jpg');
    height: 4.85rem; width: 45.5rem; max-width: 100%;
  }
  ...
}

In this instance, we're styling the element to cover the width of the viewport. A small point of note: you might ask why we are using rem values instead of percentage values when styling our image. This is a good question—the key to it is that when using pixel values, these do not scale well in responsive designs.
However, the rem values do scale beautifully; we could use percentage values if we're so inclined, although they are best suited to instances where we need to fill a container that only covers part of the screen (as we did with the video for this demo). An interesting article extolling the virtues of why we should use rem units is available at http://techtime.getharvest.com/blog/in-defense-of-rem-units - it's worth a read. Of particular note is a known bug with using rem values in Mobile Safari, which should be considered when developing for mobile platforms; with all of the iPhones available, its usage could be said to be higher than Firefox! For more details, head over to http://wtfhtmlcss.com/#rems-mobile-safari. Transferring to production use Throughout this exercise, we used Less to compile our styles on the fly each time. This is okay for development purposes, but is not recommended for production use. Once we've worked out the requisite styles needed for our site, we should always look to precompile them into valid CSS before uploading the results into our site. There are a number of options available for this purpose; two of my personal favorites are Crunch! available at http://www.crunchapp.net and the Less2CSS plugin for Sublime Text available at https://github.com/timdouglas/sublime-less2css. You can learn more about precompiling Less code from my new article, Learning Less.js, by Packt Publishing. Summary Wow! We've certainly covered a lot; it shows that adding basic responsive capabilities to media need not be difficult. Let's take a moment to recap on what you learned. We kicked off this article with an introduction to three real-word scenarios that we would then cover. Our first scenario looked at using WordPress. We covered how although we can add simple CSS styling to make images and videos responsive, the preferred method is to use one of the several plugins available to achieve the same result. Our next scenario visited the all too familiar framework known as Twitter Bootstrap. In comparison, we saw that this is a much easier framework to work with, in that styles have been predefined and that all we needed to do was add the right class to the right selector. Our third and final scenario went completely the opposite way, with a look at using the Less CSS preprocessor to handle the styles that we would otherwise have manually created. We saw how easy it was to rework the styles we originally created earlier in the article to produce a more concise and efficient version that compiled into valid CSS with no apparent change in design. Well, we've now reached the end of the book; all good things must come to an end at some point! Nonetheless, I hope you've enjoyed reading the book as much as I have writing it. Hopefully, I've shown that adding responsive media to your sites need not be as complicated as it might first look and that it gives you a good grounding to develop something more complex using responsive media. Resources for Article: Further resources on this subject: Styling the Forms [article] CSS3 Animation [article] Responsive image sliders [article]

Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS

Packt
03 Feb 2011
8 min read
VirtualBox 3.1: Beginner's Guide Virtualization is a powerful tool that can make your PC duties easier, no matter if you're a programmer, a systems administrator, a power user, or even a beginner. Have you ever wanted to test the popular Joomla! Content Management System (CMS), but couldn't spare the time and effort to install it in your PC, along with the Apache web server and the MySQL database server? Are you afraid to install Apache, MySQL and PHP in your only PC because it could mess things up? Well, you can forget about all the hassle thanks to Oracle VirtualBox, a powerful virtualization software product that lets you create one or more virtual machines, or VMs, inside your physical PC. Each VM is completely isolated from your main PC and all the other VMs, so it's like having several computers in one physical package, but you don't need the extra space to accommodate all the additional LCDs and PC cases. Cool, huh? In this article, I'm going to show you one of the quickest ways to set up a fully-functional web server right from your own home/office. And why would you need to do something like that? Well, if you want to create a website to establish your own presence on the Internet, there are some costs involved. First of all, you need to pay for a web hosting service and a domain name. So, if you want to learn how to create websites, this would be a perfect way to do it, since all the software we´ll use is free, and with the DynDNS dynamic DNS service, you don't need to pay for a domain name because you can also use one for free. Furthermore, since you're going to host your website on your virtual machine, you can also forget about the web hosting fee. Are these reasons good enough to start experimenting with virtual machines? I'm pretty sure they are! I decided to use the Joomla! Content Management System (CMS) because it has all you need to establish your Internet presence. The TurnkeyLinux Joomla! virtual appliance includes everything you need to have a website running right out of the box, so you won't have to go through the hassle of installing all the required web server software (Apache, MySQL, PHP, etc.). And in case something goes wrong, you can just wipe out your virtual machine and start again from scratch. How about that? The first steps in the tutorial will tell you how to create a virtual machine (VM) with VirtualBox, how to get a preconfigured ISO image from the TurnkeyLinux website with all the necessary stuff to install the Apache web server, the MySQL database server and the Joomla! CMS in your VM. Oh, and if you're wondering how to make your web server available on the Internet, don't worry: I'll also show you how to get a free DynDNS account, and how to configure your Cable/DSL router to open port 80 (the HTTP web server port). That way, visitors from the Internet will be able to navigate in your brand-new Joomla! website. You'll need a PC or Mac system with Windows/Linux/Mac OS X installed, at least 1 GB of RAM and a Cable/DSL connection to the Internet, so you can configure your Cable/DSL router to let your virtual machine work as a full-fledged web server. Getting Virtualbox Download the most recent version of Oracle VirtualBox from the official website: VirtualBox 4.0 for Windows VirtualBox 4.0 for Mac OS X VirtualBox 4.0 for Linux Once the download is completed, follow the instructions included in the User Manual to install VirtualBox in your specific operating system. 
Downloading the Turnkey Joomla Appliance You can download the Joomla appliance from the TurnkeyLinux website. Just click on the following link to start downloading it to your computer: http://www.turnkeylinux.org/download?file=turnkey-joomla-11.0-lucid-x86.iso. Creating a new virtual machine Open VirtualBox, click on New to open the New Virtual Machine Wizard and then click on Next. Type MyJoomlaVM in the Name field, select Linux as the Operating System and Ubuntu as the Version, and click on Next to continue: The Memory dialog will show up next. Select at least 384 MB (you can press the Left and Right arrow keys to increase/decrease the memory value) in the Base Memory Size slider (depending on the total memory available in your PC) and click Next to continue: Leave the default values in the Virtual Hard Disk window and click Next four times to finish configuring your virtual machine with the default values. Then click Finish twice in the Summary dialogs that will show up afterwards, and you’ll be taken back to the VirtualBox main screen. Your MyJoomlaVM virtual machine will appear in the virtual machine list, as shown below: Now we need to tweak some network settings so your virtual machine can behave as a real PC with its own IP address. Click the Settings button to open the MyJoomlaVM – Settings window, and then select the Network section: Make sure the Adapter 1 tab is selected; then click on the Attached to list box and select Bridged Adapter instead of NAT: Click on the OK button to close the MyJoomlaVM – Settings window and return to the VirtualBox main screen. Installing the Joomla TurnkeyLinux appliance To start your virtual machine, double-click on its name in the virtual machines’ list or select it and click on the Start button: The first time you open a virtual machine, the First Run Wizard dialog shows up. This wizard helps you to install an operating system to your virtual machine. Click Next to go to the Select Installation Media window, where you can select a media source to install an operating system in your virtual machine. In this case you’re going to select the Turnkey Joomla ISO live CD image you downloaded before. Click on the folder icon located at the right-side of the Media Source list box: The Choose a virtual CD/DVD disk file dialog will open up. Use this dialog to locate and select the Joomla Turnkey ISO image your previously downloaded; then click on Open to return to the Select Installation Media dialog and click Next to continue. The Summary window will appear next, showing the media you selected. Click on Finish to exit the First Run Wizard and start your virtual machine. Wait until the TurnkeyLinux boot screen shows up; then make sure the Install to hard disk option is highlighted and hit Enter to proceed (you can also wait until installation begins automatically): Wait until the Debian Installer Live screen appears. Use the keyboard to select the Guided – use the entire disk option and hit Enter to continue: The next screen will ask you if you want to write the changes to disk. Select Yes and hit Enter to continue. The Debian Installer will start installing Ubuntu and the Joomla appliance in your virtual machine. After a while, a screen will appear asking if you want to install the GRUB boot loader to the master boot record. Select Yes and hit Enter to continue. The next screen will tell you that the installation is complete, and will ask if you want to restart your computer (virtual machine). Make sure Yes is selected and hit Enter to continue. 
Wait until your virtual machine boots up and asks you to type a new password for the root account. Type a secure password and hit Enter to continue. Type the password again and hit Enter to proceed. Now the system will ask for the MySQL server 'root' account’s password. Type a password of your choice and hit Enter. Repeat the procedure to confirm the password. Finally, the system will ask you to type a password for the Joomla 'admin' account. Choose a secure password, type it and hit Enter. Once again, repeat the procedure to confirm the password. The next step is to write the email address for the Joomla 'admin' account. Type a real email address and hit Enter to proceed. Next you’ll see a Link TKLBAM to the Turnkey Hub screen. In this case we’re not going to use the Turnkey Hub (a backup/restore system), so don’t type anything and hit Enter to continue. The next screen that will show up is Security Updates. You can leave the default option (Install) and hit Enter to proceed. (Be patient while the security updates get installed in your virtual machine; sometimes it can take several minutes.) Once the security updates finish installing in your virtual machine, the JOOMLA appliance services screen will pop up, and your virtual machine will be ready to roll: Write down the IP address assigned to your Joomla virtual machine (in the above picture it’s 192.168.1.79, but your IP address may vary). Then, open a web browser and type http://youripaddress (remember to replace youripaddress with the IP address you wrote down) to verify your Joomla virtual machine is working. The next screen should appear in your browser: Finally, you need to unmount the TurnkeyLinux Joomla ISO image from your machine’s virtual drive. This is to avoid booting up the ISO image again instead of booting up from your hard drive. Go to the Devices menu and select CD/DVD Devices > Remove disk from virtual drive: That’s it for now. Now let’s see how to get a free domain name and configure your Cable/DSL router to accept incoming connections for your Joomla virtual machine.

Laracon US 2019 highlights: Laravel 6 release update, Laravel Vapor, and more

Bhagyashree R
26 Jul 2019
5 min read
Laracon US 2019, probably the biggest Laravel conference, wrapped up yesterday. Its creator, Taylor Otwell, kick-started the event by talking about the next major release, Laravel 6. He also showcased a project that he has been working on called Laravel Vapor, a full-featured serverless management and deployment dashboard for PHP/Laravel.

https://twitter.com/taylorotwell/status/1154168986180997125

This was a two-day event from July 24-25 hosted at Times Square, NYC. The event brings together people passionate about creating applications with Laravel, the open-source PHP web framework. Many exciting talks were hosted at this game-themed event. Evan You, the creator of Vue, was there presenting what's coming in Vue.js 3.0. Caleb Porzio, a developer at Tighten Co., showcased a Laravel framework named Livewire that enables you to build dynamic user interfaces with vanilla PHP. Keith Damiani, a Principal Programmer at Tighten, talked about graph database use cases.

You can watch this highlights video compiled by Romega Digital to get a quick overview of the event: https://www.youtube.com/watch?v=si8fHDPYFCo&feature=youtu.be

Laravel 6 coming next month

Since its birth, Laravel has followed a custom versioning system, and it has been on 5.x releases for the past four years. The team has now decided to switch to a semantic versioning system. The framework currently stands at version 5.8, and instead of calling the new release 5.9, the team has decided to go with Laravel 6, which is scheduled for next month.

Otwell emphasized that they decided to make this change to bring more consistency to the ecosystem, as all optional Laravel components such as Cashier, Dusk, Valet, and Socialite already use semantic versioning. This does not mean that there will be any "paradigm shift" or that developers will have to rewrite their code. "This does mean that any breaking change would necessitate a new version which means that the next version will be 7.0," he added.

With the new version comes new branding

Laravel gets a fresh look with every major release, ranging from updated logos to a redesigned website. Initially, this was a volunteer effort with different people pitching in to help give Laravel a fresh look. Now that Laravel has some substantial backing, Otwell has worked with Focus Lab, one of the top digital design agencies in the US. Together they have come up with a new logo and a brand new website. The website looks easy to navigate and also provides improved documentation to give developers a good reading experience.

Source: Laravel

Laravel Vapor, a robust serverless deployment platform for Laravel

After giving a brief on version 6 and the updated branding, Otwell showcased his new project named Laravel Vapor. Currently, developers use Forge for provisioning and deploying their PHP applications on DigitalOcean, Linode, AWS, and more. It provides painless Virtual Private Server (VPS) management and is best suited for small and medium projects, performing well with basic load balancing. However, it lacks a few features that would be helpful for building bigger projects, such as autoscaling, and developers still have to worry about updating their operating systems and PHP versions. To address these limitations, Otwell created this deployment platform.

Here are some of the advantages Laravel Vapor comes with:

Better scalability: Otwell's demo showed that it can handle over half a million requests with an average response time of 12 ms.

Facilitates collaboration: Vapor is built around teams.
You can create as many teams as you require while paying for a single plan.

Fine-grained control: It gives you fine-grained control over what each team member can do. You can set what each of them can do across all the resources Vapor manages.

A "vanity URL" for different environments: Vapor gives you a staging domain, which you can access with what Otwell calls a "vanity URL." It enables you to immediately access your applications with "a nice domain that you can share with your coworkers until you are ready to assign a custom domain," says Otwell.

Environment metrics: Vapor provides many environment metrics that give you an overview of an application environment. These metrics include how many HTTP requests the application has received in the last 24 hours, how many CLI invocations there have been, their average duration, how much these cost on Lambda, and more.

Logs: You can review and search your recent logs right from the Vapor UI. The view also auto-updates whenever a new entry arrives in the log.

Databases: With Vapor, you can provision two types of databases: fixed-size and serverless. With a fixed-size database, you pick its specifications, such as vCPU and RAM, up front. With a serverless database, you do not select these specifications and it automatically scales according to demand.

Caches: You can create Redis clusters right from the Vapor UI with as many nodes as you want. It supports the creation and management of elastic Redis cache clusters, which can be scaled without experiencing any downtime. You can attach them to any of the team's projects and use them with multiple projects at the same time.

To watch the entire demonstration by Otwell, check out this video: https://www.youtube.com/watch?v=XsPeWjKAUt0&feature=youtu.be

Laravel 5.7 released with support for email verification, improved console testing
Building a Web Service with Laravel 5
Symfony leaves PHP-FIG, the framework interoperability group

How to create a generic reusable section for a single page based website

Savia Lobo
17 May 2018
14 min read
There are countless variations when it comes to different sections that can be incorporated into the design of a single page website. In this tutorial, we will cover how to create a generic section that can be extended to multiple sections. This section provides the ability to display any information your website needs. Single page sections are commonly used to display the following data to the user: Contact form (will be implemented in the next chapter). About us: This can be as simple as a couple of paragraphs talking about the company/individual or more complex with images, even showing the team and their roles. Projects/work: Any work you or the company has done and would like to showcase. They are usually linked to external pages or pop up boxes containing more information about the project. Useful company info such as opening times. These are just some of the many uses for sections in a single page website. A good rule of thumb is that if it can be a page on another website it can most likely be adapted into sections on a single page website. Also, depending on the amount of information a single section has, it could potentially be split into multiple sections. This article is an excerpt taken from the book,' Responsive Web Design by Example', written by Frahaan Hussain. Single page section examples Let's go through some examples of the sections mentioned above. Example 1: Contact form As can be seen, by the contact form from Richman, the elements used are very similar to that of a contact page. A form is used with inputs for the various pieces of information required from the user along with a button for submission: Not all contact forms will have the same fields. Put what you need, it may be more or less, there is no right or wrong answer. Also at the bottom of the section is the company's logo along with some written contact information, which is also very common. Some websites also display a map usually using the Google Maps API; these mainly have a physical presence such as a store. Website link—http://richman-kcm.com/ Example 2: About us This is an excellent example of an about us page that uses the following elements to convey the information: Images: Display the individual's face. Creates a very personal touch to the otherwise digital website. Title: Used to display the individual's name. This can also be an image if you want a fancier title. Simple text: Talks about who the person is and what they do. Icons: Linking to the individual's social media accounts. Website link—http://designedbyfew.com/ Example 3: Projects/work This website shows its work off very elegantly and cleanly using images and little text: It also provides a carousel-like slider to display the work, which is extremely useful for displaying the content bigger without displaying all of it at once and it allows a lot of content for a small section to be used. Website link: http://peeltheorange.com/#recent-work Example 4: Opening times This website uses a background image similar to the introduction section created in the previous chapter and an additional image on top to display the opening times. This can also be achieved using a mixture of text and CSS styling for various facets such as the border. Website link—http://www.mumbaigate.co.uk/ Implementing generic reusable single page section We will now create a generic section that can easily be modified and reused to our single page portfolio website. 
But we still need some sort of layout/design in mind before we implement the section, let's go with an Our Team style section. What will the Our Team section contain? The Our Team section will be a bit simpler, but it can easily be modified to accommodate the animations and styles displayed on the previously mentioned websites. It will be similar to the following example: Website link—http://demo.themeum.com/html/oxygen/ The preceding example consists of the following elements: Heading Intro text (Lorem Ipsum in this case) Images displaying each member of the team Team member's name Their role Text informing the viewer a little bit about them Social links We will also create our section using a similar layout. We are now finally going to use the column system to its full potential to provide a responsive experience using breakpoints. Creating the Our Team section container First let's implement a simple container, with the title and section introduction text, without any extra elements such as an image. We will then use this to link to our navigation bar. Add the following code to the jumbotron div: Let's go over what the preceding code is doing: Line 9 creates a container that is fluid, allowing it to span the browser's width fully. This can be changed to a regular container if you like. The id will be used very soon to link to the navigation bar. Line 10 creates a row in which our text elements will be stored. Line 11 creates a div that spans all the 12 columns on all screen sizes and centers the text inside of it. Line 12 creates a simple header for the Team section. Line 14 to Line 16 adds introduction text. I have put the first two sentences of "Lorem Ipsum..." inside of it, but you can put anything you like. All of this produces the following result: Anchoring the Team section to the navigation bar We will now link the navigation bar to the Team section. This will allow the user to navigate to the Team section without having to scroll up or down. At the moment, there is no need to scroll up, but when more content is added this can become a problem as a single page website can become quite long. Fortunately, we have already done the heavy lifting with the navigation bar through HTML and JavaScript, phew! First, let's change the name of the second button in the navigation bar to Team. Update the navigation bar like so: The navigation bar will now look as follows: Fantastic, our navigation bar is looking more like what you would see on a real website. Now let's change href to the same ID as the Team section, which was #TeamSection like so: Now when we click on any of the navigation buttons we get no JavaScript errors like we would have in the previous chapter. Also, it automatically scrolls to each section without any extra JavaScript code. Adding team pictures Now let's use images to showcase the team members. I will use the image from the following link for our employees, but in a real website you would obviously use different images: http://res.cloudinary.com/dmliyxggm/image/upload/v1511699813/John_vepwoz.png I have modified the image so all the background is removed and the image is trimmed, so it looks as follows: Up until now, all images that we have used have been stored on other websites such as CDN's, this is great, but the need may arise when the use of a custom image like the previous one is needed. We can either store it on a CDN, which is a very good approach, and I would recommend Cloudinary (http://cloudinary.com/), or we can store it locally, which we will do now. 
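The markup referred to in the steps above ('Add the following code to the jumbotron div' and the navigation bar update) is not included in the text here, so the following is a minimal sketch reconstructed from the line-by-line description. The heading text, the placeholder Lorem Ipsum copy, and the nav-link class on the navigation button are assumptions; only the container-fluid behaviour, the TeamSection id, the full-width centred column, and the Team label are described explicitly:

<!-- Team section container: fluid, with an id the navigation bar can anchor to -->
<div class="container-fluid" id="TeamSection">
  <div class="row">
    <!-- Spans all 12 columns on every screen size and centres its text -->
    <div class="col-12 text-center">
      <h1>Our Team</h1>
      <p>
        Lorem ipsum dolor sit amet, consectetur adipiscing elit.
        Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
      </p>
    </div>
  </div>
</div>

<!-- Second navigation button renamed to Team and pointed at the section's id -->
<a class="nav-link" href="#TeamSection">Team</a>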
A CDN is a Content Delivery Network that has a sole purpose of delivering content such as images to other websites using the best and fastest servers available to a specific user. I would definitely recommend using one. Create a folder called Images and place the image using the following folder structure: Root CSS Images Team Thumbnails Thumbnails.png Index.php JS SNIPPETS This may seem like overkill, considering we only have one image, but as your website gets more complex you will store more images and having an intelligent folder structure/hierarchy will save an immense amount of time. Add the following code to the first row like so: The code we have added doesn't actually provide any visual changes as it is nothing but empty div classes. But these div classes will serve as structures for each team member and their respective content such as name and social links. We created a new row to group our new div classes. Inside each div we will represent each team member. The classes have been set up to be displayed like so: Extra small screens will only show a single team member on a single row Small and medium screens will show two team members on a single row Large and extra large screens will show four team members on a single row The rows are rows in their literal sense and not the class row. Another way to look at them is as lines. The sizes/breakpoints can easily be changed using the information regarding the grid from this article What Is Bootstrap, Why Do We Use It? Now let's add the team's images, update the previous code like so: The preceding code produces the following result: As you can see, this is not the desired effect we were looking for. As there are no size restrictions on the image, it is displayed at its original size. Which, on some screens, will produce a result similar to the monstrosity you saw before; worry not, this can easily be fixed. Add the classes img-fluid and img-thumbnail to each one of the images like so: The classes we added are designed to provide the following styling: img-fluid: Provides a responsive image that is automatically restricted based on the number of columns and browser size. img-thumbnail: Is more of an optional class, but it is still very useful. It provides a light border around the images to make them pop. This produces the following result: As it can be seen, this is significantly better than our previous result. Depending on the browser/screen size, the positioning will slightly change based on the column breakpoints we specified. As usual, I recommend that you resize the browser to see the different layouts. These images are almost complete; they look fine on most screen sizes, but they aren't actually centered within their respective div. This is evident in larger screen sizes, as can be seen here: It isn't very noticeable, but the problem is there, it can be seen to the right of the last image. You probably could get away without fixing this, but when creating anything, from a website to a game, or even a table, the smallest details are what separate the good websites from the amazing websites. This is a simple idea called the aggregation of marginal gains. Fortunately for us, like many times before, Bootstrap offers functionality to resolve our little problem. Simply add the text-center class, to the row within the div of the images like so:   This now produces the following result: There is one more slight problem that is only noticeable on smaller screens when the images/member containers are stacked on top of each other. 
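The column markup referenced in the preceding steps is likewise not shown, so here is a hedged reconstruction of a single team member column. The breakpoint classes are inferred from the stated behaviour (one member per row on extra small screens, two on small and medium, four on large and extra large) and the image path from the folder structure above, so treat both as assumptions:

<div class="row text-center">
  <!-- One member: full width on xs, half width on sm/md, quarter width on lg/xl -->
  <div class="col-12 col-sm-6 col-lg-3">
    <img src="Images/Team/Thumbnails/Thumbnails.png"
         class="img-fluid img-thumbnail" alt="Team member">
  </div>
  <!-- ...repeat the column div for each remaining team member... -->
</div>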
This now produces the following result:

There is one more slight problem, which is only noticeable on smaller screens when the images/member containers are stacked on top of each other. The following result is produced:

The problem might not jump out at first glance, but look closely at the gaps between the stacked images, or I should say, at the lack of a gap. This isn't the end of the world, but again, the small details make an immense difference to the look of a website. It can easily be fixed by adding padding to each team member div.

First, add a class of teamMemberContainer to each team member div like so:

Then add the following CSS code to the index.css file to provide a more visible gap through the use of padding (see the sketch at the end of this part):

This simple solution now produces the following result:

If you want the gap to be bigger, simply increase the padding value; lower it to reduce the gap.

Team member info text
The previous section covered quite a lot; if you're not 100% sure about what we did, just go back and take a second look. This section will thankfully be very simple, as it incorporates techniques and features we have already covered to add the following information to each team member:
- Name
- Job title
- Member info text
- Plus anything else you need

Update each team member container with the following code:

Let's go over the new code line by line:
- Line 24 adds a simple header that is intended to display the team member's name. I have chosen an h4 tag, but you can use something bigger or smaller if you like.
- Line 26 adds the team member's job title. I have used a paragraph element with the Bootstrap class text-muted, which lightens the text color. If you would like more information regarding text styling within Bootstrap, feel free to check out the following link.
- Line 28 adds a simple paragraph with no extra styling to display some information about the team member.

Bootstrap text styling link: https://v4-alpha.getbootstrap.com/utilities/colors/

The code that we just added will produce the following result:

As usual, resize your browser to simulate different screen sizes. I use Chrome as my main browser, but Safari has an awesome feature baked right in that allows you to see how your website will run on different browsers/devices; this link will help you use it: https://www.tekrevue.com/tip/safari-responsive-design-mode/. Most browsers have a plethora of plugins to aid in this process, but not only does Safari have it built in, it works really well.

It all looks fantastic, but again I will nitpick at the gaps. The image is right on top of the team member's name text; a small gap would really help improve the visual fidelity. Add a class of teamMemberImage to each image tag, as demonstrated here:

Now add the following code to the index.css file, which will apply a margin of 10px below the image, hence moving all the content down (a consolidated sketch of the finished team member card and its CSS follows):
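Again, as the original listings are not included in this extract, the following is a hedged sketch of what a single team member's markup and the related index.css rules could look like once everything above is applied. The name, role, and bio copy are placeholders, and the padding value is an assumption; only the class names teamMemberContainer and teamMemberImage, the h4/text-muted/paragraph structure, and the 10px bottom margin come from the text above.

<!-- One team member: the container class adds the gap, the image classes keep it responsive -->
<div class="col-12 col-sm-6 col-lg-3 teamMemberContainer">
  <img src="Images/Team/Thumbnails/Thumbnails.png" alt="Team member"
       class="img-fluid img-thumbnail teamMemberImage">
  <h4>John Smith</h4>
  <p class="text-muted">Lead Developer</p>
  <p>A short paragraph telling the viewer a little bit about this team member.</p>
</div>

And the matching rules in index.css:

/* Gap between stacked team member containers; the exact value is an assumption, adjust to taste */
.teamMemberContainer {
  padding: 10px;
}

/* 10px gap between the image and the name below it */
.teamMemberImage {
  margin-bottom: 10px;
}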
Change the margin to suit your needs. This very simple code will produce the following similar, yet subtly different and more visually appealing, result:

Team member social links
We have almost completed the Team section; only the social links remain for each team member. I will be using simple images for the social buttons from the following link:

https://simplesharebuttons.com/html-share-buttons/

I will also only be adding three social icons, but feel free to add as many or as few as you need. Add the following code to the bottom of each team member container:

Let's go over each new line of code:
- Line 30 creates a div to store all the social buttons for each team member
- Line 31 creates a link to Facebook (add your social link in the href)
- Line 32 adds an image to show the Facebook social link
- Line 35 creates a link to Google+ (add your social link in the href)
- Line 36 adds an image to show the Google+ social link
- Line 39 creates a link to Twitter (add your social link in the href)
- Line 40 adds an image to show the Twitter social link

We have also added a class that still needs to be implemented, but let's first run our website to see the result without any styling:

It looks OK, but the social icons are a bit big, especially if we were to have more icons. Add the following CSS styling to the index.css file (a hedged sketch of both the markup and this rule follows):
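As before, the book's listings aren't reproduced here, so this is a hedged sketch of the social links block walked through above, together with the index.css rule just mentioned. The socialLinks and socialIcon class names, the image file names, and the placeholder hrefs are assumptions; only the three networks, the overall structure, and the 50px width come from the text.

<!-- Social links for one team member; replace the # hrefs with real profile URLs -->
<div class="socialLinks">
  <a href="#">
    <img src="Images/Social/facebook.png" alt="Facebook" class="socialIcon">
  </a>
  <a href="#">
    <img src="Images/Social/googleplus.png" alt="Google+" class="socialIcon">
  </a>
  <a href="#">
    <img src="Images/Social/twitter.png" alt="Twitter" class="socialIcon">
  </a>
</div>

/* index.css: keep the social icons small; only the width is set so the height scales with the image ratio */
.socialIcon {
  width: 50px;
}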
This piece of code simply restricts the social icons' width to 50px. Setting only the width causes the height to be calculated automatically, which ensures that any change to the image that involves a ratio change won't mess up the look of the icons. This now produces the following result:

Feel free to change the width to suit your desires. With the social buttons implemented, we are done. If you've enjoyed this tutorial, check out Responsive Web Design by Example to create a cool blog page, beautiful portfolio site, or professional business site, and make them all totally responsive.