Using Mock Objects to Test Interactions

Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event.

A more detailed description is as follows:

- Event classes have a connect method, which takes a method or function to be called when the event fires
- When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method

Writing tests for the connect method is fairly straightforward—we just need to check that the receivers are being stored properly. But, how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly?

This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occur as they should.

Hand writing a simple mock

To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory:

```python
class Event:
    """A generic class that provides signal/slot functionality"""

    def __init__(self):
        self.listeners = []

    def connect(self, listener):
        self.listeners.append(listener)

    def fire(self, *args, **kwargs):
        for listener in self.listeners:
            listener(*args, **kwargs)
```

The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This will register the function for the event. Then, when the event is fired using the fire method, all the registered functions will be notified of the event. The following is a walk-through of how this class is used:

```python
>>> def handle_event(num):
...     print("I got number {0}".format(num))
...
>>> event = Event()
>>> event.connect(handle_event)
>>> event.fire(3)
I got number 3
>>> event.fire(10)
I got number 10
```

As you can see, every time the fire method is called, all the functions that registered with the connect method get called with the given parameters. So, how do we test the fire method? The walk-through above gives a hint. What we need to do is to create a function, register it using the connect method, and then verify that the function got notified when the fire method was called. The following is one way to write such a test:

```python
import unittest
from ..event import Event


class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        called = False

        def listener():
            nonlocal called
            called = True

        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(called)
```

Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing:

- First, we create a variable named called and set it to False.
- Next, we create a dummy function. When the function is called, it sets called to True.
- Finally, we connect the dummy function to the event and fire the event.
If the dummy function was successfully called when the event was fired, then the called variable would be changed to True, and we assert that the variable is indeed what we expected.

The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock then records some information such as whether it was called, what parameters were passed, and so on, and we can then assert that the mock was called as expected.

Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:

```python
    def test_a_listener_is_passed_right_parameters(self):
        params = ()

        def listener(*args, **kwargs):
            nonlocal params
            params = (args, kwargs)

        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEqual(((5, ), {"shape": "square"}), params)
```

This test is the same as the previous one, except that it saves the parameters that are then used in the assert to verify that they were passed properly.

At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows:

```python
class Mock:
    def __init__(self):
        self.called = False
        self.params = ()

    def __call__(self, *args, **kwargs):
        self.called = True
        self.params = (args, kwargs)
```

Once we do this, we can use our Mock class in our tests as follows:

```python
class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)

    def test_a_listener_is_passed_right_parameters(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEqual(((5, ), {"shape": "square"}),
                         listener.params)
```

What we have just done is to create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library.

Using the Python mocking framework

The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows:

```python
from unittest import mock
```

Next, we rewrite our first test as follows:

```python
class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)
```

The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is to instantiate the class and pass it in where it is required. The mock will record if it was called in the called instance variable.
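The article only uses the called flag, but a Mock records more than that. The call_count and call_args attributes are also part of unittest.mock and are handy for quick checks; the following is a minimal interactive sketch of my own, not taken from the article:

```python
>>> from unittest import mock
>>> listener = mock.Mock()
>>> listener.called
False
>>> _ = listener(5, shape="square")
>>> listener.called
True
>>> listener.call_count
1
>>> listener.call_args
call(5, shape='square')
```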
How do we check that the right parameters were passed? Let us look at the rewrite of the second test as follows:

```python
    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_called_with(5, shape="square")
```

The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), then we can pass the value mock.ANY. This value will match any parameter passed.

There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.TestCase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEqual. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with.

Mock objects have the following four assertions available out of the box:

- assert_called_with: This method asserts that the last call was made with the given parameters
- assert_called_once_with: This assertion checks that the method was called exactly once and was with the given parameters
- assert_any_call: This checks that the given call was made at some point during the execution
- assert_has_calls: This assertion checks that a list of calls occurred

The four assertions are very subtly different, and that shows up when the mock has been called more than once. The assert_called_with method only checks the last call, so if there was more than one call, then the previous calls will not be asserted. The assert_any_call method will check if a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts for a single call, so if the mock was called more than once during execution, then this assert would fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion would still pass as long as the given calls are present.

Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:

```python
    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_has_calls([mock.call(5, shape="square")])
```

The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls.
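Since the differences between these assertions only show up once a mock has been called more than once, a short interactive sketch (my own example, not from the article) may make them concrete:

```python
>>> from unittest import mock
>>> m = mock.Mock()
>>> _ = m(1)
>>> _ = m(2)
>>> m.assert_called_with(2)        # passes: only the last call is checked
>>> m.assert_any_call(1)           # passes: 1 was used at some point
>>> m.assert_has_calls([mock.call(1), mock.call(2)])   # passes
>>> m.assert_called_once_with(2)   # fails: the mock was called twice
Traceback (most recent call last):
  ...
AssertionError: Expected 'mock' to be called once. Called 2 times.
```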
Mocking objects

While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. Take a look at the implementation of the Alert class in the following:

```python
class Alert:
    """Maps a Rule to an Action, and triggers the action if the rule
    matches on any stock update"""

    def __init__(self, description, rule, action):
        self.description = description
        self.rule = rule
        self.action = action

    def connect(self, exchange):
        self.exchange = exchange
        dependent_stocks = self.rule.depends_on()
        for stock in dependent_stocks:
            exchange[stock].updated.connect(self.check_rule)

    def check_rule(self, stock):
        if self.rule.matches(self.exchange):
            self.action.execute(self.description)
```

Let's break down how this class works:

- The Alert class takes a Rule and an Action in the initializer.
- When the connect method is called, it takes all the dependent stocks and connects to their updated event. The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock.
- The listener for this event is the self.check_rule method of the Alert class. In this method, the alert checks if the new update caused a rule to be matched. If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens.

This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test:

- If a stock is updated, the class should check if the rule matches
- If the rule matches, then the corresponding action should be executed
- If the rule doesn't match, then nothing happens

There are a number of different ways in which we could test this; let us go through some of the options. The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like:

```python
import unittest
from datetime import datetime
from unittest import mock

from ..alert import Alert
from ..rule import PriceRule
from ..stock import Stock


class TestAction:
    executed = False

    def execute(self, description):
        self.executed = True


class AlertTest(unittest.TestCase):
    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = TestAction()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        self.assertTrue(action.executed)
```

This is the most straightforward option, but it requires a bit of code to set up and there is the TestAction that we need to create just for the test case. Instead of creating a test action, we could instead replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:

```python
    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")
```
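The third requirement listed above—if the rule doesn't match, then nothing happens—is not shown in the article, but a sketch of that negative test, following the same pattern, might look like this (the rule lambda and the assertion are my own choices and assume PriceRule behaves as in the matching test):

```python
    def test_action_is_not_executed_when_rule_does_not_match(self):
        exchange = {"GOOG": Stock("GOOG")}
        # a threshold the update of 11 will not reach
        rule = PriceRule("GOOG", lambda stock: stock.price > 100)
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        self.assertFalse(action.execute.called)
```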
A couple of observations about the test that uses the mock action. First, notice that the action is not the usual Mock object that we have been using so far, but a MagicMock object. A MagicMock object is like a Mock object, but it has special support for Python's magic methods, which are present on all classes, such as __str__ and __len__. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference:

```python
>>> from unittest import mock
>>> mock_1 = mock.Mock()
>>> mock_2 = mock.MagicMock()
>>> len(mock_1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'Mock' has no len()
>>> len(mock_2)
0
>>>
```

In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock standalone functions, or in rare situations where we specifically don't want a default implementation for the magic methods.

The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method and how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called.

The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, it still passes. Why does it pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions.

To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock as well as MagicMock classes support setting a specification. The following code example shows the difference when a spec parameter is set compared to a default Mock object:

```python
>>> from unittest import mock
>>> class PrintAction:
...     def run(self, description):
...         print("{0} was executed".format(description))
...

>>> mock_1 = mock.Mock()
>>> mock_1.execute("sample alert")  # Does not give an error
<Mock name='mock.execute()' id='54481752'>

>>> mock_2 = mock.Mock(spec=PrintAction)
>>> mock_2.execute("sample alert")  # Gives an error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 557, in __getattr__
    raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'execute'
```

Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed in the PrintAction. On the other hand, by giving a spec, the method call to the nonexistent execute method raises an exception.
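A side benefit of spec that the article doesn't mention: a mock created with a spec also passes isinstance checks against the spec class, which matters if the code under test does type checks. A quick sketch, reusing the PrintAction class from above:

```python
>>> isinstance(mock.Mock(spec=PrintAction), PrintAction)
True
>>> isinstance(mock.Mock(), PrintAction)
False
```

If you also want argument signatures checked, unittest.mock.create_autospec(PrintAction) builds a spec'd mock from the class and rejects calls whose arguments don't match the real method signatures.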
Mocking return values

The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returned True or False. All the mocks we've created so far have not had to return a value. We were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do.

All we have to do is to set the return value as a parameter in the constructor to the mock object as follows:

```python
>>> matches = mock.Mock(return_value=True)
>>> matches()
True
>>> matches(4)
True
>>> matches(4, "abcd")
True
```

As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object as follows:

```python
>>> rule = mock.MagicMock()
>>> rule.matches = mock.Mock(return_value=True)
>>> rule.matches()
True
>>>
```

There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value and every call to the mock will return that value, as shown in the following:

```python
>>> from unittest import mock
>>> rule = mock.MagicMock()
>>> rule.matches.return_value = True
>>> rule.matches()
True
>>>
```

In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement without having to create a mock for the matches method.

Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:

```python
    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")
```

There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them and the test passes. In case no return value is explicitly set for a call, the default return value is a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means if the same method is called multiple times, the same mock object will be returned each time.
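This default behavior is easy to verify interactively; the following is my own sketch, not from the article:

```python
>>> from unittest import mock
>>> m = mock.MagicMock()
>>> m.matches() is m.matches()     # same method, same default mock each time
True
>>> m.matches() is m.depends_on()  # different methods get different mocks
False
```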
Mocking side effects

Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace it with a mock object just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value—its primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. In order to do this, we must tell our mock stock class to fire the event when the update method is called. Mock objects have a side_effect attribute to enable us to do just this.

There are many reasons we might want to set a side effect. Some of them are as follows:

- We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called.
- To raise an exception: this is particularly useful when testing error situations. Some errors such as a network timeout might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception.
- To return multiple values: these may be different values each time the mock is called, or specific values, depending on the parameters passed.

Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked out Stock class:

```python
    def test_action_is_executed_when_rule_matches(self):
        goog = mock.MagicMock(spec=Stock)
        goog.updated = Event()
        goog.update.side_effect = lambda date, value: goog.updated.fire(self)
        exchange = {"GOOG": goog}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")
```

So what is going on in that test?

- First, we create a mock of the Stock class instead of using the real one.
- Next, we add in the updated event. We need to do this because the Stock class creates the attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We are setting an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that.
- Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that.

Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass.
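To illustrate the "different actions based on the values of the parameters" case, a side effect can also be an ordinary function that inspects its arguments. The following sketch is my own and not from the article; when side_effect is a function, whatever it returns is what the mock call returns:

```python
>>> from unittest import mock
>>> def fake_update(date, value):
...     if value < 0:
...         raise ValueError("price cannot be negative")
...     return value
...
>>> stock = mock.MagicMock()
>>> stock.update.side_effect = fake_update
>>> stock.update("2014-02-10", 11)
11
>>> stock.update("2014-02-10", -1)
Traceback (most recent call last):
  ...
ValueError: price cannot be negative
```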
The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following:

```python
>>> m = mock.Mock()
>>> m.side_effect = Exception()
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 941, in _mock_call
    raise effect
Exception
```

If it is set to a list, then the mock will return the next element of the list each time it is called. This is a good way to mock a function that has to return different values each time it is called, as shown in the following:

```python
>>> m = mock.Mock()
>>> m.side_effect = [1, 2, 3]
>>> m()
1
>>> m()
2
>>> m()
3
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 944, in _mock_call
    result = next(effect)
StopIteration
```

As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful.

How much mocking is too much?

In the previous few sections, we've seen the same test written with different levels of mocking. We started off with a test that didn't use any mocks at all, and subsequently mocked out each of the dependencies one by one. Which one of these solutions is the best? As with many things, this is a point of personal preference. A purist would probably choose to mock out all dependencies. My personal preference is to use real objects when they are small and self-contained. I would not have mocked out the Stock class. This is because mocks generally require some configuration with return values or side effects, and this configuration can clutter the test and make it less readable. For small, self-contained classes, it is simpler to just use the real object.

At the other end of the spectrum, classes that might interact with external systems, or that take a lot of memory, or are slow are good candidates for mocking out. Additionally, objects that require a lot of dependencies on other objects to initialize are candidates for mocking. With mocks, you just create an object, pass it in, and assert on parts that you are interested in checking. You don't have to create an entirely valid object.

Even here there are alternatives to mocking. For example, when dealing with a database, it is common to mock out the database calls and hardcode a return value into the mock. This is because the database might be on another server, and accessing it makes the tests slow and unreliable. However, instead of mocks, another option could be to use a fast in-memory database for the tests. This allows us to use a live database instead of a mocked out database. Which approach is better depends on the situation.

Mocks versus stubs versus fakes versus spies

We've been talking about mocks so far, but we've been a little loose on the terminology. Technically, everything we've talked about falls under the category of a test double. A test double is some sort of fake object that we use to stand in for a real object in a test case.

Mocks are a specific kind of test double that records information about calls that have been made to it, so that we can assert on them later.

Stubs are just an empty do-nothing kind of object or method. They are used when we don't care about some functionality in the test. For example, imagine we have a method that performs a calculation and then sends an e-mail. If we are testing the calculation logic, we might just replace the e-mail sending method with an empty do-nothing method in the test case so that no e-mails are sent out while the test is running.

Fakes are a replacement of one object or system with a simpler one that facilitates easier testing. Using an in-memory database instead of the real one, or the way we created a dummy TestAction earlier in this article, would be examples of fakes.

Finally, spies are objects that are like middlemen. Like mocks, they record the calls so that we can assert on them later, but after recording, they continue execution to the original code. Spies are different from the other three in the sense that they do not replace any functionality. After recording the call, the real code is still executed. Spies sit in the middle and do not cause any change in execution pattern.
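unittest.mock does not ship a dedicated spy class, but its wraps argument gets close: calls are recorded like on a mock while still being forwarded to the real object. A minimal sketch of my own, assuming the PrintAction class defined earlier:

```python
>>> from unittest import mock
>>> real_action = PrintAction()
>>> spy = mock.MagicMock(wraps=real_action)
>>> spy.run("sample alert")        # the real method still runs
sample alert was executed
>>> spy.run.assert_called_with("sample alert")
```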
Summary

In this article, you looked at how to use mocks to test interactions between objects. You saw how to hand write your own mocks, followed by using the mocking framework provided in the Python standard library.
Read more
  • 0
  • 0
  • 11065

article-image-restful-java-web-services-design
Packt
18 Nov 2009
5 min read
Save for later

RESTful Java Web Services Design

Packt
18 Nov 2009
5 min read
We'll leave the RESTful implementation for a later article. Our sample application is a micro-blogging web service (similar to Twitter), where users create accounts and then post entries. Finally, while designing our application, we'll define a set of steps that can be applied to designing any software system that needs to be deployed as a RESTful web service.

Designing a RESTful web service

Designing RESTful web services is not different from designing traditional web applications. We still have business requirements, we still have users who want to do things with data, and we still have hardware constraints and software architectures to deal with. The main difference, however, is that we look at the requirements to tease out resources and forget about specific actions to be taken on these resources. We can think of RESTful web service design as being similar to Object Oriented Design (OOD). In OOD, we try to identify objects from the data we want to represent together with the actions that an object can have. But the similarities end at the data structure definition, because with RESTful web services we already have specific calls that are part of the protocol itself.

The underlying RESTful web service design principles can be summarized in the following four steps:

- Requirements gathering—this step is similar to traditional software requirement gathering practices.
- Resource identification—this step is similar to OOD where we identify objects, but we don't worry about messaging between objects.
- Resource representation definition—because we exchange representations between clients and servers, we should define what kind of representation we need to use. Typically, we use XML, but JSON has gained popularity. That's not to say that we can't use any other form of resource representation—on the contrary, we could use XHTML or any other form of binary representation, though we let the requirements guide our choices.
- URI definition—with resources in place, we need to define the API, which consists of URIs for clients and servers to exchange resources' representations.

This design process is not static. These are iterative steps that gravitate around resources. Let's say that during the URI definition step we discover that one of the URI's responses is not covered in one of the resources we have identified. Then we go back to define a suitable resource. In most cases, however, we find that the resources that we already have cover most of our needs, and we just have to combine existing resources into a meta-resource to take care of the new requirement.

Requirements of sample web service

The RESTful web service we design in this article is a social networking web application similar to Twitter. We follow an OOD process mixed with an agile philosophy for designing and coding our applications. This means that we create just enough documentation to be useful, but not so much that we spend an inordinate amount of time deciphering it during our implementation phase. As with any application, we begin by listing the main business requirements, for which we have the following use cases (these are the main functions of our application):

- A web user creates an account with a username and a password (creating an account means that the user is now registered).
- Registered users post blog entries to their accounts. We limit messages to 140 characters.
- Registered and non-registered users view all blog entries.
- Registered and non-registered users view user profiles.
- Registered users update their user profiles, for example, users update their password.
- Registered and non-registered users search for terms in all blog entries.

However simple this example may be, social networking sites work on these same principles: users sign up for accounts to post personal updates or information. Our intention here, though, is not to fully replicate Twitter or to fully create a social networking application. What we are trying to outline is a set of requirements that will test our understanding of RESTful web services design and implementation. The core value of social networking sites lies in the ability to connect to multiple users who connect with us, and the value is derived from what the connections mean within the community, because of the tendency of users following people with similar interests. For example, the connections between users create targeted distribution networks. The connections between users create random graphs in the graph theory sense, where nodes are users and edges are connections between users. This is what is referred to as the social graph.

Resource identification

Out of the use cases listed above, we now need to define the service's resources. From reading the requirements we see that we need users and messages. Users appear in two ways: a single user and a list of users. Additionally, users have the ability to post blog entries in the form of messages of no more than 140 characters. This means that we need resources for a single message and a list of messages. In sum, we identify the following resources:

- User
- List of users
- Message
- List of messages
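The excerpt stops at resource identification, but to make the next step of the process concrete, one possible URI layout for these resources could look like the following. The paths are purely illustrative choices of mine and are not defined in the article:

```
GET  /users                    list of users
POST /users                    create an account (register)
GET  /users/{username}         a single user profile
PUT  /users/{username}         update a profile (registered users)
GET  /messages                 list of all blog entries
POST /messages                 post a new entry (registered users)
GET  /messages/{id}            a single blog entry
GET  /messages?search={term}   search for a term in all entries
```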

Finishing Touches and Publishing

Packt
18 Feb 2014
7 min read
Publishing a Video Demo project

Due to its very nature, a Video Demo project can only be published as an .mp4 video file. In the following exercise, you will return to the encoderVideo.cpvc project and explore the available publishing options:

- Open the encoderVideo.cpvc file under Chapter08.
- Make sure the file opens in Edit mode. If you are not in Edit mode, click on the Edit button at the lower-right corner of the screen. (If the Edit button is not displayed on the screen, it simply means that you already are in Edit mode.)
- When the file is open in Edit mode, take a look at the main toolbar at the top of the interface. Click on the Publish icon or navigate to File | Publish. The Publish Video Demo dialog opens.
- In the Publish Video Demo dialog, make sure the Name of the project is encoderVideo.
- Click on the … button and choose the publish folder of your exercises as the destination of the published video file.
- Open the Preset dropdown. Take some time to inspect the available presets. When done, choose the Video - Apple iPad preset.
- Make sure the Publish Video Demo dialog looks similar to what is shown in the following screenshot and click on the Publish button.

Publishing a Video Demo project can be quite a lengthy process, so be patient. When the process is complete, a message asks you what to do next. Notice that one of the options enables you to upload your newly created video to YouTube directly.

- Click on the Close button to discard the message.
- Use Windows Explorer (Windows) or the Finder (Mac) to go to the publish folder of your exercises.
- Double-click on the encoderDemo.mp4 file to open the video in the default video player of your system.

Remember that a Video Demo project can only be published as a video file. Also remember that the published .mp4 video file can only be experienced in a linear fashion and does not support any kind of interactivity.

Publishing to Flash

In the history of Captivate, publishing to Flash has always been the primary publishing option. Even though HTML5 publishing is a game changer, publishing to Flash is still an important capability of Captivate. Remember that this publishing format is currently the only one that supports every single feature, animation, and object of Captivate. In the following exercise, you will publish the Encoder Demonstration project in Flash using the default options:

- Return to the encoderDemo_800.cptx file under Chapter08.
- Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also navigate to File | Publish. The Publish dialog box opens as shown in the following screenshot.

Notice that the Publish dialog of a regular Captivate project contains far more options than its Publish Video Demo counterpart in .cpvc files. The Publish dialog box is divided into four main areas:

- The Publish Format area (1): This is where you choose the format in which you want to publish your projects. Basically, there are three options to choose from: SWF/HTML5, Media, and Print. The other options (E-Mail, FTP, and Adobe Connect) are actually suboptions of the SWF/HTML5, Media, and Print formats.
- The Output Format Options area (2): The content of this area depends on the format chosen in the Publish Format (1) area.
- The Project Information area (3): This area is a summary of the main project preferences and metadata. Clicking on the links of this area will bring you back to the corresponding preferences dialog boxes.
- The Advanced Options area (4): This area provides some additional advanced publishing options.

You will now move on to the actual publication of the project in the Flash format:

- In the leftmost column of the Publish dialog, make sure the chosen format is SWF/HTML5.
- In the central area, change the Project Title to encoderDemo_800_flash.
- Click on the Browse… button situated just below the Folder field and choose to publish your movie in the publish folder of your exercises folder.
- Make sure the Publish to Folder checkbox is selected.
- Take a quick look at the remaining options, but leave them all at their current settings.
- Click on the Publish button at the bottom-right corner of the Publish dialog box.
- When Captivate has finished publishing the movie, an information box appears on the screen asking whether you want to view the output. Click on No to discard the information box and return to Captivate.

You will now use the Finder (Mac) or the Windows Explorer (Windows) to take a look at the files Captivate has generated:

- Use the Finder (Mac) or the Windows Explorer (Windows) to browse to the publish folder of your exercises. Because you selected the Publish to Folder checkbox in the Publish dialog, Captivate has automatically created the encoderDemo_800_flash subfolder in the publish folder.
- Open the encoderDemo_800_flash subfolder to inspect its content. There should be five files stored in this location:
  - encoderDemo_800_flash.swf: This is the main Flash file containing the compiled version of the .cptx project
  - encoderDemo_800_flash.html: This file is an HTML page used to wrap the Flash file
  - standard.js: This is a JavaScript file used to make the Flash player work well within the HTML page
  - demo_en.flv: This is the video file used on slide 2 of the movie
  - captivate.css: This file provides the necessary style rules to ensure there is proper formatting of the HTML page

If you want to embed the compiled Captivate movie in an existing HTML page, only the .swf file (plus, in this case, the .flv video) is needed. The HTML editor (such as Adobe Dreamweaver) will recreate the necessary HTML, JavaScript, and CSS files.

Captivate and Dreamweaver

Adobe Dreamweaver CC is the HTML editor of the Creative Cloud and the industry-leading solution for authoring professional web pages. Inserting a Captivate file in a Dreamweaver page is dead easy! First, move or copy the main Flash file (.swf) as well as the needed support files (in this case, the .flv video file), if any, somewhere in the root folder of the Dreamweaver site. When done, use the Files panel of Dreamweaver to drag and drop the main .swf file onto the HTML page. That's it! More information on Dreamweaver can be found at http://www.adobe.com/products/dreamweaver.html.

You will now test the compiled project in a web browser. This is an important test as it closely recreates the conditions in which the students will experience the movie once uploaded on a web server:

- Double-click on the encoderDemo_800_flash.html file to open it in a web browser.

Enjoy the final version of the demonstration you have created! Now that you have experienced the workflow of publishing the project to Flash with the default options, you will explore some additional publishing options.

Using the Scalable HTML content option

Thanks to the Scalable HTML content option of Captivate, the eLearning content is automatically resized to fit the screen on which it is viewed.
Let's experiment with this option hands-on using the following steps:

- If needed, return to the encoderDemo_800.cptx file under Chapter08.
- Click on the Publish icon situated right next to the Preview icon. Alternatively, you can also navigate to File | Publish.
- In the leftmost column, make sure the chosen format is SWF/HTML5.
- In the central column, change the Project Title to encoderDemo_800_flashScalable.
- Click on the Browse… button situated just below the Folder field and ensure that the publish folder is still the publish folder of your exercises.
- Make sure the Publish to Folder checkbox is selected.
- In the Advanced Options section (lower-right corner of the Publish dialog), select the Scalable HTML content checkbox.
- Leave the remaining options at their current value and click on the Publish button at the bottom-right corner of the Publish dialog box.
- When Captivate has finished publishing the movie, an information box appears on the screen asking whether you want to view the output. Click on Yes to discard the information box and open the published movie in the default web browser.
- During the playback, use your mouse to resize your browser window and notice how the movie is resized and always fits the available space without being distorted.

The Scalable HTML content option also works when the project is published in HTML5.

KnockoutJS Templates

Packt
04 Mar 2015
38 min read
In this article by Jorge Ferrando, author of the book KnockoutJS Essentials, we are going to talk about how to design our templates with the native engine and then we will speak about mechanisms and external libraries we can use to improve the Knockout template engine.

When our code begins to grow, it's necessary to split it in several parts to keep it maintainable. When we split JavaScript code, we are talking about modules, classes, functions, libraries, and so on. When we talk about HTML, we call these parts templates. KnockoutJS has a native template engine that we can use to manage our HTML. It is very simple, but it also has a big inconvenience: templates should be loaded in the current HTML page. This is not a problem if our app is small, but it could be a problem if our application begins to need more and more templates.

Preparing the project

First of all, we are going to add some style to the page. Add a file called style.css into the css folder. Add a reference in the index.html file, just below the bootstrap reference. The following is the content of the file:

```css
.container-fluid { margin-top: 20px; }
.row { margin-bottom: 20px; }
.cart-unit { width: 80px; }
.btn-xs { font-size: 8px; }
.list-group-item { overflow: hidden; }
.list-group-item h4 { float: left; width: 100px; }
.list-group-item .input-group-addon { padding: 0; }
.btn-group-vertical > .btn-default { border-color: transparent; }
.form-control[disabled], .form-control[readonly] {
  background-color: transparent !important;
}
```

Now remove all the content from the body tag except for the script tags and paste in these lines:

```html
<div class="container-fluid">
  <div class="row" id="catalogContainer">
    <div class="col-xs-12"
      data-bind="template:{name:'header'}"></div>
    <div class="col-xs-6"
      data-bind="template:{name:'catalog'}"></div>
    <div id="cartContainer" class="col-xs-6 well hidden"
      data-bind="template:{name:'cart'}"></div>
  </div>
  <div class="row hidden" id="orderContainer"
    data-bind="template:{name:'order'}">
  </div>
  <div data-bind="template: {name:'add-to-catalog-modal'}"></div>
  <div data-bind="template: {name:'finish-order-modal'}"></div>
</div>
```

Let's review this code. We have two row classes. They will be our containers. The first container is named with the id value as catalogContainer and it will contain the catalog view and the cart. The second one is referenced by the id value as orderContainer and we will set our final order there. We also have two more <div> tags at the bottom that will contain the modal dialogs to show the form to add products to our catalog and the other one will contain a modal message to tell the user that our order is finished.

Along with this code you can see a template binding inside the data-bind attribute. This is the binding that Knockout uses to bind templates to the element. It contains a name parameter that represents the ID of a template:

```html
<div class="col-xs-12" data-bind="template:{name:'header'}"></div>
```

In this example, this <div> element will contain the HTML that is inside the <script> tag with the ID header.

Creating templates

Template elements are commonly declared at the bottom of the body, just above the <script> tags that have references to our external libraries.
We are going to define some templates and then we will talk about each one of them:

```html
<!-- templates -->
<script type="text/html" id="header"></script>
<script type="text/html" id="catalog"></script>
<script type="text/html" id="add-to-catalog-modal"></script>
<script type="text/html" id="cart-widget"></script>
<script type="text/html" id="cart-item"></script>
<script type="text/html" id="cart"></script>
<script type="text/html" id="order"></script>
<script type="text/html" id="finish-order-modal"></script>
```

Each template name is descriptive enough by itself, so it's easy to know what we are going to set inside them. Let's see a diagram showing where we dispose each template on the screen.

Notice that the cart-item template will be repeated for each item in the cart collection. Modal templates will appear only when a modal dialog is displayed. Finally, the order template is hidden until we click to confirm the order.

- In the header template, we will have the title and the menu of the page.
- The add-to-catalog-modal template will contain the modal that shows the form to add a product to our catalog.
- The cart-widget template will show a summary of our cart.
- The cart-item template will contain the template of each item in the cart.
- The cart template will have the layout of the cart.
- The order template will show the final list of products we want to buy and a button to confirm our order.

The header template

Let's begin with the HTML markup that should contain the header template:

```html
<script type="text/html" id="header">
  <h1>
    Catalog
  </h1>

  <button class="btn btn-primary btn-sm" data-toggle="modal"
    data-target="#addToCatalogModal">
    Add New Product
  </button>
  <button class="btn btn-primary btn-sm" data-bind="click:
    showCartDetails, css:{ disabled: cart().length < 1}">
    Show Cart Details
  </button>
  <hr/>
</script>
```

We define a <h1> tag, and two <button> tags. The first button tag is attached to the modal that has the ID #addToCatalogModal. Since we are using Bootstrap as the CSS framework, we can attach modals by ID using the data-target attribute, and activate the modal using the data-toggle attribute.

The second button will show the full cart view and it will be available only if the cart has items. To achieve this, there are a number of different ways. The first one is to use the CSS disabled class that comes with Twitter Bootstrap. This is the way we have used in the example. The css binding allows us to activate or deactivate a class in the element depending on the result of the expression that is attached to the class.

The other method is to use the enable binding. This binding enables an element if the expression evaluates to true. We can use the opposite binding, which is named disable. There is complete documentation on the Knockout website http://knockoutjs.com/documentation/enable-binding.html:

```html
<button class="btn btn-primary btn-sm" data-bind="click:
  showCartDetails, enable: cart().length > 0">
  Show Cart Details
</button>

<button class="btn btn-primary btn-sm" data-bind="click:
  showCartDetails, disable: cart().length < 1">
  Show Cart Details
</button>
```

The first method uses CSS classes to enable and disable the button. The second method uses the HTML attribute, disabled.

We can use a third option, which is to use a computed observable. We can create a computed observable variable in our view-model that returns true or false depending on the length of the cart:
```js
// in the viewmodel. Remember to expose it
var cartHasProducts = ko.computed(function(){
  return (cart().length > 0);
});
```

```html
<!-- HTML -->
<button class="btn btn-primary btn-sm" data-bind="click:
  showCartDetails, enable: cartHasProducts">
  Show Cart Details
</button>
```

To show the cart, we will use the click binding. Now we should go to our viewmodel.js file and add all the information we need to make this template work:

```js
var cart = ko.observableArray([]);

var showCartDetails = function () {
  if (cart().length > 0) {
    $("#cartContainer").removeClass("hidden");
  }
};
```

And you should expose these two objects in the view-model:

```js
return {
  searchTerm: searchTerm,
  catalog: filteredCatalog,
  newProduct: newProduct,
  totalItems: totalItems,
  addProduct: addProduct,
  cart: cart,
  showCartDetails: showCartDetails,
};
```

The catalog template

The next step is to define the catalog template just below the header template:

```html
<script type="text/html" id="catalog">
  <div class="input-group">
    <span class="input-group-addon">
      <i class="glyphicon glyphicon-search"></i> Search
    </span>
    <input type="text" class="form-control" data-bind="textInput: searchTerm">
  </div>
  <table class="table">
    <thead>
    <tr>
      <th>Name</th>
      <th>Price</th>
      <th>Stock</th>
      <th></th>
    </tr>
    </thead>
    <tbody data-bind="foreach:catalog">
    <tr data-bind="style: {color: stock() < 5 ? 'red' : 'black'}">
      <td data-bind="text:name"></td>
      <td data-bind="text:price"></td>
      <td data-bind="text:stock"></td>
      <td>
        <button class="btn btn-primary"
          data-bind="click:$parent.addToCart">
          <i class="glyphicon glyphicon-plus-sign"></i> Add
        </button>
      </td>
    </tr>
    </tbody>
    <tfoot>
    <tr>
      <td colspan="3">
        <strong>Items:</strong><span
          data-bind="text:catalog().length"></span>
      </td>
      <td colspan="1">
        <span data-bind="template:{name:'cart-widget'}"></span>
      </td>
    </tr>
    </tfoot>
  </table>
</script>
```

Now, each line uses the style binding to alert the user, while they are shopping, that the stock is reaching the maximum limit. The style binding works the same way that the css binding does with classes. It allows us to add style attributes depending on the value of the expression. In this case, the color of the text in the line must be black if the stock is higher than five, and red if it is four or less. We can use other CSS attributes, so feel free to try other behaviors. For example, set the line of the catalog to green if the element is inside the cart. We should remember that if an attribute has dashes, you should wrap it in single quotes. For example, background-color will throw an error, so you should write 'background-color'.

When we work with bindings that are activated depending on the values of the viewmodel, it is good practice to use computed observables. Therefore, we can create a computed value in our product model that returns the value of the color that should be displayed:

```js
// In the Product.js
var _lineColor = ko.computed(function(){
  return (_stock() < 5) ? 'red' : 'black';
});

return {
  lineColor: _lineColor
};
```

```html
<!-- In the template -->
<tr data-bind="style: {color: lineColor}">
...
</tr>
```

It would be even better if we create a class in our style.css file that is called stock-alert and we use the css binding:

```css
/* In the style file */
.stock-alert {
  color: #f00;
}
```

```js
// In the Product.js
var _hasStock = ko.computed(function(){
  return (_stock() < 5);
});

return {
  hasStock: _hasStock
};
```

```html
<!-- In the template -->
<tr data-bind="css: {'stock-alert': hasStock}">
...
</tr>
```
Now, look inside the <tfoot> tag:

```html
<td colspan="1">
  <span data-bind="template:{name:'cart-widget'}"></span>
</td>
```

As you can see, we can have nested templates. In this case, we have the cart-widget template inside our catalog template. This gives us the possibility of having very complex templates, splitting them into very small pieces, and combining them, to keep our code clean and maintainable.

Finally, look at the last cell of each row:

```html
<td>
  <button class="btn btn-primary"
    data-bind="click:$parent.addToCart">
    <i class="glyphicon glyphicon-plus-sign"></i> Add
  </button>
</td>
```

Look at how we call the addToCart method using the magic variable $parent. Knockout gives us some magic words to navigate through the different contexts we have in our app. In this case, we are in the catalog context and we want to call a method that lies one level up. We can use the magical variable called $parent.

There are other variables we can use when we are inside a Knockout context. There is complete documentation on the Knockout website http://knockoutjs.com/documentation/binding-context.html. In this project, we are not going to use all of them, but we are going to quickly explain these binding context variables, just to understand them better.

If we don't know how many levels deep we are, we can navigate to the top of the view-model using the magic word $root. When we have many parents, we can get the magic array $parents and access each parent using indexes, for example, $parents[0], $parents[1]. Imagine that you have a list of categories where each category contains a list of products. These products are a list of IDs and the category has a method to get the name of their products. We can use the $parents array to obtain the reference to the category:

```html
<ul data-bind="foreach: {data: categories}">
  <li data-bind="text: $data.name"></li>
  <ul data-bind="foreach: {data: $data.products, as: 'prod'}">
    <li data-bind="text:
      $parents[0].getProductName(prod.ID)"></li>
  </ul>
</ul>
```

Look how helpful the as attribute is inside the foreach binding. It makes the code more readable. But if you are inside a foreach loop, you can also access each item using the $data magic variable, and you can access the position index that each element has in the collection using the $index magic variable. For example, if we have a list of products, we can do this:

```html
<ul data-bind="foreach: cart">
  <li><span data-bind="text:$index">
    </span> - <span data-bind="text:$data.name"></span></li>
</ul>
```

This should display:

0 – Product 1
1 – Product 2
2 – Product 3
...

KnockoutJS magic variables to navigate through contexts

Now that we know more about what binding variables are, let's go back to our code. We are now going to write the addToCart method. We are going to define the cart items in our js/models folder.
Create a file called CartProduct.js and insert the following code in it:

```js
// js/models/CartProduct.js
var CartProduct = function (product, units) {
  "use strict";

  var _product = product,
      _units = ko.observable(units);

  var subtotal = ko.computed(function(){
    return _product.price() * _units();
  });

  var addUnit = function () {
    var u = _units();
    var _stock = _product.stock();
    if (_stock === 0) {
      return;
    }
    _units(u + 1);
    _product.stock(--_stock);
  };

  var removeUnit = function () {
    var u = _units();
    var _stock = _product.stock();
    if (u === 0) {
      return;
    }
    _units(u - 1);
    _product.stock(++_stock);
  };

  return {
    product: _product,
    units: _units,
    subtotal: subtotal,
    addUnit: addUnit,
    removeUnit: removeUnit,
  };
};
```

Each cart product is composed of the product itself and the units of the product we want to buy. We will also have a computed field that contains the subtotal of the line. We should give the object the responsibility for managing its units and the stock of the product. For this reason, we have added the addUnit and removeUnit methods. These methods add one unit or remove one unit of the product if they are called. We should reference this JavaScript file in our index.html file with the other <script> tags.

In the viewmodel, we should create a cart array and expose it in the return statement, as we have done earlier:

```js
var cart = ko.observableArray([]);
```

It's time to write the addToCart method:

```js
var addToCart = function(data) {
  var item = null;
  var tmpCart = cart();
  var n = tmpCart.length;
  while (n--) {
    if (tmpCart[n].product.id() === data.id()) {
      item = tmpCart[n];
    }
  }
  if (item) {
    item.addUnit();
  } else {
    item = new CartProduct(data, 0);
    item.addUnit();
    tmpCart.push(item);
  }
  cart(tmpCart);
};
```

This method searches for the product in the cart. If it exists, it updates its units, and if not, it creates a new one. Since the cart is an observable array, we need to get it, manipulate it, and overwrite it, because we need to access the product object to know if the product is in the cart. Remember that observable arrays do not observe the objects they contain, just the array properties.

The add-to-cart-modal template

This is a very simple template.
We just wrap the code to add a product to a Bootstrap modal: <script type="text/html" id="add-to-catalog-modal"> <div class="modal fade" id="addToCatalogModal">    <div class="modal-dialog">      <div class="modal-content">        <form class="form-horizontal" role="form"           data-bind="with:newProduct">          <div class="modal-header">            <button type="button" class="close"               data-dismiss="modal">              <span aria-hidden="true">&times;</span>              <span class="sr-only">Close</span>            </button><h3>Add New Product to the Catalog</h3>          </div>          <div class="modal-body">            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                  placeholder="Name" data-bind="textInput:name">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Price" data-bind="textInput:price">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Stock" data-bind="textInput:stock">              </div>            </div>          </div>          <div class="modal-footer">            <div class="form-group">              <div class="col-sm-12">                <button type="submit" class="btn btn-default"                  data-bind="{click:$parent.addProduct}">                  <i class="glyphicon glyphicon-plus-sign">                  </i> Add Product                </button>              </div>            </div>          </div>        </form>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The cart-widget template This template gives the user information quickly about how many items are in the cart and how much all of them cost: <script type="text/html" id="cart-widget"> Total Items: <span data-bind="text:totalItems"></span> Price: <span data-bind="text:grandTotal"></span> </script> We should define totalItems and grandTotal in our viewmodel: var totalItems = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += parseInt(item.units(),10); }); return total; }); var grandTotal = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += (item.units() * item.product.price()); }); return total; }); Now you should expose them in the return statement, as we always do. Don't worry about the format now, you will learn how to format currency or any kind of data in the future. Now you must focus on learning how to manage information and how to show it to the user. 
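If you want to see these computed observables reacting while you experiment, one quick option (purely an illustrative sketch, not part of the shopping cart code) is to subscribe to one of them inside the view-model, since Knockout computed observables expose the same subscribe method as plain observables:

grandTotal.subscribe(function (value) {
    // Log the new total every time the cart contents or the units change
    console.log("Grand total is now: " + value);
});

Every call to addToCart, addUnit, or removeUnit will trigger this subscription, which is a handy way to confirm that the computed chain is wired up correctly before you worry about how the widget looks.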
The cart-item template The cart-item template displays each line in the cart: <script type="text/html" id="cart-item"> <div class="list-group-item" style="overflow: hidden">    <button type="button" class="close pull-right" data-bind="click:$root.removeFromCart"><span>&times;</span></button>    <h4 class="" data-bind="text:product.name"></h4>    <div class="input-group cart-unit">      <input type="text" class="form-control" data-bind="textInput:units" readonly/>        <span class="input-group-addon">          <div class="btn-group-vertical">            <button class="btn btn-default btn-xs"               data-bind="click:addUnit">              <i class="glyphicon glyphicon-chevron-up"></i>            </button>            <button class="btn btn-default btn-xs"               data-bind="click:removeUnit">              <i class="glyphicon glyphicon-chevron-down"></i>            </button>          </div>        </span>    </div> </div> </script> We set an x button in the top-right of each line to easily remove a line from the cart. As you can see, we have used the $root magic variable to navigate to the top context because we are going to use this template inside a foreach loop, and it means this template will be in the loop context. If we consider this template as an isolated element, we can't be sure how deep we are in the context navigation. To be sure, we go to the right context to call the removeFormCart method. It's better to use $root instead of $parent in this case. The code for removeFromCart should lie in the viewmodel context and should look like this: var removeFromCart = function (data) { var units = data.units(); var stock = data.product.stock(); data.product.stock(units+stock); cart.remove(data); }; Notice that in the addToCart method, we get the array that is inside the observable. We did that because we need to navigate inside the elements of the array. In this case, Knockout observable arrays have a method called remove that allows us to remove the object that we pass as a parameter. If the object is in the array, it will be removed. Remember that the data context is always passed as the first parameter in the function we use in the click events. The cart template The cart template should display the layout of the cart: <script type="text/html" id="cart"> <button type="button" class="close pull-right"     data-bind="click:hideCartDetails">    <span>&times;</span> </button> <h1>Cart</h1> <div data-bind="template: {name: 'cart-item', foreach:cart}"     class="list-group"></div> <div data-bind="template:{name:'cart-widget'}"></div> <button class="btn btn-primary btn-sm"     data-bind="click:showOrder">    Confirm Order </button> </script> It's important that you notice the template binding that we have just below <h1>Cart</h1>. We are binding a template with an array using the foreach argument. With this binding, Knockout renders the cart-item template for each element inside the cart collection. This considerably reduces the code we write in each template and in addition makes them more readable. We have once again used the cart-widget template to show the total items and the total amount. This is one of the good features of templates, we can reuse content over and over. Observe that we have put a button at the top-right of the cart to close it when we don't need to see the details of our cart, and the other one to confirm the order when we are done. 
The code in our viewmodel should be as follows: var hideCartDetails = function () { $("#cartContainer").addClass("hidden"); }; var showOrder = function () { $("#catalogContainer").addClass("hidden"); $("#orderContainer").removeClass("hidden"); }; As you can see, to show and hide elements we use jQuery and CSS classes from the Bootstrap framework. The hidden class just adds the display: none style to the elements. We just need to toggle this class to show or hide elements in our view. Expose these two methods in the return statement of your view-model. We will come back to this when we need to display the order template. This is the result once we have our catalog and our cart:   The order template Once we have clicked on the Confirm Order button, the order should be shown to us, to review and confirm if we agree. <script type="text/html" id="order"> <div class="col-xs-12">    <button class="btn btn-sm btn-primary"       data-bind="click:showCatalog">      Back to catalog    </button>    <button class="btn btn-sm btn-primary"       data-bind="click:finishOrder">      Buy & finish    </button> </div> <div class="col-xs-6">    <table class="table">      <thead>      <tr>        <th>Name</th>        <th>Price</th>        <th>Units</th>        <th>Subtotal</th>      </tr>      </thead>      <tbody data-bind="foreach:cart">      <tr>        <td data-bind="text:product.name"></td>        <td data-bind="text:product.price"></td>        <td data-bind="text:units"></td>        <td data-bind="text:subtotal"></td>      </tr>      </tbody>      <tfoot>      <tr>        <td colspan="3"></td>        <td>Total:<span data-bind="text:grandTotal"></span></td>      </tr>      </tfoot>    </table> </div> </script> Here we have a read-only table with all cart lines and two buttons. One is to confirm, which will show the modal dialog saying the order is completed, and the other gives us the option to go back to the catalog and keep on shopping. There is some code we need to add to our viewmodel and expose to the user: var showCatalog = function () { $("#catalogContainer").removeClass("hidden"); $("#orderContainer").addClass("hidden"); }; var finishOrder = function() { cart([]); hideCartDetails(); showCatalog(); $("#finishOrderModal").modal('show'); }; As we have done in previous methods, we add and remove the hidden class from the elements we want to show and hide. The finishOrder method removes all the items of the cart because our order is complete; hides the cart and shows the catalog. It also displays a modal that gives confirmation to the user that the order is done.  
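To make the "expose it in the return statement" step concrete, the return object of the view-model would now also contain these methods alongside everything exposed earlier. This is only a sketch of the relevant part; your full return object keeps all the properties you have already exposed:

return {
    // ...everything exposed earlier (searchTerm, catalog, cart, addToCart, and so on)
    hideCartDetails: hideCartDetails,
    showOrder: showOrder,
    showCatalog: showCatalog,
    finishOrder: finishOrder
};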
Order details template The finish-order-modal template The last template is the modal that tells the user that the order is complete: <script type="text/html" id="finish-order-modal"> <div class="modal fade" id="finishOrderModal">    <div class="modal-dialog">            <div class="modal-content">        <div class="modal-body">        <h2>Your order has been completed!</h2>        </div>        <div class="modal-footer">          <div class="form-group">            <div class="col-sm-12">              <button type="submit" class="btn btn-success"                 data-dismiss="modal">Continue Shopping              </button>            </div>          </div>        </div>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The following screenshot displays the output:   Handling templates with if and ifnot bindings You have learned how to show and hide templates with the power of jQuery and Bootstrap. This is quite good because you can use this technique with any framework you want. The problem with this type of code is that since jQuery is a DOM manipulation library, you need to reference elements to manipulate them. This means you need to know over which element you want to apply the action. Knockout gives us some bindings to hide and show elements depending on the values of our view-model. Let's update the show and hide methods and the templates. Add both the control variables to your viewmodel and expose them in the return statement. var visibleCatalog = ko.observable(true); var visibleCart = ko.observable(false); Now update the show and hide methods: var showCartDetails = function () { if (cart().length > 0) {    visibleCart(true); } };   var hideCartDetails = function () { visibleCart(false); };   var showOrder = function () { visibleCatalog(false); };   var showCatalog = function () { visibleCatalog(true); }; We can appreciate how the code becomes more readable and meaningful. Now, update the cart template, the catalog template, and the order template. In index.html, consider this line: <div class="row" id="catalogContainer"> Replace it with the following line: <div class="row" data-bind="if: visibleCatalog"> Then consider the following line: <div id="cartContainer" class="col-xs-6 well hidden"   data-bind="template:{name:'cart'}"></div> Replace it with this one: <div class="col-xs-6" data-bind="if: visibleCart"> <div class="well" data-bind="template:{name:'cart'}"></div> </div> It is important to know that the if binding and the template binding can't share the same data-bind attribute. This is why we go from one element to two nested elements in this template. In other words, this example is not allowed: <div class="col-xs-6" data-bind="if:visibleCart,   template:{name:'cart'}"></div> Finally, consider this line: <div class="row hidden" id="orderContainer"   data-bind="template:{name:'order'}"> Replace it with this one: <div class="row" data-bind="ifnot: visibleCatalog"> <div data-bind="template:{name:'order'}"></div> </div> With the changes we have made, showing or hiding elements now depends on our data and not on our CSS. This is much better because now we can show and hide any element we want using the if and ifnot binding. 
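Knockout also ships with a visible binding, which we will mention again in the summary. The difference is that if adds and removes the element (and its nested template) from the DOM, while visible keeps the element in the DOM and only toggles its display style. As a rough sketch, the cart container could equally be written like this; which one you prefer depends on whether you want the nested template to stay rendered:

<div class="col-xs-6" data-bind="visible: visibleCart">
  <div class="well" data-bind="template: {name: 'cart'}"></div>
</div>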
Let's review, roughly speaking, how our files are now: We have our index.html file that has the main container, templates, and libraries: <!DOCTYPE html> <html> <head> <title>KO Shopping Cart</title> <meta name="viewport" content="width=device-width,     initial-scale=1"> <link rel="stylesheet" type="text/css"     href="css/bootstrap.min.css"> <link rel="stylesheet" type="text/css" href="css/style.css"> </head> <body>   <div class="container-fluid"> <div class="row" data-bind="if: visibleCatalog">    <div class="col-xs-12"       data-bind="template:{name:'header'}"></div>    <div class="col-xs-6"       data-bind="template:{name:'catalog'}"></div>    <div class="col-xs-6" data-bind="if: visibleCart">      <div class="well" data-bind="template:{name:'cart'}"></div>    </div> </div> <div class="row" data-bind="ifnot: visibleCatalog">    <div data-bind="template:{name:'order'}"></div> </div> <div data-bind="template: {name:'add-to-catalog-modal'}"></div> <div data-bind="template: {name:'finish-order-modal'}"></div> </div>   <!-- templates --> <script type="text/html" id="header"> ... </script> <script type="text/html" id="catalog"> ... </script> <script type="text/html" id="add-to-catalog-modal"> ... </script> <script type="text/html" id="cart-widget"> ... </script> <script type="text/html" id="cart-item"> ... </script> <script type="text/html" id="cart"> ... </script> <script type="text/html" id="order"> ... </script> <script type="text/html" id="finish-order-modal"> ... </script> <!-- libraries --> <script type="text/javascript"   src="js/vendors/jquery.min.js"></script> <script type="text/javascript"   src="js/vendors/bootstrap.min.js"></script> <script type="text/javascript"   src="js/vendors/knockout.debug.js"></script> <script type="text/javascript"   src="js/models/product.js"></script> <script type="text/javascript"   src="js/models/cartProduct.js"></script> <script type="text/javascript" src="js/viewmodel.js"></script> </body> </html> We also have our viewmodel.js file: var vm = (function () { "use strict"; var visibleCatalog = ko.observable(true); var visibleCart = ko.observable(false); var catalog = ko.observableArray([...]); var cart = ko.observableArray([]); var newProduct = {...}; var totalItems = ko.computed(function(){...}); var grandTotal = ko.computed(function(){...}); var searchTerm = ko.observable(""); var filteredCatalog = ko.computed(function () {...}); var addProduct = function (data) {...}; var addToCart = function(data) {...}; var removeFromCart = function (data) {...}; var showCartDetails = function () {...}; var hideCartDetails = function () {...}; var showOrder = function () {...}; var showCatalog = function () {...}; var finishOrder = function() {...}; return {    searchTerm: searchTerm,    catalog: filteredCatalog,    cart: cart,    newProduct: newProduct,    totalItems:totalItems,    grandTotal:grandTotal,    addProduct: addProduct,    addToCart: addToCart,    removeFromCart:removeFromCart,    visibleCatalog: visibleCatalog,    visibleCart: visibleCart,    showCartDetails: showCartDetails,    hideCartDetails: hideCartDetails,    showOrder: showOrder,    showCatalog: showCatalog,    finishOrder: finishOrder }; })(); ko.applyBindings(vm); It is useful to debug to globalize the view-model. It is not good practice in production environments, but it is good when you are debugging your application. Window.vm = vm; Now you have easy access to your view-model from the browser debugger or from your IDE debugger. 
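For example, assuming the view-model has been exposed globally as window.vm while debugging, you can poke at it straight from the browser console; the values shown in the comments are only hypothetical:

vm.totalItems();                  // 0
vm.addToCart(vm.catalog()[0]);    // add the first visible product to the cart
vm.totalItems();                  // 1
vm.grandTotal();                  // the price of that product

This works because observables and computed observables are plain functions, so calling them reads their current value, and calling the exposed methods runs exactly the same code that the click bindings run.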
In addition to the product model, we have created a new model called CartProduct: var CartProduct = function (product, units) { "use strict"; var _product = product,    _units = ko.observable(units); var subtotal = ko.computed(function(){...}); var addUnit = function () {...}; var removeUnit = function () {...}; return {    product: _product,    units: _units,    subtotal: subtotal,    addUnit : addUnit,    removeUnit: removeUnit }; }; You have learned how to manage templates with Knockout, but maybe you have noticed that having all templates in the index.html file is not the best approach. We are going to talk about two mechanisms. The first one is more home-made and the second one is an external library used by lots of Knockout developers, created by Jim Cowart, called Knockout.js-External-Template-Engine (https://github.com/ifandelse/Knockout.js-External-Template-Engine). Managing templates with jQuery Since we want to load templates from different files, let's move all our templates to a folder called views and make one file per template. Each file will have the same name the template has as an ID. So if the template has the ID, cart-item, the file should be called cart-item.html and will contain the full cart-item template: <script type="text/html" id="cart-item"></script>  The views folder with all templates Now in the viewmodel.js file, remove the last line (ko.applyBindings(vm)) and add this code: var templates = [ 'header', 'catalog', 'cart', 'cart-item', 'cart-widget', 'order', 'add-to-catalog-modal', 'finish-order-modal' ];   var busy = templates.length; templates.forEach(function(tpl){ "use strict"; $.get('views/'+ tpl + '.html').then(function(data){    $('body').append(data);    busy--;    if (!busy) {      ko.applyBindings(vm);    } }); }); This code gets all the templates we need and appends them to the body. Once all the templates are loaded, we call the applyBindings method. We should do it this way because we are loading templates asynchronously and we need to make sure that we bind our view-model when all templates are loaded. This is good enough to make our code more maintainable and readable, but is still problematic if we need to handle lots of templates. Further more, if we have nested folders, it becomes a headache listing all our templates in one array. There should be a better approach. Managing templates with koExternalTemplateEngine We have seen two ways of loading templates, both of them are good enough to manage a low number of templates, but when lines of code begin to grow, we need something that allows us to forget about template management. We just want to call a template and get the content. For this purpose, Jim Cowart's library, koExternalTemplateEngine, is perfect. This project was abandoned by the author in 2014, but it is still a good library that we can use when we develop simple projects. We just need to download the library in the js/vendors folder and then link it in our index.html file just below the Knockout library. <script type="text/javascript" src="js/vendors/knockout.debug.js"></script> <script type="text/javascript"   src="js/vendors/koExternalTemplateEngine_all.min.js"></script> Now you should configure it in the viewmodel.js file. Remove the templates array and the foreach statement, and add these three lines of code: infuser.defaults.templateSuffix = ".html"; infuser.defaults.templateUrl = "views"; ko.applyBindings(vm); Here, infuser is a global variable that we use to configure the template engine. 
We should indicate which suffix will have our templates and in which folder they will be. We don't need the <script type="text/html" id="template-id"></script> tags any more, so we should remove them from each file. So now everything should be working, and the code we needed to succeed was not much. KnockoutJS has its own template engine, but you can see that adding new ones is not difficult. If you have experience with other template engines such as jQuery Templates, Underscore, or Handlebars, just load them in your index.html file and use them, there is no problem with that. This is why Knockout is beautiful, you can use any tool you like with it. You have learned a lot of things in this article, haven't you? Knockout gives us the CSS binding to activate and deactivate CSS classes according to an expression. We can use the style binding to add CSS rules to elements. The template binding helps us to manage templates that are already loaded in the DOM. We can iterate along collections with the foreach binding. Inside a foreach, Knockout gives us some magic variables such as $parent, $parents, $index, $data, and $root. We can use the binding as along with the foreach binding to get an alias for each element. We can show and hide content using just jQuery and CSS. We can show and hide content using the bindings: if, ifnot, and visible. jQuery helps us to load Knockout templates asynchronously. You can use the koExternalTemplateEngine plugin to manage templates in a more efficient way. The project is abandoned but it is still a good solution. Summary In this article, you have learned how to split an application using templates that share the same view-model. Now that we know the basics, it would be interesting to extend the application. Maybe we can try to create a detailed view of the product, or maybe we can give the user the option to register where to send the order. Resources for Article: Further resources on this subject: Components [article] Web Application Testing [article] Top features of KnockoutJS [article]

Tracking SQL Queries for a Request using Django

Packt
22 Apr 2010
13 min read
For a typical Django application, database interactions are of key importance. Ensuring that the database queries being made are correct helps to ensure that the application results are correct. Further, ensuring that the database queries produced for the application are efficient helps to make sure that the application will be able to support the desired number of concurrent users. Django provides support in this area by making the database query history available for examination. This type of access is useful to see the SQL that is issued as a result of calling a particular model method. However, it is not helpful in learning about the bigger picture of what SQL queries are made during the processing of a particular request. This section will show how to include information about the SQL queries needed for production of a page in the page itself. We will alter our existing survey application templates to include query information, and examine the query history for some of the existing survey application views. Though we are not aware of any problems with the existing views, we may learn something in the process of verifying that they issue the queries we expect. Settings for accessing query history in templates Before the query history can be accessed from a template, we need to ensure some required settings are configured properly. Three settings are needed in order for the SQL query information to be available in a template. First, the debug context processor, django.core.context_processors.debug, must be included in the TEMPLATE_CONTEXT_PROCESSORS setting. This context processor is included in the default value for TEMPLATE_CONTEXT_PROCESSORS. We have not changed that setting; therefore we do not need to do anything to enable this context processor in our project. Second, the IP address of the machine sending the request must be listed in the INTERNAL_IPS setting. This is not a setting we have used before, and it is empty by default, so we will need to add it to the settings file. When testing using the same machine as where the development server runs, setting INTERNAL_IPS to include the loopback address is sufficient: # Addresses for internal machines that can see potentially sensitive # information such as the query history for a request. INTERNAL_IPS = ('127.0.0.1', ) If you also test from other machines, you will need to include their IP addresses in this setting as well. Third and finally, DEBUG must be True in order for the SQL query history to be available in templates. When those three settings conditions are met, the SQL query history may be available in templates via a template variable named sql_queries. This variable contains a list of dictionaries. Each dictionary contains two keys: sql and time. The value for sql is the SQL query itself, and the value for time is the number of seconds the query took to execute. Note that the sql_queries context variable is set by the debug context processor. Context processors are only called during template rendering when a RequestContext is used to render the template. Up until now, we have not used RequestContexts in our survey application views, since they were not necessary for the code so far. But in order to access the query history from the template, we will need to start using RequestContexts. Therefore, in addition to modifying the templates, we will need to change the view code slightly in order to include query history in the generated pages for the survey application. 
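The same query history that the debug context processor hands to templates is also available directly in Python code via django.db.connection.queries, which is handy for ad hoc checks from the Django shell while DEBUG is True. A minimal sketch, using the survey manager method discussed earlier:

from django.db import connection, reset_queries

reset_queries()                    # clear anything recorded so far
list(Survey.objects.active())      # force the QuerySet to be evaluated
for q in connection.queries:
    print q['time'], q['sql']

Each entry is a dictionary with the same sql and time keys that sql_queries exposes in the template, so inspecting it here is a quick way to sanity-check a query before making the template changes described next.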
SQL queries for the home page Let's start by seeing what queries are issued in order to generate the survey application home page. Recall that the home page view code is: def home(request): today = datetime.date.today() active = Survey.objects.active() completed = Survey.objects.completed().filter(closes__gte=today- datetime.timedelta(14)) upcoming = Survey.objects.upcoming().filter( opens__lte=today+datetime.timedelta(7)) return render_to_response('survey/home.html', {'active_surveys': active, 'completed_surveys': completed, 'upcoming_surveys': upcoming, }) There are three QuerySets rendered in the template, so we would expect to see that this view generates three SQL queries. In order to check that, we must first change the view to use a RequestContext: from django.template import RequestContext def home(request): today = datetime.date.today() active = Survey.objects.active() completed = Survey.objects.completed().filter(closes__gte=today- datetime.timedelta(14)) upcoming = Survey.objects.upcoming().filter( opens__lte=today+datetime.timedelta(7)) return render_to_response('survey/home.html', {'active_surveys': active, 'completed_surveys': completed, 'upcoming_surveys': upcoming,}, RequestContext(request)) The only change here is to add the RequestContext(request) as a third parameter to render_to_response, after adding an import for it earlier in the file. When we make this change, we may as well also change the render_to_response lines for the other views to use RequestContexts as well. That way when we get to the point of examining the SQL queries for each, we will not get tripped up by having forgotten to make this small change. Second, we'll need to display the information from sql_queries somewhere in our survey/home.html template. But where? We don't necessarily want this information displayed in the browser along with the genuine application data, since that could get confusing. One way to include it in the response but not have it be automatically visible on the browser page is to put it in an HTML comment. Then the browser will not display it on the page, but it can be seen by viewing the HTML source for the displayed page. As a first attempt at implementing this, we might change the top of survey/home.html to look like this: {% extends "survey/base.html" %} {% block content %} <!-- {{ sql_queries|length }} queries {% for qdict in sql_queries %} {{ qdict.sql }} ({{ qdict.time }} seconds) {% endfor %} --> This template code prints out the contents of sql_queries within an HTML comment at the very beginning of the content block supplied by survey/home. html. First, the number of queries is noted by filtering the list through the length filter. Then the code iterates through each dictionary in the sql_queries list and displays sql, followed by a note in parentheses about the time taken for each query. How well does that work? If we try it out by retrieving the survey home page (after ensuring the development server is running), and use the browser's menu item for viewing the HTML source for the page, we might see that the comment block contains something like: <!-- 1 queries SELECT `django_session`.`session_key`, `django_session`.`session_data`, `django_session`.`expire_date` FROM `django_session` WHERE (`django_ session`.`session_key` = d538f13c423c2fe1e7f8d8147b0f6887 AND `django_ session`.`expire_date` &gt; 2009-10-24 17:24:49 ) (0.001 seconds) --> Note that the exact number of queries displayed here will depend on the version of Django you are running. 
This result is from Django 1.1.1; later versions of Django may not show any queries displayed here. Furthermore, the history of the browser's interaction with the site will affect the queries issued. This result is from a browser that had been used to access the admin application, and the last interaction with the admin application was to log out. You may see additional queries if the browser had been used to access the admin application but the user had not logged out. Finally, the database in use can also affect the specific queries issued and their exact formatting. This result is from a MySQL database. That's not exactly what we expected. First, a minor annoyance, but 1 queries is wrong, it should be 1 query. Perhaps that wouldn't annoy you, particularly just in internal or debug information, but it would annoy me. I would change the template code that displays the query count to use correct pluralization: {% with sql_queries|length as qcount %} {{ qcount }} quer{{ qcount|pluralize:"y,ies" }} {% endwith %} Here, since the template needs to use the length result multiple times, it is first cached in the qcount variable by using a {% with %} block. Then it is displayed, and it is used as the variable input to the pluralize filter that will put the correct letters on the end of quer depending on the qcount value. Now the comment block will show 0 queries, 1 query, 2 queries, and so on. With that minor annoyance out of the way, we can concentrate on the next, larger, issue, which is that the displayed query is not a query we were expecting. Furthermore, the three queries we were expecting, to retrieve the lists of completed, active, and upcoming surveys, are nowhere to be seen. What's going on? We'll take each of these in turn. The query that is shown is accessing the django_session table. This table is used by the django.contrib.sessions application. Even though the survey application does not use this application, it is listed in our INSTALLED_APPS, since it is included in the settings.py file that startproject generates. Also, the middleware that the sessions application uses is listed in MIDDLEWARE_CLASSES. The sessions application stores the session identifier in a cookie, named sessionid by default, that is sent to the browser as soon as any application uses a session. The browser will return the cookie in all requests to the same server. If the cookie is present in a request, the session middleware will use it to retrieve the session data. This is the query we see previously listed: the session middleware is retrieving the data for the session identified by the session cookie sent by the browser. But the survey application does not use sessions, so how did the browser get a session cookie in the first place? The answer is that the admin application uses sessions, and this browser had previously been used to access the admin application. At that time, the sessionid cookie was set in a response, and the browser faithfully returns it on all subsequent requests. Thus, it seems likely that this django_session table query is due to a sessionid cookie set as a side-effect of using the admin application. Can we confirm that? If we find and delete the cookie from the browser and reload the page, we should see that this SQL query is no longer listed. Without the cookie in the request, whatever code was triggering access to the session data won't have anything to look up. 
And since the survey application does not use sessions, none of its responses should include a new session cookie, which would cause subsequent requests to include a session lookup. Is this reasoning correct? If we try it, we will see that the comment block changes to: <!-- 0 queries --> Thus, we seem to have confirmed, to some extent, what happened to cause a django_session table query during processing of a survey application response. We did not track down what exact code accessed the session identified by the cookie—it could have been middleware or a context processor, but we probably don't need to know the details. It's enough to keep in mind that there are other applications running in our project besides the one we are working on, and they may cause database interactions independent of our own code. If we observe behavior which looks like it might cause a problem for our code, we can investigate further, but for this particular case we will just avoid using the admin application for now, as we would like to focus attention on the queries our own code is generating. Now that we understand the query that was listed, what about the expected ones that were not listed? The missing queries are due to a combination of the lazy evaluation property of QuerySets and the exact placement of the comment block that lists the contents of sql_queries. We put the comment block at the top of the content block in the home page, to make it easy to find the SQL query information when looking at the page source. The template is rendered after the three QuerySets are created by the view, so it might seem that the comment placed at the top should show the SQL queries for the three QuerySets. However, QuerySets are lazy; simply creating a QuerySet does not immediately cause interaction with the database. Rather, sending the SQL to the database is delayed until the QuerySet results are actually accessed. For the survey home page, that does not happen until the parts of the template that loop through each QuerySet are rendered. Those parts are all below where we placed the sql_queries information, so the corresponding SQL queries had not yet been issued. The fix for this is to move the placement of the comment block to the very bottom of the content block. When we do that we should also fix two other issues with the query display. First, notice that the query displayed above has &gt; shown instead of the > symbol that would actually have been in the query sent to the database. Furthermore, if the database in use is one (such as PostgreSQL) that uses straight quotes instead of back quotes for quoting, all of the back quotes in the query would be shown as &quot;. This is due to Django's automatic escaping of HTML markup characters. This is unnecessary and hard to read in our HTML comment, so we can suppress it by sending the sql query value through the safe filter. Second, the query is very long. In order to avoid needing to scroll to the right in order to see the entire query, we can also filter the sql value through wordwrap to introduce some line breaks and make the output more readable. 
To make these changes, remove the added comment block from the top of the content block in the survey/home.html template and instead change the bottom of this template to be: {% endif %} <!-- {% with sql_queries|length as qcount %} {{ qcount }} quer{{ qcount|pluralize:"y,ies" }} {% endwith %} {% for qdict in sql_queries %} {{ qdict.sql|safe|wordwrap:60 }} ({{ qdict.time }} seconds) {% endfor %} --> {% endblock content %} Now, if we again reload the survey home page and view the source for the returned page, we will see the queries listed in a comment at the bottom: <!-- 3 queries SELECT `survey_survey`.`id`, `survey_survey`.`title`, `survey_survey`.`opens`, `survey_survey`.`closes` FROM `survey_survey` WHERE (`survey_survey`.`opens` <= 2009-10-25 AND `survey_survey`.`closes` >= 2009-10-25 ) (0.000 seconds) SELECT `survey_survey`.`id`, `survey_survey`.`title`, `survey_survey`.`opens`, `survey_survey`.`closes` FROM `survey_survey` WHERE (`survey_survey`.`closes` < 2009-10-25 AND `survey_survey`.`closes` >= 2009-10-11 ) (0.000 seconds) SELECT `survey_survey`.`id`, `survey_survey`.`title`, `survey_survey`.`opens`, `survey_survey`.`closes` FROM `survey_survey` WHERE (`survey_survey`.`opens` > 2009-10-25 AND `survey_survey`.`opens` <= 2009-11-01 ) (0.000 seconds) --> That is good, those look like exactly what we expect to see for queries for the home page. Now that we seem to have some working template code to show queries, we will consider packaging up this snippet so that it can easily be reused elsewhere.
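One simple way to package it, sketched here with a placeholder file name, is to move the comment block into its own small template and pull it in with an include tag wherever it is wanted. For example, a survey/_show_queries.html file could contain:

<!--
{% with sql_queries|length as qcount %}
{{ qcount }} quer{{ qcount|pluralize:"y,ies" }}
{% endwith %}
{% for qdict in sql_queries %}
{{ qdict.sql|safe|wordwrap:60 }} ({{ qdict.time }} seconds)
{% endfor %}
-->

and each page template would then end its content block with {% include "survey/_show_queries.html" %}. The file name and location here are only illustrative; the point is that the snippet lives in one place instead of being copied into every template.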

Configuring oVirt

Packt
19 Nov 2013
9 min read
(For more resources related to this topic, see here.) Configuring the NFS storage NFS storage is a fairly common type of storage that is quite easy to set up and run even without special equipment. You can take the server with large disks and create NFS directory.But despite the apparent simplicity of NFS, setting s should be done with attention to details. Make sure that the NFS directory is suitable for use; go to the procedure of connecting storage to the data center. The following options are displayed after you click on the Configure Storage dialog box in which we specify the basic storage configuration: Name and Data Center: It is used to specify a name and target of the data center for storage Domain Function/Storage Type: It is used to choose a data function and NFS type Use Host: It is used to enter the host that will make the initial connection to the storage and a host who will be in the role of SPM Export Path: It is used to enter the storage server name and path of the exported directory Advanced Parameters: It provides additional connection options, such as NFS version, number of retransmissions and timeout, that are recommended to be changed only in exceptional cases Fill in the required storage settings and click on the OK button; this will start the process of connecting storage. The following image shows the New Storage dialog box with the connecting NFS storage: Configuring the iSCSI storage This section will explain how to connect the iSCSI storage to the data center with the type of storage as iSCSI. You can skip this section if you do not use iSCSI storage. iSCSI is a technology for building SAN (Storage Area Network). A key feature of this technology is the transmission of SCSI commands over the IP networks. Thus, there is a transfer of block data via IP. By using the IP networks, data transfer can take place over long distances and through network equipment such as routers and switches. These features make the iSCSI technology good for construction of low-cost SAN. oVirt supports iSCSI and iSCSI storages that can be connected to oVirt data centers. Then begin the process of connecting the storage to the data center. After you click on the Configure Storage dialog box in which you specify the basic storage configuration, the following options are displayed: Name and Data Center: It is used to specify the name and target of the data center. Domain Function/Storage Type: It is used to specify the domain function and storage type. In this case, the data function and iSCSI type. Use Host: It is used to specify the host to which the storage (SPM) will be attached. The following options are present in the search box for iSCSI targets: Address and Port: It is used to specify the address and port of the storage server that contains the iSCSI target User Authentication: Enable this checkbox if authentication is to be used on the iSCSI target CHAP username and password: It is used to specify the username and password for authentication Click on the Discover button and oVirt Engine connects to the specified server for the searching of iSCSI targets. In the resulting list, click on the designated targets, we click on the Login button to authenticate. Upon successful completion of the authentication, the display target LUN will be displayed; check it and click on OK to start connection to the data center. New storage will automatically connect to the data center. 
If it does not, select the location from the list and click on the Attach button in the detail pane where we choose a target data center. Configuring the Fibre Channel storage If you have selected Fibre Channel when creating the data center, we should create a Fibre Channel storage domain. oVirt supports Fibre Channel storage based on multiple preconfigured Logical Unit Numbers (LUN). Skip this section if you do not use Fibre Channel equipment. Begin the process of connecting the storage to the data center. Open the Guide Me wizard and click on the Configure Storage dialog box where you specify the basic storage configuration: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: Here we need to specify the data function and Fibre Channel type Use Host: It specifies the address of the virtualization host that will act as the SPM In the area below, the list of LUNs are displayed, enable the Add LUN checkbox on the selected LUN to use it as Fibre Channel data storage. Click on the OK button and this will start the process of connecting storage to the data centers. In the Storage tab and in the list of storages, we can see created Fibre Channel storage. In the process of connecting, its status will change and at the end new storage will be activated and connected to the data center. The connection process can also be seen in the event pane. The following screenshot shows the New Storage dialog box with Fibre Channel storage type: Configuring the GlusterFS storage GlusterFS is a distributed, parallel, and linearly scalable filesystem. GlusterFS can combine the data storage that are located on different servers into a parallel network filesystem. GlusterFS's potential is very large, so developers directed their efforts towards the implementation and support of GlusterFS in oVirt (GlusterFS documentation is available at http://www.gluster.org/community/documentation/index.php/Main_Page). oVirt 3.3 has a complete data center with the GlusterFS type of storage. Configuring the GlusterFS volume Before attempting to connect GlusterFS storage into the data center, we need to create the volume. The procedure of creating GlusterFS volume is common in all versions. Select the Volumes tab in the resource pane and click on Create Volume. In the open window, fill the volume settings: Data Center: It is used to specify the data center that will be attached to the GlusterFS storage. Volume Cluster: It is used to specify the name of the cluster that will be created. Name: It is used to specify a name for the new volume. Type: It is used to specify the type of GlusterFS volume available to choose from, there are seven types of volume that implement various strategic placements of data on the filesystem. Base types are Distribute, Replicate, and Stripe and other combination of these types: Distributed Replicate, Distributed Stripe, Striped Replicate, and Distributed Striped Replicate (additional info can be found at the link: http://gluster.org/community/documentation/index.php/GlusterFS_Concepts). Bricks: With this button, a list of bricks will be collected from the filesystem. Brick is a separate piece with which volume will be built. These bricks are distributed across the hosts. As bricks use a separate directory, it should be placed on a separate partition. Access Protocols: It defines basic access protocols that can be used to gain access to the following: Gluster: It is a native protocol access to volumes GlusterFS, enabled by default. 
NFS: It is an access protocol based on NFS. CIFS: It is an access protocol based on CIFS. Allow Access From: It allows us to enter a comma-separated IP address, hostnames, or * for all hosts that are allowed to access GlusterFS volume. Optimize for oVirt Store: Enabling this checkbox will enable extended options for created volume. The following screenshot shows the dialog box of Create Volume: Fill in the parameters, click on the Bricks button, and go to the new window to add new bricks with the following properties: Volume Type: This is used to change the previously marked type of the GlusterFS volume Server: It is used to specify a separate server that will export GlusterFS brick Brick Directory: It is used to specify the directory to use Specify the server and directory and click on Add. Depending on the type of volume, specify multiple bricks. After completing the list with bricks, click on the OK button to add volume and return to the menu. Click on the OK button to create GlusterFS volumes with the specified parameters. The following screenshot shows the Add Bricks dialog box: Now that we have GlusterFS volume, we select it from the list and click on Start. Configuring the GlusterFS storage oVirt 3.3 has support for creating data centers with the GlusterFS storage type: The GlusterFS storage type requires a preconfigured data center. A pre-created cluster should be present inside the data center. The enabled Gluster service is required. Go to the Storage section in resource pane and click on New Domain. In the dialog box that opens, fill in the details of our storage. The details are given as follows: Name and Data Center: It is used to specify the name and data center Domain Function/Storage Type: It is used to specify the data function and GlusterFS type Use Host: It is used to specify the host that will connect to the SPM Path: It is used to specify the path to the location in the format hostname:volume_name VFS Type: Leave it as glusterfs and leave Mount Option blank Click on the OK button; this will start the process of creating the repository. The created storage automatically connects to the specified data centers. If not, select the repository created in the list, and in the subtab named Data Center in the detail pane, click on the Attach button and choose our data center. After you click on OK, the process of connecting storage to the data center starts. The following screenshot shows the New Storage dialog box with the GlusterFS storage type. Summary In this article we learned how to configure NFS Storage, iSCSI Storage, FC storages, and GlusterFS Storage. Resources for Article: Further resources on this subject: Tips and Tricks on Microsoft Application Virtualization 4.6 [Article] VMware View 5 Desktop Virtualization [Article] Qmail Quickstarter: Virtualization [Article]

Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts the fact of (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance. So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub. The application To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg [Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of an input and a p element that will display our model and will also update the model on change. To show this, we'll add an input and a p to our view bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction. ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app', ['nrg']) .controller('MainController', ['$scope', function($scope) { $scope.text = 'This is set in Angular'; $scope.destroy = function() { $scope.$destroy(); } }]); })(window, document, angular); data-component specifies the React component we want to mount. data-ctrl (optional) specifies the controller we want to inject into the directive—this will allow specific components to be accessible on scope itself rather than scope.$parent. data-ng-model is the model we are going to pass between our Angular controller and our React view. <div data-ng-controller="MainController"> <!-- React component --> <div data-component="reactComponent" data-ctrl="" data-ng-model="text"> <!-- <input /> --> <!-- <button></button> --> <!-- <p></p> --> </div> <!-- Angular view --> <input type="text" data-ng-model="text" /> <p>{{text}}</p> </div> As you can see, the view has meaning when using Angular to render React components. <div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to <div id="reactComponent"></div>, which requires referencing a script file to see what component (and settings) will be mounted on that element. The Angular module - nrg.js The main functions of this reusable Angular module will be to: Specify the DOM element that the component should be mounted onto. Render the React component when changes have been made to the model. Pass the scope and element attributes to the component. Unmount the React component when the Angular scope is destroyed. The skeleton of our module looks like this: ;(function(window, document, angular, React, undefined) { 'use strict'; angular.module('nrg', []) To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently just render and unmount. .factory('ComponentFactory', [function() { return { render: function() { }, unmount: function() { } } }]) This will be injected into our directive. .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) { return { restrict: 'EA', If a controller has been specified on the element via data-ctrl, then inject the $controller service.
As mentioned earlier, this will allow scope variables and functions to be used within the React component to be accessible directly on scope, rather than scope.$parent (the controller also doesn't need to be declared in the view with ng-controller). controller: function($scope, $element, $attrs){ return ($attrs.ctrl) ? $controller($attrs.ctrl, {$scope:$scope, $element:$element, $attrs:$attrs}) : null; }, Here's an isolated scope with two-way binding on data-ng-model. scope: { ngModel: '=' }, link: function(scope, element, attrs) { // Calling ComponentFactory.render() & watching ng-model } } }]); })(window, document, angular, React); ComponentFactory Fleshing out the ComponentFactory, we'll need to know how to render and unmount components. React.renderComponent( ReactComponent component, DOMElement container, [function callback] ) As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element), and any properties (attrs and scope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component." .factory('ComponentFactory', [function() { return { render: function(component, element, scope, attrs) { // If you have name-spaced your components, you'll want to specify that here - or pass it in via an attribute etc React.renderComponent(window[component]({ scope: scope, attrs: attrs }), element[0]); }, unmount: function(element) { React.unmountComponentAtNode(element[0]); } } }]) Component directive Back in our directive, we can now set up when we are going to call these two functions. link: function(scope, element, attrs) { // Collect the element's attrs in a nice usable object var attributes = {}; angular.forEach(element[0].attributes, function(a) { attributes[a.name.replace('data-','')] = a.value; }); // Render the component when the directive loads ComponentFactory.render(attrs.component, element, scope, attributes); // Watch the model and re-render the component scope.$watch('ngModel', function() { ComponentFactory.render(attrs.component, element, scope, attributes); }, true); // Unmount the component when the scope is destroyed scope.$on('$destroy', function () { ComponentFactory.unmount(element); }); } This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated. scope.$on('renderMe', function() { $timeout(function() { ComponentFactory.render(attrs.component, element, scope, attributes); }); }); The React component We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs. /** @jsx React.DOM */ ;(function(window, document, React, undefined) { 'use strict'; window.reactComponent = React.createClass({ This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props.
render: function(){ return ( <div> <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} /> <button onClick={this.deleteScope}>Destroy Scope</button> <p>{this.props.scope.ngModel}</p> </div> ) }, Via the onChange event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props: handleChange: function(event) { var _this = this; this.props.scope.$apply(function() { _this.props.scope.ngModel = event.target.value; }); }, Here we deal with the click event, deleteScope. The controller is accessible via scope.$parent. If we had injected a controller into the component directive, its contents would be accessible directly on scope, just as ngModel is. deleteScope: function() { this.props.scope.$parent.destroy(); } }); })(window, document, React); The result Putting this code together (you can view the completed code on GitHub, or see it in action) we end up with: Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both. A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component. Pretty cool. But where is my perf increase!? This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink worth of randomly generated data at it. 5000 bits to be precise. Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary: Humans are: Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered as "instant". Limited: You can't really show more than about 2000 pieces of information to a human on a single page. Anything more than that is really bad UI, and humans can't process this anyway. Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile. Brute force performance demonstration ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app') .controller('NumberController', ['$scope', function($scope) { $scope.numbers = []; ($scope.numGen = function(){ for(var i = 0; i < 5000; i++) { $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1; } })(); }]); })(window, document, angular); Angular ng-repeat <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <table> <tr ng-repeat="number in numbers"> <td>{{number}}</td> </tr> </table> </div> There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds. React component <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <div data-component="numberComponent" data-ng-model="numbers"></div> </div> ;(function(window, document, React, undefined) { window.numberComponent = React.createClass({ render: function() { var rows = this.props.scope.ngModel.map(function(number) { return ( <tr> <td>{number}</td> </tr> ); }); return ( <table>{rows}</table> ); } }); })(window, document, React); So that just happened. 270 milliseconds start to finish. Around 80% faster!
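One small refinement worth noting, which is not part of the original demo: React warns when children generated in an array (as the rows are here) have no key prop, and keys also help its diffing when the data is refreshed. A keyed version of the row-building code might look like this:

var rows = this.props.scope.ngModel.map(function(number, i) {
    // The array index is acceptable as a key here because the whole list is regenerated on refresh
    return (
        <tr key={i}>
            <td>{number}</td>
        </tr>
    );
});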
Conclusion So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of “feeling a tad sluggish”, but it should be remembered that perceived performance is all that matters to the user. Altering the manner in which content is loaded could end up being a better investment of time. Users will feel performance increases sooner on mobile websites, however, which is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials! About the author A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting-edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.


Social Media and Magento

Packt
19 Aug 2014
17 min read
Social networks such as Twitter and Facebook are ever popular and can be a great source of new customers if used correctly on your store. This article by Richard Carter, the author of Learning Magento Theme Development, covers the following topics: integrating a Twitter feed into your Magento store, integrating a Facebook Like Box into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube into the product page. (For more resources related to this topic, see here.) Integrating a Twitter feed into your Magento store If you're active on Twitter, it can be worthwhile to let your customers know. While you can't (yet, anyway!) accept payment for your goods through Twitter, it can be a great way to develop a long-term relationship with your store's customers and increase repeat orders. One way you can tell customers you're active on Twitter is to place a Twitter feed that contains some of your recent tweets on your store's home page. While you need to be careful not to get in the way of your store's true content, such as your most recent products and offers, you could add the Twitter feed in the footer of your website. Creating your Twitter widget To embed your tweets, you will need to create a Twitter widget. Log in to your Twitter account, navigate to https://twitter.com/settings/widgets, and follow the instructions given there to create a widget that contains your most recent tweets. This will create a block of code for you that looks similar to the following code: <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script> Embedding your Twitter feed into a Magento template Once you have the Twitter widget code to embed, you're ready to embed it into one of Magento's template files. This Twitter feed will be embedded in your store's footer area. So, open your theme's /app/design/frontend/default/m18/template/page/html/footer.phtml file and add the highlighted section of the following code: <div class="footer-about footer-col"> <?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('footer_about')->toHtml(); ?> <?php $_helper = Mage::helper('catalog/category'); $_categories = $_helper->getStoreCategories(); if (count($_categories) > 0): ?> <ul> <?php foreach($_categories as $_category): ?> <li> <a href="<?php echo $_helper->getCategoryUrl($_category) ?>"> <?php echo $_category->getName() ?> </a> </li> <?php endforeach; ?> </ul> <?php endif; ?> <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script> </div> The result of the preceding code is a Twitter feed similar to the following one embedded on your store: As you can see, the Twitter widget is quite cumbersome. So, it's wise to be sparing when adding this to your website.
Sometimes, a simple Twitter icon that links to your account is all you need!   Integrating a Facebook Like Box into your Magento store   Facebook is one of the world's most popular social networks; with careful integration, you can help drive your customers to your Facebook page and increase long term interaction. This will drive repeat sales and new potential customers to your store. One way to integrate your store's Facebook page into your Magento site is to embed your Facebook page's news feed into it.   Getting the embedding code from Facebook   Getting the necessary code for embedding from Facebook is relatively easy; navigate to the Facebook Developers website at https://developers.facebook.com/docs/plugins/like-box-for-pages. Here, you are presented with a form. Complete the form to generate your embedding code; enter your Facebook page's URL in the Facebook Page URL field (the following example uses Magento's Facebook page):   Click on the Get Code button on the screen to tell Facebook to generate the code you will need, and you will see a pop up with the code appear as shown in the following screenshot:   Adding the embed code into your Magento templates   Now that you have the embedding code from Facebook, you can alter your templates to include the code snippets. The first block of code for the JavaScript SDK is required in the header.phtml file in your theme's directory at /app/design/frontend/default/m18/template/page/html/. Then, add it at the top of the file:   <div id="fb-root"></div> <script>(function(d, s, id) {   var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id;   js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk'));</script>   Next, you can add the second code snippet provided by the Facebook Developers site where you want the Facebook Like Box to appear in your page. For flexibility, you can create a static block in Magento's CMS tool to contain this code and then use the Magento XML layout to assign the static block to a template's sidebar.   Navigate to CMS | Static Blocks in Magento's administration panel and add a new static block by clicking on the Add New Block button at the top-right corner of the screen. Enter a suitable name for the new static block in the Block Title field and give it a value facebook in the Identifier field. Disable Magento's rich text editor tool by clicking on the Show / Hide Editor button above the Content field.   Enter in the Content field the second snippet of code the Facebook Developers website provided, which will be similar to the following code:   <div class="fb-like-box" data-href="https://www.facebook.com/Magento" data-width="195" data-colorscheme="light" data-show-faces="true" data-header="true" data-stream="false" data-show-border="true"></div> Once complete, your new block should look like the following screenshot:   Click on the Save Block button to create a new block for your Facebook widget. Now that you have created the block, you can alter your Magento theme's layout files to include the block in the right-hand column of your store.   Next, open your theme's local.xml file located at /app/design/frontend/default/m18/layout/ and add the following highlighted block of XML to it. 
This will add the static block that contains the Facebook widget:   <reference name="right">   <block type="cms/block" name="cms_facebook">   <action method="setBlockId"><block_id>facebook</block_id></action>   </block>   <!--other layout instructions -->   </reference>   If you save this change and refresh your Magento store on a page that uses the right-hand column page layout, you will see your new Facebook widget appear in the right-hand column. This is shown in the following screenshot:   Including social share buttons in your product pages   Particularly if you are selling to consumers rather than other businesses, you can make use of social share buttons in your product pages to help customers share the products they love with their friends on social networks such as Facebook and Twitter. One of the most convenient ways to do this is to use a third-party service such as AddThis, which also allows you to track your most shared content. This is useful to learn which products are your most-shared products within your store! Styling the product page a little further   Before you begin to integrate the share buttons, you can style your product page to provide a little more layout and distinction between the blocks of content. Open your theme's styles.css file and append the following CSS (located at /skin/frontend/default/m18/css/) to provide a column for the product image and a column for the introductory content of the product:   .product-img-box, .product-shop {   float: left;   margin: 1%;   padding: 1%;   width: 46%;   }   You can also add some additional CSS to style some of the elements that appear on the product view page in your Magento store:   .product-name { margin-bottom: 10px; }   .or {   color: #888; display: block; margin-top: 10px;   }   .add-to-box { background: #f2f2f2; border-radius: 10px; margin-bottom: 10px; padding: 10px; }   .more-views ul { list-style-type: none;   }   If you refresh a product page on your store, you will see the new layout take effect: Integrating AddThis   Now that you have styled the product page a little, you can integrate AddThis with your Magento store. You will need to get a code snippet from the AddThis website at http://www.addthis.com/get/sharing. Your snippet will look something similar to the following code:   <div class="addthis_toolboxaddthis_default_style ">   <a class="addthis_button_facebook_like" fb:like:layout="button_ count"></a>   <a class="addthis_button_tweet"></a>   <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>   <a class="addthis_counteraddthis_pill_style"></a> </div>   <script type="text/javascript">var addthis_config = {"data_track_ addressbar":true};</script>   <script type="text/javascript" src="//s7.addthis.com/js/300/addthis_ widget.js#pubid=youraddthisusername"></script>   Once the preceding code is included in a page, this produces a social share tool that will look similar to the following screenshot:   Copy the product view template from the view.phtml file from /app/design/frontend/base/default/catalog/product/ to /app/design/frontend/default/m18/catalog/product/ and open your theme's view.phtml file for editing. You probably don't want the share buttons to obstruct the page name, add-to-cart area, or the brief description field. So, positioning the social share tool underneath those items is usually a good idea. 
Locate the snippet in your view.phtml file that has the following code:   <?php if ($_product->getShortDescription()):?>   <div class="short-description">   <h2><?php echo $this->__('Quick Overview') ?></h2>   <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>   </div>   <?phpendif;?>   Below this block, you can insert your AddThis social share tool highlighted in the following code so that the code is similar to the following block of code (the youraddthisusername value on the last line becomes your AddThis account's username):   <?php if ($_product->getShortDescription()):?>   <div class="short-description">   <h2><?php echo $this->__('Quick Overview') ?></h2>   <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>   </div>   <?phpendif;?>   <div class="addthis_toolboxaddthis_default_style ">   <a class="addthis_button_facebook_like" fb:like:layout="button_ count"></a>   <a class="addthis_button_tweet"></a>   <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>   <a class="addthis_counteraddthis_pill_style"></a> </div>   <script type="text/javascript">var addthis_config = {"data_track_ addressbar":true};</script>   <script type="text/javascript" src="//s7.addthis.com/js/300/addthis_ widget.js#pubid=youraddthisusername"></script>   If you want to reuse this block in multiple places throughout your store, consider adding this to a static block in Magento and using Magento's XML layout to add the block as required.   Once again, refresh the product page on your Magento store and you will see the AddThis toolbar appear as shown in the following screenshot. It allows your customers to begin sharing their favorite products on their preferred social networking sites.     If you can't see your changes, don't forget to clear your caches by navigating to System | Cache Management. If you want to provide some space between other elements and the AddThis toolbar, add the following CSS to your theme's styles.css file:   .addthis_toolbox {   margin: 10px 0;   }   The resulting product page will now look similar to the following screenshot. You have successfully integrated social sharing tools on your Magento store's product page:     Integrating product videos from YouTube into the product page   An increasingly common occurrence on ecommerce stores is the use of video in addition to product photography. The use of videos in product pages can help customers overcome any fears they're not buying the right item and give them a better chance to see the quality of the product they're buying. You can, of course, simply add the HTML provided by YouTube's embedding tool to your product description. However, if you want to insert your video on a specific page within your product template, you can follow the steps described in this section. Product attributes in Magento   Magento products are constructed from a number of attributes (different fields), such as product name, description, and price. Magento allows you to customize the attributes assigned to products, so you can add new fields to contain more information on your product. Using this method, you can add a new Video attribute that will contain the video embedding HTML from YouTube and then insert it into your store's product page template.   
An attribute value is text or other content that relates to the attribute, for example, the attribute value for the Product Name attribute might be Blue Tshirt. Magento allows you to create different types of attribute:   •        Text Field: This is used for short lines of text.   •        Text Area: This is used for longer blocks of text.   •        Date: This is used to allow a date to be specified.   •        Yes/No: This is used to allow a Boolean true or false value to be assignedto the attribute.   •        Dropdown: This is used to allow just one selection from a list of optionsto be selected.   •        Multiple Select: This is used for a combination box type to allow one ormore selections to be made from a list of options provided.   •        Price: This is used to allow a value other than the product's price, specialprice, tier price, and cost. These fields inherit your store's currency settings.   •        Fixed Product Tax: This is required in some jurisdictions for certain types ofproducts (for example, those that require an environmental tax to be added). Creating a new attribute for your video field   Navigate to Catalog | Attributes | Manage Attributes in your Magento store's control panel. From there, click on the Add New Attribute button located near the top-right corner of your screen:     In the Attribute Properties panel, enter a value in the Attribute Code field that will be used internally in Magento to refer this. Remember the value you enter here, as you will require it in the next step! We will use video as the Attribute Code value in this example (this is shown in the following screenshot). You can leave the remaining settings in this panel as they are to allow this newly created attribute to be used with all types of products within your store.   In the Frontend Properties panel, ensure that Allow HTML Tags on Frontend is set to Yes (you'll need this enabled to allow you to paste the YouTube embedding HTML into your store and for it to work in the template). This is shown in the following screenshot:   Now select the Manage Labels / Options tab in the left-hand column of your screen and enter a value in the Admin and Default Store View fields in the Manage Titles panel:     Then, click on the Save Attribute button located near the top-right corner of the screen. Finally, navigate to Catalog | Attributes | Manage Attribute Sets and select the attribute set you wish to add your new video attribute to (we will use the Default attribute set for this example). In the right-hand column of this screen, you will see the list of Unassigned Attributes with the newly created video attribute in this list:     Drag-and-drop this attribute into the Groups column under the General group as shown in the following screenshot:   Click on the Save Attribute Set button at the top-right corner of the screen to add the new video attribute to the attribute set.   Adding a YouTube video to a product using the new attribute   Once you have added the new attribute to your Magento store, you can add a video to a product. Navigate to Catalog | Manage Products and select a product to edit (ensure that it uses one of the attribute sets you added the new video attribute to). The new Video field will be visible under the General tab:   Insert the embedding code from the YouTube video you wish to use on your product page into this field. 
The embed code will look like the following: <iframe width="320" height="240" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allowfullscreen></iframe> Once you have done that, click on the Save button to save the changes to the product. Inserting the video attribute into your product view template Your final task is to allow the content of the video attribute to be displayed in your product page templates in Magento. Open your theme's view.phtml file from /app/design/frontend/default/m18/catalog/product/ and locate the following snippet of code: <div class="product-img-box"> <?php echo $this->getChildHtml('media') ?> </div> Add the following highlighted code to the preceding code to check whether a video for the product exists and show it if it does exist (note that the variable is named $_videoHtml because PHP variable names cannot contain hyphens): <div class="product-img-box"> <?php $_videoHtml = $_product->getResource()->getAttribute('video')->getFrontend()->getValue($_product); if ($_videoHtml) echo $_videoHtml; ?> <?php echo $this->getChildHtml('media') ?> </div> If you now refresh the product page that you have added a video to, you will see that the video appears in the same column as the product image. This is shown in the following screenshot: Summary In this article, we looked at expanding the customization of your Magento theme to include elements from social networking sites. You learned about integrating a Twitter feed and Facebook feed into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube. Resources for Article: Further resources on this subject: Optimizing Magento Performance — Using HHVM [article] Installing Magento [article] Magento Fundamentals for Developers [article]


Visualizing Productions Ahead of Time with Celtx

Packt
12 Apr 2011
13 min read
Celtx: Open Source Screenwriting Beginner's Guide Write and market Hollywood-perfect movie scripts the free way!      If you just write scripts, you won't need the features in this article. However, Indie (independent) producers, folks actually making movies, putting together audio visual shows, or creating documentaries will find these tools of immense value and here we look at visualizing all this good stuff (pun, as ever, intended). Sketching Celtx's Sketch Tool allows us to easily visualize ideas and shot setups by adding our drawings of them to projects. Sketches can be separate items in the Project Library (or in folders within the library) or added to a project's Storyboard (more on that in the next section of this article). The Sketch Tool comes with pre-loaded icons for people, cameras, and lights, which we can drag and drop into our sketches, making them look more polished. The icons are SVG images (Scalable Vector Graphics), which allow us to make them as large as we like without losing any quality in the image. The http://celtx.com site makes additional icons available (Art Packs) at low costs (example: $2.99 for 23 icons). Also provided in the Sketch tool are tools for drawing lines, arrows, shapes, and for adding text labels. Just to avoid confusion, let me tell you that there is nothing like pens or erasers or other free drawing features. We'll use various drag-and-drop icons and any of us, artistic talent or not, can turn out very professional-looking storyboards in no time at all. Celtx Projects are containers which hold items such as scripts, index cards, reports, schedules, storyboards, prop lists, and more including sketches. Time for action - starting a new sketch We have two ways of creating a new Sketch, which are as follows: First, open a project (new or already in progress) and look in the Project Library in the upper-left quadrant of the Celtx screen. Sketch is included by default, as shown in the following screenshot: The following steps show the second method for creating a new sketch: Click on File at the very upper-left of the Celtx main window On the drop-down menu, choose Add Item... Click on Sketch and then click on OK, as shown in the following screenshot: What just happened? The new Sketch is added to the Project Library window. If one was already there (likely since it is by default), you now have two with the same name. No problem; simply right-click on one of them, choose Rename, and change its name. We can also delete or duplicate a Sketch this way. To open the main Sketch Tool window from anywhere in a Celtx project, double-click on the name of the Sketch in the Project Library. In the case of a new Sketch created through the Add Item dialog box, as shown in the preceding section, it will already be open and ready for use. That is, the main window covering most of the center of the Celtx screen is where we sketch. Double-clicking on Screenplay or any other item in the Project Library window navigates us to that item and away from our Sketch. It is saved automatically. The following screenshot shows how the Sketch Tool looks when opened: Sketch Tool toolbar Along the top of the middle window (the Sketch Tool window), we have a toolbar. In the preceding screenshot, most of these tools are grayed out. They become fully visible when conditions are met for their use. Let's take a tour. The following screenshot shows the sketch toolbar in its entirety: Now, let's explore these tools individually and see what they do. 
In addition to the tool bar (shown in the preceding screenshot), I'll include an image of each individual tool as well: Select tool: The first tool from the left is for selecting items in the sketch. There's nothing to select yet, so let's click on the second tool from the left. The select tool is shown in the following screenshot: The diagonal line: It is the second tool from the left and it draws a line. Move the cursor (now a cross) to the point where the line begins, hold down the left mouse button, and drag out the line, releasing the mouse button at its ending point. The diagonal line is shown in the following screenshot: Line tool: Click on the first tool above. The mouse cursor becomes a hollow arrow on a PC but remains a black arrow on the Mac. Select the line we drew by clicking on it (a little hard for a line) or holding down the left mouse button and drawing a box all the way around the line (easier). When the mouse button is released, we know the line is selected because it has a dotted blue line around it and two small gray circles or "handles" at either end. Once the item is selected, just hold down the left mouse button and it can be moved anywhere in the Sketch Tool window.Select either of the handles by moving the cursor arrow over it and pressing the left mouse button. We can now move that end of the line all over the place and it stays wherever the button is released; it's the same for the other end of the line. While the line is selected, just hit the Delete key to erase it. This also works in the same way for all the other elements. Arrow Tool: The third tool from the left (the diagonal arrow) works exactly like the line tool, except there's an arrowhead on one end. It's a useful drawing tool for pointing to something in our diagrams or using as a spear if it's that kind of movie, eh? The arrow tool is shown in the following screenshot: Box and Circle Tools: The fourth tool (the box) draws a box or rectangle and the fifth (the circle) a circle or oval. Clicking on the select tool (the first tool on the left) and using its cursor to click inside of a square or circle selects it. There are two little gray circles which allow us to manipulate the figure just as we did with the line above. The box tool is shown in the following screenshot: And the circle tool is shown in the following screenshot: Text Tool: Suppose we want to label a line, arrow, box, or circle, we can use the sixth tool, that is, the little letter "A", which is shown in the following screenshot: Draw a box and click on the A. The mouse cursor is now an "I-beam". Click in the box. A mini-dialog box appears, as shown in the following screenshot: This Edit Text box allows the selection of the font, size, bold, italic, and provides a place to type in the label, such as the stirring This is a box. Click on OK and the label is in the box, as shown in the following screenshot: If we need to edit an existing label, click on the select tool, double-click on the text, the Edit Text mini-dialog box comes up, and you can edit the text. Keeping the labeled box as an example, we're ready to visit the next two tools, namely, the empty circle and his brother the solid circle, both of which are grayed out at the moment. Let's wake them up. Stroke and Fill Tools: Click on the select tool and then click inside the box. These two tools turn blue and allow us access to them. 
These are shown in the following screenshot: The empty circle controls the color of the stroke (that's the outline of the item, such as our box) and the solid circle, the fill (the inside color of the item). Note that there is a small arrow on the right side of each circle. Click on the one next to the solid (fill) circle. A color selection box drops down; choose a color by clicking on it. The box now has that color inside it as a fill, as shown in the following screenshot: If you want to change the stroke and/or fill colors, just click on the stroke or fill tool to drop-down the selection box again. Moving on, add another box (or circle, whatever) and move it. Use the select tool, hold down the Shift key, click on the new box, and move it over the original box. Layer Tools: Okay, we now have one item on top of another. Sometimes that's inconvenient in a scene diagram and we need to reverse the order (move one or more items up a layer or more). With the top box selected, look at the toolbar. The next four icons to the right of the stroke and fill circles are now "lit up" (no longer grayed out). The layer tools are shown in the following screenshot: These are, in the order, lower to bottom, lower, raise, and raise to top. In other words, the selected box would be lowered to the bottom-most layer, lowered one layer, raised one layer, or jumped all the way to the top-most layer. Group and Ungroup: Now, to save a few million electrons, let's use the same two boxes again. Select the one on top, hold down the Shift key, and both boxes are now selected. We can move them together, for example. However, note that the next icon to the right is now no longer gray (it's now two blue boxes, one over the other and four smaller black ones). This is the group tool, which is shown in the following screenshot: Clicking on it groups or bonds the selected items together. This, of course, lights up the next icon on the toolbar, the (wait for it) ungroup tool, which restores independence to grouped items. Undo and Redo Tools: The next two toolbar icons, the curved arrows, are undo and redo tools. They reverse an action to the previous state or restore an item to its next state (if there is one, that is, "undo" and "redo"). These tools are shown in the following screenshot: Cut, Copy, and Paste Tools: The last three tools on the Sketch Tool window toolbar are the cut, copy, and paste tools, as shown in the following screenshot: Cutting removes an item but retains it on the clipboard, copying leaves the item and also puts a copy of it on the clipboard, while paste puts the item from the clipboard back into the sketch. Now, we come to the fun part, icons! As in "yes, icon do a professional-looking sketch." (Sorry, couldn't resist.) Icons for a professional look Celtx provides icons, giving our sketches a polished professional look (neat, artistic, follows industry entertainment conventions) while requiring little or no artistic ability. The Palettes windows, found on the right side of the main Sketch Tool window, list available icons. The default installation of Celtx includes a very limited number of icons, one camera, two kinds of lights, and a top-down view of a man and a woman. Celtx, of course, is open source software and thus free (a price I can afford). However, one of the ways in which its sponsoring developer, a Canadian company, Greyfirst Corp. in St. John's, Newfoundland, makes money is by selling add-ons to the program, one type being additional icons in the form of Art Packs. 
In the following screenshot, if we click on the + Get Art Packs link, a webpage opens where one can order Art Packs and other add-ons at quite reasonable prices: Now, to use an icon in a sketch, let's start with the camera. Open a new sketch by doubleclicking on Sketch in the Project Library window or Add Item from the main File menu. In the Palettes window, move the mouse cursor over Camera and hold down the left mouse button while dragging the camera icon into the main Sketch Tool window. It looks like the following screenshot: Manipulating icons: When any icon is dragged into the main window of the Sketch Tool (and anytime that icon is selected by clicking on it with the select tool cursor described earlier) it has a dotted circle around it (as shown in the preceding screenshot) and two small solid circles (violet on top, olive below). Clicking on the violet circle and holding down the left mouse button while dragging allows rotation of the icon. Releasing the button stops rotation and leaves the icon in that orientation. Clicking on the olive circle (the lower one) and holding down the left mouse button and dragging allow resizing the icon, either larger or smaller. As these icons, like the lines, arrows, boxes, and circles we discussed earlier in this article are also SVG (Scalable Vector Graphics), we can have them as large as desired with no pixilation or other distortion. Using the Sketch Tool toolbar and the supplied icons, we can rapidly and easily draw professional looking diagrams like the scene shown in the following screenshot, which shows two lights, the camera, the talent, arrows showing their movement in the scene, and the props: Again, additional icons may be purchased from the http://celtx.com website. For example, the following screenshot shows the twenty-three icons available in Art Pack 1: Saving a finished Sketch Now is a good time for us to take a moment and discuss the limitations of the Sketch Tool. This feature provides a fast way of whipping up a scene diagram from inside a Celtx project. It does not replace an outside drawing program nor give us the functionality of something like Adobe Illustrator, but it is quite powerful and very handy. By the way, we can use external media files in Celtx and we'll do just that in both of the remaining sections of this article. Another limitation concerns saving sketches. There's no way of exporting the sketch as an external image such as .jpg or .bmp. In fact, even saving within the Celtx project is automated. Do the following to see what I mean: In a Celtx project, double-click on Sketch in the Project Library to start a sketch. Draw something. Double-click on Screenplay. Then double-click on Sketch. The drawing is still there. Save the Celtx project, exit, and open it again. Double-click on Sketch. Drawing's still there! We can even use Add Item from the File menu (a shortcut is the little plus symbol beneath the Project Library title) and add another Sketch (same name) to the Project Library and even draw a new sketch in it. Of course, having different drawings with the same name is hardly utilitarian, so here's how we really save a sketch.


Installation and Configuration of Oracle SOA Suite 11g R1: Part 1

Packt
19 Nov 2009
6 min read
These instructions are Windows based but Linux users should have no difficulty adjusting them for their environment. Checking your installation If you already have SOA Suite and JDeveloper installed, confirm that you have the correct version and configuration by following the steps in the section below called Testing your installation. In addition, you may want to complete the items in the section called Additional actions. Finally, you must complete the section called Configuration to run a tutorial. What you will need and where to get it This installation requires 3 GB or more available memory. If you have less memory, try separating the installation of the database, the servers, and JDeveloper onto different machines. Memory and Disk Space requirements This installation requires 3 GB or more available memory. If you have less memory, try separating the installation of the database, the servers, and JDeveloper onto different machines. The installation process requires about 12 GB of disk space. After installation, you can delete the files used by installation to save about 4 GB. As you can see, you are installing a lot of software with a large memory and disk footprint. Running your disk defragmentation program now, before you start downloading and installing, can significantly improve install time as well as performance and disk space usage later on. Downloading files Download all the software to get started. In the following steps, save all downloaded files to c:stageSOA. This document assumes that path. If you save them somewhere else then make sure there are no spaces in your path and adjust accordingly when c:stageSOA is referenced in this document. Go to: http://www.oracle.com/technology/products/soa/soasuite/index.html, and download the following from SOA Suite 11g Release 1 (11.1.1.1.0) to c:stageSOA: WebLogic Server:wls1031_win32.exe Repository Creation Utility:ofm_rcu_win32_11.1.1.1.0_disk1_1of1.zip SOA Suite:ofm_soa_generic_11.1.1.1.0_disk1_1of1.zip JDeveloper Studio, base install:jdevstudio11111install.jar Unzip the SOA Suite ZIP file to c:stageSOA. Unzip the RCU ZIP file to c:stageSOA. Additional Files needed: Tutorial Files: In Chapter 3, you were directed to download the files needed for this tutorial. Do that now as some are used during installation. You can download the files from here:http://www.oracle.com/technology/products/soa/soasuite/11gthebook.html. Unzip the tutorial ZIP file to c:stageSOA. SOA Extension for JDeveloper: You will get this later using the JDeveloper update option. Oracle Service Bus: When you are ready to do the Oracle Service Bus (OSB) lab, you will download the install file to install OSB. Checking your database Having your database up and running is the most important pre-requisite for installing SOA Suite. Read the following bulleted requirements carefully to be sure you are ready to begin the SOA Suite installation: You need one of: Oracle XE Universal database version 10.2.0.1 Oracle 10g database version 10.2.0.4+ Oracle 11g database version 11.1.0.7+ You cannot use any other database version in 11gR1 (certification of additional databases is on the roadmap). Specifically, you cannot use XE Standard, it must be Universal. We have seen problems with installing XE when a full 10g database is already installed in the environment. The Windows registry sometimes gets the database file location confused. It is recommended to pick one or the other to avoid such issues. 
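If you're not sure which database edition and version an existing machine is already running, it's worth confirming before going any further. A minimal command-line check (this assumes you can connect locally as a privileged user; adjust the login for your environment):

sqlplus / as sysdba
SQL> select banner from v$version;

The first line of the output names the edition (Express, Standard, or Enterprise) and the exact version number, which you can compare against the supported versions listed above.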
If you need to uninstall XE, make sure that you follow the instructions in Oracle Database Express Edition Installation Guide 10g Release 2 (10.2) for Microsoft Windows Part Number B25143-03, Section 7, Deinstalling Oracle Database XE (http://download.oracle.com/docs/cd/B25329_01/doc/install.102/b25143/toc.htm). If you need to uninstall 10.2, be sure to follow the instructions in Oracle Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit) Part Number B14316-04, Section 6, Removing Oracle Database Software (http://download.oracle.com/docs/cd/B19306_01/install.102/b14316/deinstall.htm). Optional: Install OracleXEUniv.exe—recommended for a small footprint database. Make sure that you read step 1 above before installing. You can get XE from here: http://www.oracle.com/technology/products/database/xe/index.html. When you are using XE, you will see a warning when you install the database schema that this database version is too old. You can safely ignore this warning as it applies only to production environments. If needed, configure Oracle XE Universal. When you are using Oracle XE, you must update database parameters if you have never done this for your database installation. You only have to do this once after installing. Set the processes parameter to >=200 using the following instructions.

C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1\bin\setSOADomainEnv.cmd

sqlplus sys/welcome1@XE as sysdba
SQL> show parameter session
SQL> show parameter processes
SQL> alter system reset sessions scope=spfile sid='*';
SQL> alter system set processes=200 scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter session
SQL> show parameter processes

The shutdown command can take a few minutes and sometimes the shutdown/startup command fails. In that case, simply restart the XE service in the Control Panel | Administrative Tools | Services dialog after setting up your parameters. Checking your browser Oracle SOA Suite 11gR1 has specific browser version requirements. Enterprise Manager requires Firefox 3 or IE 7. Firefox 3—get a portable version, such as the one available from http://portableapps.com, if you want it to co-exist peacefully with your Firefox 2 installation. Firefox 2 and IE 6 are not supported and will not work. BAM requires IE 7. Beware of certain IE 7 plugins that can create conflicts (a few search plugins have proved to be incompatible with BAM). IE 8 is not supported with 11gR1 (but is on the roadmap). IE 6 has a few issues and Firefox will not work with BAM Studio. Checking your JDK If you are going to install WebLogic server and JDeveloper on the same machine, you will use the JDK from WebLogic for JDeveloper too. However, if you are going to install on two machines, you need Java 1.6 update 11 JDK for JDeveloper. JDK 1.6 update 11—from the Sun downloads page: http://java.sun.com/products/archive/ You must use Java 1.6 update 11. Update 12 does not work.
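Before pointing JDeveloper at a JDK, it's also worth confirming which Java version the machine resolves on the command line. A quick, purely illustrative check (the exact build details in your output will differ; the version string is what matters):

C:\> java -version
java version "1.6.0_11"

If this reports anything other than 1.6.0_11, install the correct JDK before continuing.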

Guidelines for Creating Responsive Forms

Packt
23 Oct 2015
12 min read
In this article by Chelsea Myers, the author of the book, Responsive Web Design Patterns, covers the guidelines for creating responsive forms. Online forms are already modular. Because of this, they aren't hard to scale down for smaller screens. The little boxes and labels can naturally shift around between different screen sizes since they are all individual elements. However, form elements are naturally tiny and very close together. Small elements that you are supposed to click and fill in, whether on a desktop or mobile device, pose obstacles for the user. If you developed a form for your website, you more than likely want people to fill it out and submit it. Maybe the form is a survey, a sign up for a newsletter, or a contact form. Regardless of the type of form, online forms have a purpose; get people to fill them out! Getting people to do this can be difficult at any screen size. But when users are accessing your site through a tiny screen, they face even more challenges. As designers and developers, it is our job to make this process as easy and accessible as possible. Here are some guidelines to follow when creating a responsive form: Give all inputs breathing room. Use proper values for input's type attribute. Increase the hit states for all your inputs. Stack radio inputs and checkboxes on small screens. Together, we will go over each of these guidelines and see how to apply them. (For more resources related to this topic, see here.) The responsive form pattern Before we get started, let's look at the markup for the form we will be using. We want to include a sample of the different input options we can have. Our form will be very basic and requires simple information from the users such as their name, e-mail, age, favorite color, and favorite animal. HTML: <form> <!—- text input --> <label class="form-title" for="name">Name:</label> <input type="text" name="name" id="name" /> <!—- email input --> <label class="form-title" for="email">Email:</label> <input type="email" name="email" id="email" /> <!—- radio boxes --> <label class="form-title">Favorite Color</label> <input type="radio" name="radio" id="red" value="Red" /> <label>Red</label> <input type="radio" name="radio" id="blue" value="Blue" /><label>Blue</label> <input type="radio" name="radio" id="green" value="Green" /><label>Green</label> <!—- checkboxes --> <label class="form-title" for="checkbox">Favorite Animal</label> <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label> <input type="checkbox" name="checkbox" id="cat" value="Cat" /><label>Cat</label> <input type="checkbox" name="checkbox" id="other" value="Other" /><label>Other</label> <!—- drop down selection --> <label class="form-title" for="select">Age:</label> <select name="select" id="select"> <option value="age-group-1">1-17</option> <option value="age-group-2">18-50</option> <option value="age-group-3">&gt;50</option> </select> <!—- textarea--> <label class="form-title" for="textarea">Tell us more:</label> <textarea cols="50" rows="8" name="textarea" id="textarea"></textarea> <!—- submit button --> <input type="submit" value="Submit" /> </form> With no styles applied, our form looks like the following screenshot: Several of the form elements are next to each other, making the form hard to read and almost impossible to fill out. Everything seems tiny and squished together. We can do better than this. We want our forms to be legible and easy to fill. Let's go through the guidelines and make this eyesore of a form more approachable. 
#1 Give all inputs breathing room In the preceding screenshot, we can't see when one form element ends and the other begins. They are showing up as inline, and therefore displaying on the same line. We don't want this though. We want to give all our form elements their own line to live on and not share any space to the right of each type of element. To do this, we add display: block to all our inputs, selects, and text areas. We also apply display:block to our form labels using the class .form-title. We will be going more into why the titles have their own class for the fourth guideline. CSS: input[type="text"], input[type="email"], textarea, select { display: block; margin-bottom: 10px; } .form-title { display:block; font-weight: bold; } As mentioned, we are applying display:block to text and e-mail inputs. We are also applying it to textarea and select elements. Just having our form elements display on their own line is not enough. We also give everything a margin-bottom element of 10px to give the elements some breathing room between one another. Next, we apply display:block to all the form titles and make them bold to add more visual separation. #2 Use proper values for input's type attribute Technically, if you are collecting a password from a user, you are just asking for text. E-mail, search queries, and even phone numbers are just text too. So, why would we use anything other than <input type="text"…/>? You may not notice the difference on your desktop computer between these form elements, but the change is the biggest on mobile devices. To show you, we have two screenshots of what the keyboard looks like on an iPhone while filling out the text input and the e-mail input: In the left image, we are focused on the text input for entering your name. The keyboard here is normal and nothing special. In the right image, we are focused on the e-mail input and can see the difference on the keyboard. As the red arrow points out, the @ key and the . key are now present when typing in the e-mail input. We need both of those to enter in a valid e-mail, so the device brings up a special keyboard with those characters. We are not doing anything special other than making sure the input has type="email" for this to happen. This works because type="email" is a new HTML5 attribute. HTML5 will also validate that the text entered is a correct email format (which used to be done with JavaScript). Here are some other HTML5 type attribute values from the W3C's third HTML 5.1 Editor's Draft (http://www.w3.org/html/wg/drafts/html/master/semantics.html#attr-input-type-keywords): color date datetime email month number range search tel time url week #3 Increase the hit states for all your inputs It would be really frustrating for the user if they could not easily select an option or click a text input to enter information. Making users struggle isn't going to increase your chance of getting them to actually complete the form. The form elements are naturally very small and not large enough for our fingers to easily tap. Because of this, we should increase the size of our form inputs. Having form inputs to be at least 44 x 44 px large is a standard right now in our industry. This is not a random number either. Apple suggests this size to be the minimum in their iOS Human Interface Guidelines, as seen in the following quote: "Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 x 44 points." 
As you can see, this does not apply to only form elements. Apple's suggest is for all clickable items. Now, this number may change along with our devices' resolutions in the future. Maybe it will go up or down depending on the size and precision of our future technology. For now though, it is a good place to start. We need to make sure that our inputs are big enough to tap with a finger. In the future, you can always test your form inputs on a touchscreen to make sure they are large enough. For our form, we can apply this minimum size by increasing the height and/or padding of our inputs. CSS: input[type="text"], input[type="email"], textarea, select { display: block; margin-bottom: 10px; font-size: 1em; padding:5px; min-height: 2.75em; width: 100%; max-width: 300px; } The first two styles are from the first guideline. After this, we are increasing the font-size attribute of the inputs, giving the inputs more padding, and setting a min-height attribute for each input. Finally, we are making the inputs wider by setting the width to 100%, but also applying a max-width attribute so the inputs do not get too unnecessarily wide. We want to increase the size of our submit button as well. We definitely don't want our users to miss clicking this: input[type="submit"] { min-height: 3em; padding: 0 2.75em; font-size: 1em; border:none; background: mediumseagreen; color: white; } Here, we are also giving the submit button a min-height attribute, some padding, and increasing the font-size attribute. We are striping the browser's native border style on the button with border:none. We also want to make this button very obvious, so we apply a mediumseagreen color as the background and white text color. If you view the form so far in the browser, or look at the image, you will see all the form elements are bigger now except for the radio inputs and checkboxes. Those elements are still squished together. To make our radio and checkboxes bigger in our example, we will make the option text bigger. Doesn't it make sense that if you want to select red as your favorite color, you would be able to click on the word "red" too, and not just the box next to the word? In the HTML for the radio inputs and the checkboxes, we have markup that looks like this: <input type="radio" name="radio" id="red" value="Red" /><label>Red</label> <input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label> To make the option text clickable, all we have to do is set the for attribute on the label to match the id attribute of the input. We will wrap the radio and checkbox inputs inside of their labels so that we can easily stack them for guideline #4. We will also give the labels a class of choice to help style them. <label class="choice" for="red"><input type="radio" name="radio" id="red" value="Red" />Red</label> <label class="choice" for="dog"><input type="checkbox" name="checkbox" id="dog" value="Dog" />Dog</label> Now, the option text and the actual input are both clickable. After doing this, we can apply some more styles to make selecting a radio or checkbox option even easier: label input { margin-left: 10px; } .choice { margin-right: 15px; padding: 5px 0; } .choice + .form-title { margin-top: 10px; } With label input, we are giving the input and the label text a little more space between each other. Then, using the .choice class, we are spreading out each option with margin-right: 15px and making the hit states bigger with padding: 5px 0. 
Finally, with .choice + .form-title, we are giving any .form-title element that comes after an element with a class of .choice more breathing room. This is going back to the responsive form guideline #1. There is only one last thing we need to do. On small screens, we want to stack the radio and checkbox inputs. On large screens, we want to keep them inline. To do this, we will add display:block to the .choice class. We will then be using a media query to change it back: @media screen and (min-width: 600px){ .choice { display: inline; } } With each input on its own line for smaller screens, they are easier to select. But we don't need to take up all that vertical space on wider screens. With this, our form is done. You can see our finished form, as shown in the following screenshot: Much better, wouldn't you say? No longer are all the inputs tiny and mushed together. The form is easy to read, tap, and begin entering in information. Filling in forms is not considered a fun thing to do, especially on a tiny screen with big thumbs. But there are ways in which we can make the experience easier and a little more visually pleasing. Summary A classic user experience challenge is to design a form that encourages completion. When it comes to fact, figures, and forms, it can be hard to retain the user's attention. This does not mean it is impossible. Having a responsive website does make styling tables and forms a little more complex. But what is the alternative? Nonresponsive websites make you pinch and zoom endlessly to fill out a form or view a table. Having a responsive website gives you the opportunity to make this task easier. It takes a little more code, but in the end, your users will greatly benefit from it. With this article, we have wrapped up guidelines for creating responsive forms. Resources for Article: Further resources on this subject: Securing and Authenticating Web API [article] Understanding CRM Extendibility Architecture [article] CSS3 – Selectors and nth Rules [article]


Exceptions and Logging in Apache Struts 2

Packt
16 Oct 2009
8 min read
Handling exceptions in Struts 2 Struts 2 provides a declarative exception handling mechanism that can be configured globally (for an entire package), or for a specific action. This capability can reduce the amount of exception handling code necessary inside actions under some circumstances, most notably when underlying systems, such as our services, throw runtime exceptions (exceptions that we don't need to wrap in a try/catch or declare that a method throws). To sum it up, we can map exception classes to Struts 2 results. The exception handling mechanism depends on the exception interceptor. If we modify our interceptor stack, we must keep that in mind. In general, removing the exception interceptor isn't preferred. Global exception mappings Setting up a global exception handler result is as easy as adding a global exception mapping element to a Struts 2 configuration file package definition and configuring its result. For example, to catch generic runtime exceptions, we could add the following: <global-exception-mappings><exception-mapping result="runtime"exception="java.lang.RuntimeException"/></global-exception-mappings> This means that if a java.lang.RuntimeException (or a subclass) is thrown, the framework will take us to the runtime result. The runtime result may be declared in an element, an action configuration, or both. The most specific result will be used. This implies that an action's result configuration might take precedence over a global exception mapping. For example, consider the global exception mapping shown in the previous code snippet. If we configure an action as follows, and a RuntimeException is thrown, we'll see the locally defined runtime result, even if there is a global runtime result. <action name="except1"class="com.packt.s2wad.ch09.examples.exceptions.Except1"><result name="runtime">/WEB-INF/jsps/ch9/exceptions/except1-runtime.jsp</result>... This can occasionally lead to confusion if a result name happens to collide with a result used for an exception. However, this can happen with global results anyway (a case where a naming convention for global results can be handy). Action-specific exception mappings In addition to overriding the result used for an exception mapping, we can also override a global exception mapping on a per-action basis. For example, if an action needs to use a result named runtime2 as the destination of a RuntimeException, we can configure an exception mapping specific to that action. <action name="except2"class="com.packt.s2wad.ch09.examples.exceptions.Except1"><exception-mapping result="runtime2"exception="java.lang.RuntimeException"/>... As with our earlier examples, the runtime2 result may be configured either as a global result or as an action-specific result. Accessing the exception We have many options regarding how to handle exceptions. We can show the user a generic "Something horrible has happened!" page, we can take the user back and allow them to retry the operation or refill the input form, and so on. The appropriate course of action depends on the application and, most likely, on the type of exception. We can display exception-specific information as well. The exception interceptor pushes an exception encapsulation object onto the stack with the exception and exceptionStack properties. 
While the stack trace is probably not appropriate for user-level error pages, the exception can be used to help create a useful error message, provide I18N property keys for messages (or values used in messages), suggest possible remedies, and so on. The simplest example of accessing the exception property from our JSP is to simply display the exception message. For example, if we threw a RuntimeException, we might create it as follows:

throw new RuntimeException("Runtime thrown from ThrowingAction");

Our exception result page, then, could access the message using the usual property tag (or JSTL, if we're taking advantage of Struts 2's custom request processor):

<s:property value="exception.message"/>

The underlying action is still available on the stack—it's the next object on the value stack. It can be accessed from the JSP as usual, as long as we're not trying to access properties named exception or exceptionStack, which would be masked by the exception holder. (We can still access an action property named exception using OGNL's immediate stack index notation—[1].exception.)

Architecting exceptions and exception handling

We have pretty good control over what is displayed for our application exceptions. It is customizable based on exception type, and may be overridden on a per-action basis. However, to make use of this flexibility, we need a well-thought-out exception policy in our application. There are some general principles we can follow to make this easier.

Checked versus unchecked exceptions

Before we start, let's recall that Java offers two main types of exceptions—checked and unchecked. Checked exceptions are exceptions we must either declare with a throws keyword or wrap in a try/catch block. Unchecked exceptions are RuntimeExceptions or its subclasses. It isn't always clear which type we should use when writing our code or creating our exceptions. It's been the subject of much debate over the years, but some guidelines have become apparent.

Checked exceptions aren't always worth the aggravation they cause, but they may be useful when the programmer has a reasonable chance of recovering from the exception. One issue with checked exceptions is that unless they're caught and wrapped in a more abstract exception (coming up next), we're actually circumventing some of the benefits of encapsulation: when exceptions are declared as being thrown all the way up a call hierarchy, all of the classes involved are forced to know something about the class throwing the exception. It's relatively rare that this exposure is justifiable.

Application-specific exceptions

One of the more useful exception techniques is to create application-specific exception classes. A compelling feature of providing our own exception classes is that we can include useful diagnostic information in the exception class itself. These classes are like any other Java class. They can contain methods, properties, and constructors. For example, let's assume a service that throws an exception when the user calling the service doesn't have access rights to the service. One way to create and throw this exception would be as follows:

throw new RuntimeException("User " + user.getId()
    + " does not have access to the 'update' service.");

However, there are some issues with this approach. It's awkward from the Struts 2 standpoint. Because it's a plain RuntimeException, we have only one option for handling the exception—mapping RuntimeException to a result.
Yes, we could map the exception type per-action, but that gets unwieldy. It also doesn't help if we need to map two different types of RuntimeExceptions to two different results. Another potential issue would arise if we had a process that examined exceptions and did something useful with them. For example, we might send an email with user details based on the above exception. This would amount to parsing the exception message, pulling out the user ID, and using it to get user details for inclusion in the email. This is where we'd need to create an exception class of our own, subclassed from RuntimeException. The class would encapsulate exception-related information and provide a mechanism to differentiate between the different types of exceptions.

A third benefit comes when we wrap lower-level exceptions—for example, a Spring-related exception. Rather than create a Spring dependency up the entire call chain, we'd wrap it in our own exception, abstracting the lower-level exception. This allows us to change the underlying implementation and aggregate differing exception types under one (or more) application-specific exceptions.

One way of creating the above scenario would be to create an exception class that takes a User object and a message as its constructor arguments:

package com.packt.s2wad.ch09.exceptions;

public class UserAccessException extends RuntimeException {
    private User user;
    private String msg;

    public UserAccessException(User user, String msg) {
        this.user = user;
        this.msg = msg;
    }

    public String getMessage() {
        return "User " + user.getId() + " " + msg;
    }
}

We can now create an exception mapping for a UserAccessException (as well as a generic RuntimeException if we need it). In addition, the exception carries along with it the information needed to create useful messages:

throw new UserAccessException(user,
    "does not have access to the 'update' service.");

This "self-documenting" code could be made even safer, in the sense of ensuring that it's only used in the ways in which it is intended. We could add an enum to the class to encapsulate the reasons the exception can be thrown, including the text for each reason. We'll add the following inside our UserAccessException:

public enum Reason {
    NO_ROLE("does not have role"),
    NO_ACCESS("does not have access");

    private String message;

    private Reason(String message) {
        this.message = message;
    }

    public String getMessage() { return message; }
};

We'll also modify the constructor and getMessage() method to use the new Reason enumeration.

public UserAccessException(User user, Reason reason) {
    this.user = user;
    this.reason = reason;
}

public String getMessage() {
    return String.format("User %d %s.",
        user.getId(), reason.getMessage());
}

Now, when we throw the exception, we explicitly know that we're using the exception class correctly (at least type-wise). The string message for each of the exception reasons is encapsulated within the exception class itself.

throw new UserAccessException(user,
    UserAccessException.Reason.NO_ACCESS);

With Java 5's static imports, it might make even more sense to create static helper methods in the exception class, leading to concise but understandable code:

throw userHasNoAccess(user);
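The article stops at that call site; a static factory along the following lines (an assumed implementation, not shown in the original text) is what would make the one-line throw possible:

// Hypothetical static factory added to UserAccessException, pairing the user
// with the Reason.NO_ACCESS constant defined above.
public static UserAccessException userHasNoAccess(User user) {
    return new UserAccessException(user, Reason.NO_ACCESS);
}

With a static import, the calling code stays concise:

import static com.packt.s2wad.ch09.exceptions.UserAccessException.userHasNoAccess;

// ... inside a service method that detects the access violation:
throw userHasNoAccess(user);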

Styling the Forms

Packt
20 Nov 2013
8 min read
CSS3 for web forms

CSS3 opens up a wealth of new possibilities and allows us to style better web forms. It gives us a number of new ways to create an impact with our form designs, with quite a few important changes. HTML5 introduced useful new form elements such as sliders and spinners alongside old elements such as textbox and textarea, and we can make them all look really cool with our innovation and CSS3. Using CSS3, we can turn an old and boring form into a modern, cool, and eye-catching one. CSS3 is completely backwards compatible, so we will not have to change our existing form designs; browsers have and will always support CSS2.

CSS3 forms can be split up into modules. Some of the most important CSS3 modules are:

Selectors (with pseudo-selectors)
Backgrounds and Borders
Text (with Text Effects)
Fonts
Gradients

Styling of forms always varies with requirements and the innovation of the web designer or developer. In this article, we will look at those CSS3 properties with which we can style our forms and give them a rich and elegant look.

Some of the new CSS3 properties required vendor prefixes, which were used frequently because they helped browsers read the code. In general, they are no longer needed for some of the properties, such as border-radius, but they still come into action when a browser doesn't interpret the unprefixed code. A list of the vendor prefixes for the major browsers is given as follows:

-moz-: Firefox
-webkit-: WebKit browsers such as Safari and Chrome
-o-: Opera
-ms-: Internet Explorer

Before we start styling the form, let us have a quick revision of the form modules for a better understanding and styling of the forms.

Selectors and pseudo-selectors

A selector is a pattern used to select the elements we want to style. A selector can contain one or more simple selectors separated by combinators. The CSS3 Selectors module introduces three new attribute selectors, grouped together under the heading Substring Matching Attribute Selectors:

[att^=val]: The "begins with" selector
[att$=val]: The "ends with" selector
[att*=val]: The "contains" selector

The first of these new selectors, which we will refer to as the "begins with" selector, allows the selection of elements where a specified attribute (for example, the href attribute of a hyperlink) begins with a specified string (for example, http://, https://, or mailto:). In the same way, the two additional new selectors, which we will refer to as the "ends with" and "contains" selectors, allow the selection of elements where a specified attribute either ends with or contains a specified string, respectively.

A CSS pseudo-class is an additional keyword on a selector that specifies a special state of the element to be selected. For example, :hover will apply a style when the user hovers over the element specified by the selector. Pseudo-classes, along with pseudo-elements, apply a style to an element not only in relation to the content of the document tree, but also in relation to external factors like the history of the navigator, such as :visited, and the status of its content, such as :checked, on some form elements. The new pseudo-classes are as follows:

:last-child: Matches an element that is the last child element of its parent element.
:first-child: Matches an element that is the first child element of its parent element.
:checked: Matches elements such as radio buttons or checkboxes that are checked.
:first-of-type: Matches the first child element of the specified element type.
:last-of-type: Matches the last child element of the specified element type.
:nth-last-of-type(N): Matches the Nth child element of the specified element type, counting from the last.
:only-child: Matches an element if it's the only child element of its parent.
:only-of-type: Matches an element that is the only child element of its type.
:root: Matches the element that is the root element of the document.
:empty: Matches elements that have no children.
:target: Matches the current active element that is the target of an identifier in the document's URL.
:enabled: Matches user interface elements that are enabled.
:nth-child(N): Matches every Nth child element of the parent.
:nth-of-type(N): Matches every Nth child element of the specified element type within the parent.
:disabled: Matches user interface elements that are disabled.
:not(S): Matches elements that aren't matched by the specified selector.
:nth-last-child(N): Matches every Nth child element of the parent, counting from the last child.

Backgrounds

CSS3 contains several new background properties; moreover, some changes have also been made to the previous background properties, allowing greater control over the background of an element. The new background properties added are as follows.

The background-clip property

The background-clip property is used to determine the allowable area for the background image. If there is no background image, this property has only visual effects, such as when the border has transparent or partially opaque regions; otherwise, the border covers up the difference.

Syntax

The syntax for the background-clip property is as follows:

background-clip: no-clip / border-box / padding-box / content-box;

Values

The values for the background-clip property are as follows:

border-box: With this, the background extends to the outside edge of the border.
padding-box: With this, no background is drawn below the border.
content-box: With this, the background is painted within the content box; only the area the content covers is painted.
no-clip: This is the default value, same as border-box.

The background-origin property

The background-origin property specifies the positioning of the background image or color with respect to the background-position property. This property has no effect if the background-attachment property for the background image is fixed.

Syntax

The following is the syntax for the background-origin property:

background-origin: border-box / padding-box / content-box;

Values

The values for the background-origin property are as follows:

border-box: With this, the background extends to the outside edge of the border.
padding-box: With this, no background is drawn below the border.
content-box: With this, the background is painted within the content box.

The background-size property

The background-size property specifies the size of the background image. If this property is not specified, the original size of the image will be displayed.
Syntax

The following is the syntax for the background-size property:

background-size: length / percentage / cover / contain;

Values

The values for the background-size property are as follows:

length: This specifies the height and width of the background image. No negative values are allowed.
percentage: This specifies the height and width of the background image as a percentage of the parent element.
cover: This scales the background image to be as large as possible so that the background area is completely covered.
contain: This scales the image to the largest size such that its width and height can fit inside the content area.

Apart from adding new properties, CSS3 has also enhanced some old background properties, as follows.

The background-color property

If the underlying layer of the background image of the element cannot be used, we can specify a fallback color in addition to specifying a background color. We can implement this by adding a forward slash before the fallback color:

background-color: red / blue;

The background-repeat property

In CSS2, when a background image repeats, the last tile often gets cut off at the edge of the element. CSS3 introduces new values with which we can fix this problem:

space: An equal amount of space is applied between the image tiles until they fill the element.
round: The image is scaled down until the tiles fit the element exactly.

The background-attachment property

With the new possible value of local, we can now set the background to scroll when the element's content is scrolled. This comes into action with elements that can scroll. For example:

body {
  background-image: url('example.gif');
  background-repeat: no-repeat;
  background-attachment: fixed;
}

CSS3 also allows web designers and developers to use multiple background images, using nothing but a simple comma-separated list. For example:

background-image: url(abc.png), url(xyz.png);

Summary

In this article, we learned about the basics of CSS3 and the modules into which we can categorize CSS3 for forms, such as backgrounds. Using this, we can improve the look and feel of a form, making it more effective and attractive.

Resources for Article:

Further resources on this subject:
HTML5 Canvas [Article]
Building HTML5 Pages from Scratch [Article]
HTML5 Presentations - creating our initial presentation [Article]

Displaying Posts and Pages Using Wordpress Loop

Packt
28 Jun 2010
12 min read
The Loop is the basic building block of WordPress template files. You'll use The Loop when displaying posts and pages, both when you're showing multiple items and when you're showing a single one. Inside of The Loop you use WordPress template tags to render information in whatever manner your design requires. WordPress provides the data required for a default Loop on every single page load. In addition, you're able to create your own custom Loops that display the post and page information that you need. This power allows you to create advanced designs that require a variety of information to be displayed. This article will cover both basic and advanced Loop usage, and you'll see exactly how to use this most basic WordPress structure.

Creating a basic Loop

The Loop nearly always takes the same basic structure. In this recipe, you'll become acquainted with this structure, find out how The Loop works, and get up and running in no time.

How to do it...

First, open the file in which you wish to iterate through the available posts. In general, you use The Loop in every single template file that is designed to show posts. Some examples include index.php, category.php, single.php, and page.php. Place your cursor where you want The Loop to appear, and then insert the following code:

<?php
if( have_posts() ) {
  while( have_posts() ) {
    the_post(); ?>
    <h2><?php the_title(); ?></h2>
<?php }
} ?>

Using the WordPress theme test data with the above Loop construct, you end up with something that looks similar to the example shown in the following screenshot:

Depending on your theme's styles, this output could obviously look very different. However, the important thing to note is that you've used The Loop to iterate over available data from the system and then display pieces of that data to the user in the way that you want to. From here, you can use a wide variety of template tags in order to display different information depending on the specific requirements of your theme.

How it works...

A deep understanding of The Loop is paramount to becoming a great WordPress designer and developer, so you should understand each of the items in the above code snippet fairly well. First, you should recognize that this is just a standard while loop with a surrounding if conditional. There are some special WordPress functions that are used in these two items, but if you've done any PHP programming at all, you should be intimately familiar with the syntax here. If you haven't experienced programming in PHP, then you might want to check out the syntax rules for if and while constructs at http://php.net/if and http://php.net/while, respectively.

The next thing to understand about this generic loop is that it depends directly on the global $wp_query object. $wp_query is created when the request is parsed, request variables are found, and WordPress figures out the posts that should be displayed for the URL that a visitor has arrived from. $wp_query is an instance of the WP_Query object, and the have_posts and the_post functions delegate to methods on that object. The $wp_query object holds information about the posts to be displayed and the type of page being displayed (normal listing, category archive, date archive, and so on). When have_posts is called in the if conditional above, the $wp_query object determines whether any posts matched the request that was made, and if so, whether there are any posts that haven't been iterated over.
If there are posts to display, a while construct is used that again checks the value of have_posts. During each iteration of the while loop, the the_post function is called. the_post sets an index on $wp_query that indicates which posts have been iterated over. It also sets up several global variables, most notably $post. Inside of The Loop, the the_title function uses the global $post variable that was set up in the_post to produce the appropriate output based on the currently-active post item. This is basically the way that all template tags work. If you're interested in further information on how the WP_Query class works, you should read the documentation about it in the WordPress Codex at http://codex.wordpress.org/Function_Reference/WP_Query. You can find more information about The Loop at http://codex.wordpress.org/The_Loop.

Displaying ads after every third post

If you're looking to display ads on your site, one of the best places to do it is mixed in with your main content. This will cause visitors to view your ads while they're engaged with your work, often resulting in higher click-through rates and better paydays for you.

How to do it...

First, open the template in which you wish to display advertisements while iterating over the available posts. This will most likely be a listing template file like index.php or category.php. Decide on the number of posts that you wish to display between advertisements. Place your cursor where you want your loop to appear, and then insert the following code:

<?php
if( have_posts() ) {
  $ad_counter = 0;
  $after_every = 3;
  while( have_posts() ) {
    $ad_counter++;
    the_post(); ?>
    <h2><?php the_title(); ?></h2>
<?php
    // Display ads
    $ad_counter = $ad_counter % $after_every;
    if( 0 == $ad_counter ) {
      echo '<h2 style="color:red;">Advertisement</h2>';
    }
  }
} ?>

If you've done everything correctly, and are using the WordPress theme test data, you should see something similar to the example shown in the following screenshot:

Obviously, the power here comes when you mix in paying ads or images that link to products that you're promoting. Instead of a simple heading element for the Advertisement text, you could dynamically insert JavaScript or Flash elements that pull in advertisements for you.

How it works...

As with the basic Loop, this code snippet iterates over all available posts. In this recipe, however, a counter variable is declared that counts the number of posts that have been iterated over. Every time that a post is about to be displayed, the counter is incremented to track that another post has been rendered. After every third post, the advertisement code is displayed because the value of the $ad_counter variable is equal to 0. It is very important to put the conditional check and display code after the post has been displayed. Also, notice that the $ad_counter variable will never be greater than 3 because the modulus operator (%) is applied every time through The Loop. Finally, if you wish to change the frequency of the ad display, simply modify the $after_every variable from 3 to whatever number of posts you want to display between ads.

Removing posts in a particular category

Sometimes you'll want to make sure that posts from a certain category never implicitly show up in the Loops that you're displaying in your template. The category could be a special one that you use to denote portfolio pieces, photo posts, or whatever else you wish to remove from regular Loops.

How to do it...
First, you have to decide which category you want to exclude from your Loops. Note the name of the category, and then open or create your theme's functions.php file. Your functions.php file resides inside of your theme's directory and may contain some other code. Inside of functions.php, insert the following code:

add_action('pre_get_posts', 'remove_cat_from_loops');

function remove_cat_from_loops( $query ) {
  if(!$query->get('suppress_filters')) {
    $cat_id = get_cat_ID('Category Name');
    $excluded_cats = $query->get('category__not_in');
    if(is_array($excluded_cats)) {
      $excluded_cats[] = $cat_id;
    } else {
      $excluded_cats = array($cat_id);
    }
    $query->set('category__not_in', $excluded_cats);
  }
  return $query;
}

How it works...

In the above code snippet, you are excluding the category with the name Category Name. To exclude a different category, change the Category Name string to the name of the category you wish to remove from loops. You are filtering the WP_Query object that drives every Loop. Before any posts are fetched from the database, you dynamically change the value of the category__not_in variable in the WP_Query object. You append an additional category ID to the existing array of excluded category IDs to ensure that you're not undoing the work of some other developer. Alternatively, if the category__not_in variable is not an array, you assign it an array with a single item. Every category ID in the category__not_in array will be excluded from The Loop, because when the WP_Query object eventually makes a request to the database, it structures the query such that no posts contained in any of the categories identified in the category__not_in variable are fetched.

Please note that the denoted category will be excluded by default from all Loops that you create in your theme. If you want to display posts from the category that you've marked to exclude, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array(
  'cat' => get_cat_ID('Category Name'),
  'suppress_filters' => true
));

Removing posts with a particular tag

Similar to categories, it could be desirable to remove posts with a certain tag from The Loop. You may wish to do this if you are tagging certain posts as asides, or if you are saving posts that contain some text that needs to be displayed in a special context elsewhere on your site.

How to do it...

First, you have to decide which tag you want to exclude from your Loops. Note the name of the tag, and then open or create your theme's functions.php file. Inside of functions.php, insert the following code:

add_action('pre_get_posts', 'remove_tag_from_loops');

function remove_tag_from_loops( $query ) {
  if(!$query->get('suppress_filters')) {
    $tag_id = get_term_by('name','tag1','post_tag')->term_id;
    $excluded_tags = $query->get('tag__not_in');
    if(is_array( $excluded_tags )) {
      $excluded_tags[] = $tag_id;
    } else {
      $excluded_tags = array($tag_id);
    }
    $query->set('tag__not_in', $excluded_tags);
  }
  return $query;
}

How it works...

In the above code snippet, you are excluding the tag with the slug tag1. To exclude a different tag, change the string tag1 to the name of the tag that you wish to remove from all Loops. When deciding which tags to exclude, the WordPress system looks at a query parameter named tag__not_in, which is an array. In the above code snippet, the function appends the ID of the tag that should be excluded directly to the tag__not_in array.
Alternatively, if tag__not_in isn't already initialized as an array, it is assigned an array with a single item, consisting of the ID for the tag that you wish to exclude. After that, all posts with that tag will be excluded from WordPress Loops.

Please note that the chosen tag will be excluded, by default, from all Loops that you create in your theme. If you want to display posts from the tag that you've marked to exclude, then you need to set the suppress_filters parameter to true when querying for posts, as follows:

query_posts( array(
  'tag' => get_term_by('name','tag1','post_tag')->term_id,
  'suppress_filters' => true
));

Highlighting sticky posts

Sticky posts are a feature added in version 2.7 of WordPress and can be used for a variety of purposes. The most frequent use is to mark posts that should be "featured" for an extended period of time. These posts often contain important information or highlight things (like a product announcement) that the blog author wants to display in a prominent position for a long period of time.

How to do it...

First, place your cursor inside of a Loop where you're displaying posts and want to single out your sticky content. Inside The Loop, after a call to the_post, insert the following code:

<?php
if(is_sticky()) { ?>
  <div class="sticky-announcer">
    <p>This post is sticky.</p>
  </div>
<?php } ?>

Create a sticky post on your test blog and take a look at your site's front page. You should see text appended to the sticky post, and the post should be moved to the top of The Loop. You can see this in the following screenshot:

How it works...

The is_sticky function checks the currently-active post to see if it is a sticky post. It does this by examining the value retrieved by calling get_option('sticky_posts'), which is an array, and trying to find the active post's ID in that array. In this case, if the post is sticky then the sticky-announcer div is output with a message. However, there is no limit to what you can do once you've determined if a post is sticky. Some ideas include:

Displaying a special icon for sticky posts
Changing the background color of sticky posts
Adding content dynamically to sticky posts
Displaying post content differently for sticky posts

Packaging and publishing an Oracle JET Hybrid mobile application

Vijin Boricha
17 Apr 2018
4 min read
Today, we will learn how to package and publish an Oracle JET mobile application on the Apple App Store and the Google Play Store. We can package and publish Oracle JET-based hybrid mobile applications to the Google Play or Apple App stores, using framework support and third-party tools.

Packaging a mobile application

An Oracle JET hybrid mobile application can be packaged with the help of the Grunt build release commands, as described in the following steps:

1. Issue the Grunt release command with the desired platform as follows:

grunt build:release --platform={ios|android}

2. Code sign the application based on the platform in the buildConfig.json component.

Note: Further details regarding code signing per platform are available at:
Android: https://cordova.apache.org/docs/en/latest/guide/platforms/android/tools.html
iOS: https://cordova.apache.org/docs/en/latest/guide/platforms/ios/tools.html

3. We can pass the code sign details and rebuild the application using the following command:

grunt build:release --platform={ios|android} --buildConfig=path/buildConfig.json

4. The application can be tested after the preceding changes using the following serve command:

grunt serve:release --platform=ios|android [--web=true --serverPort=server-port-number --livereloadPort=live-reload-port-number --destination=emulator-name|device] --buildConfig=path/buildConfig.json

Publishing a mobile application

Publishing an Oracle JET hybrid mobile application follows the platform-specific guidelines. Each platform has defined certain standards and procedures to distribute an app on the respective platform.

Publishing on an iOS platform

The steps involved in publishing iOS applications include:

Enrolling in the Apple Developer Program to distribute the app.
Adding advanced, integrated services based on the application type.
Preparing our application with approval and configuration according to iOS standards.
Testing our app on numerous devices and application releases.
Submitting and releasing the application as a mobile app in the store. Alternatively, we can also distribute the app outside the store.

The iOS distribution process is represented in the following diagram:

Please note that the process for distributing applications on iOS presented in the preceding diagram is the latest as of writing this chapter. It may be altered by the iOS team at a later point.

Note: For the latest iOS app distribution procedure, please refer to the official iOS documentation at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html

Publishing on an Android platform

There are multiple approaches to publishing our application on the Android platform. The following are the steps involved in publishing the app on Android:

1. Preparing the app for release. We need to perform the following activities to prepare an app for release:
Configuring the app for release, including logs and manifests.
Testing the release version of the app on multiple devices.
Updating the application resources for the release.
Preparing any remote applications or services the app interacts with.

2. Releasing the app to the market through Google Play. We need to perform the following activities to release an app through Google Play:
Prepare any promotional documentation required for the app.
Configure all the default options and prepare the components.
Publish the prepared release version of the app to Google Play.

3. Alternately, we can publish the application through email, or through our own website, for users to download and install.

The steps involved in publishing Android applications are shown in the following diagram:

Please note that the process for distributing applications on Android presented in the preceding diagram is the latest as of writing this chapter. It may be altered by the Android team at a later point.

Note: For the latest Android app distribution procedure, please refer to the official Android documentation at http://developer.android.com/tools/publishing/publishing_overview.html#publishing-release.

You enjoyed an excerpt from Oracle JET for Developers written by Raja Malleswara Rao Pattamsetti. With this book, you will learn to leverage Oracle JavaScript Extension Toolkit (JET) to develop efficient client-side applications.

Auditing Mobile Applications
Creating and configuring a basic mobile application