
How-To Tutorials - Web Development

1802 Articles

Using An Object Oriented Approach for Implementing PHP Classes to Interact with Oracle

Packt
22 Oct 2009
8 min read
Before you start developing object-oriented solutions with PHP 5, it is important to understand that its object model provides more features than PHP 4's object model. Like most object-oriented languages, PHP 5 allows the developer to take advantage of interfaces, abstract classes, private/public/protected access modifiers, static members and methods, exception handling, and other features that were not available in PHP 4. But perhaps the most important thing to note about the object-oriented improvements of PHP 5 is that objects are now referenced by handle, and not by value.

Building Blocks of Applications

As you no doubt know, the fundamental building block in any object-oriented language is a structure called a class. A class is a template for an object. It describes the data and behavior of its instances (objects). During run time, an application can create as many instances of a single class as necessary. The following diagram conceptually depicts the structure of a class.

You might find it handy to think of an object-oriented application as a building made of blocks, where classes are those blocks. However, it is important to note that all blocks in this case are exchangeable. What this means is that if you are not satisfied with the implementation of a certain class, you can use a relevant class that has the same Application Programming Interface (API) but a different implementation instead. This allows you to increase the reusability and maintainability of your application without increasing the complexity. The intent of the example discussed in this section is to illustrate how you can rewrite the implementation of a class so that this doesn't require a change in the existing code that employs this class. In particular, you'll see how a custom PHP 4 class designed to interact with Oracle can be rewritten to use the new object-oriented features available in PHP 5.

Creating a Custom PHP Class from Scratch

To proceed with the example, you first need to create a PHP 4 class interacting with Oracle. Consider the following dbConn4 class:

<?php
//File: dbConn4.php
class dbConn4
{
  var $user;
  var $pswd;
  var $db;
  var $conn;
  var $query;
  var $row;
  var $exec_mode;

  function dbConn4($user, $pswd, $db, $exec_mode = OCI_COMMIT_ON_SUCCESS)
  {
    $this->user = $user;
    $this->pswd = $pswd;
    $this->db = $db;
    $this->exec_mode = $exec_mode;
    $this->GetConn();
  }

  function GetConn()
  {
    if (!$this->conn = OCILogon($this->user, $this->pswd, $this->db)) {
      $err = OCIError();
      trigger_error('Failed to establish a connection: ' . $err['message']);
    }
  }

  function query($sql)
  {
    if (!$this->query = OCIParse($this->conn, $sql)) {
      $err = OCIError($this->conn);
      trigger_error('Failed to parse SQL query: ' . $err['message']);
      return false;
    } else if (!OCIExecute($this->query, $this->exec_mode)) {
      $err = OCIError($this->query);
      trigger_error('Failed to execute SQL query: ' . $err['message']);
      return false;
    }
    return true;
  }

  function fetch()
  {
    if (!OCIFetchInto($this->query, $this->row, OCI_ASSOC)) {
      return false;
    }
    return $this->row;
  }
}
?>

In the above script, to define a class, you use the class keyword followed by the class name. Then, within curly braces, you define class properties and methods. Since this class is designed to work under both PHP 4 and PHP 5, all the class properties are defined with the var keyword. Declaring a property with var makes it publicly readable and writable. In PHP 5, you would use the public keyword instead.
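To illustrate the difference, here is a minimal sketch of the two declaration styles side by side (the class names are illustrative only; the full PHP 5 rewrite shown later in this article declares its properties as private rather than public):

<?php
// PHP 4 style: var makes every property publicly readable and writable
class props4
{
  var $user;
  var $pswd;
}

// PHP 5 style: the visibility is stated explicitly
class props5
{
  public $user;   // equivalent to var
  private $pswd;  // hidden from client code
}
?>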
In PHP 4, you define the class constructor as a function with the same name as the class itself. This still works in PHP 5 for backward compatibility. However, in PHP 5, it's recommended that you use __construct as the constructor name. In the above example, the class constructor is used to set the member variables of a class instance to the values passed to the constructor as parameters. Note the use of the self-referencing variable $this that is used here to access the member variables of the current class instance.

Within class methods, you can use $this, the special variable that points to the current instance of a class. This variable is created automatically during the execution of any object's method and can be used to access both member variables of the current instance and its methods.

Then, you call the GetConn method from within the constructor to obtain a connection to the database. You reference the method using the $this variable. In this example, the GetConn method is supposed to be called from within the constructor only. In PHP 5, you would declare this method as private. To obtain a connection to the database in this example, you use the OCILogon function. In PHP 5, you would use the oci_connect function instead.

The query method defined here takes an SQL string as the parameter and then parses and executes the query. It returns true on success or false on failure. This method is supposed to be called from outside an object. So, in PHP 5, you would declare it as public. Finally, you define the fetch method. You will call this method to fetch the results retrieved by a SELECT statement that has been executed with the query method.

Testing the Newly Created Class

Once written, the dbConn4 class discussed in the preceding section can be used in applications in order to establish a connection to an Oracle database and then issue queries against it as needed. To see this class in action, you might use the following PHP script. Assuming that you have saved the dbConn4 class as the dbConn4.php file, save the following script as select.php:

<?php
//File: select.php
require_once 'dbConn4.php';
require_once 'hrCred.php';

$db = new dbConn4($user, $pswd, $conn);
$sql = "SELECT FIRST_NAME, LAST_NAME FROM employees";
if ($db->query($sql)) {
  print 'Employee Names: ' . '<br />';
  while ($row = $db->fetch()) {
    print $row['FIRST_NAME'] . '&nbsp;';
    print $row['LAST_NAME'] . '<br />';
  }
}
?>

The above select.php script employs the employees table from the hr/hr demonstration schema. So, before you can execute this script, you must create the hrCred.php file that contains all the information required to establish a connection to your Oracle database using the HR account. The hrCred.php file should look as shown below (note that the connection string may vary depending on your configuration):

<?php
//File: hrCred.php
$user = "hr";
$pswd = "hr";
$conn = "(DESCRIPTION=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
  )
  (CONNECT_DATA=(SID=orcl)(SERVER=DEDICATED))
)";
?>

Once you have created the hrCred.php script, you can execute the select.php script. As a result, it should output the names of employees from the employees table in the hr/hr demonstration schema.

Taking Advantage of PHP 5's Object-Oriented Features

Turning back to the dbConn4 class, you may have noticed that it was written for PHP 4. Of course, it still can be used in new applications written for PHP 5.
However, to take advantage of the new object-oriented features available in PHP 5, you might want to rewrite this class as follows:

<?php
//File: dbConn5.php
class dbConn5
{
  private $user;
  private $pswd;
  private $db;
  private $conn;
  private $query;
  private $row;
  private $exec_mode;

  public function __construct($user, $pswd, $db, $exec_mode = OCI_COMMIT_ON_SUCCESS)
  {
    $this->user = $user;
    $this->pswd = $pswd;
    $this->db = $db;
    $this->exec_mode = $exec_mode;
    $this->GetConn();
  }

  private function GetConn()
  {
    if (!$this->conn = oci_connect($this->user, $this->pswd, $this->db)) {
      $err = oci_error();
      trigger_error('Failed to establish a connection: ' . $err['message']);
    }
  }

  public function query($sql)
  {
    if (!$this->query = oci_parse($this->conn, $sql)) {
      $err = oci_error($this->conn);
      trigger_error('Failed to parse SQL query: ' . $err['message']);
      return false;
    } else if (!oci_execute($this->query, $this->exec_mode)) {
      $err = oci_error($this->query);
      trigger_error('Failed to execute SQL query: ' . $err['message']);
      return false;
    }
    return true;
  }

  public function fetch()
  {
    if ($this->row = oci_fetch_assoc($this->query)) {
      return $this->row;
    } else {
      return false;
    }
  }
}
?>

As you can see, the implementation of the class has been improved to conform to the new standards of PHP 5. For instance, the above class takes advantage of encapsulation, which is accomplished in PHP 5, as in most other object-oriented languages, by means of access modifiers, namely public, protected, and private. The idea behind encapsulation is to enable the developer to design classes that reveal only the important members and methods and hide the internals. For instance, the GetConn method in the dbConn5 class is declared with the private modifier because this method is supposed to be called only from inside the constructor when a new instance of the class is initialized; therefore, there is no need to allow client code to access this method directly.

Since the implementation of the newly created dbConn5 class is different from the one used in dbConn4, you may be asking yourself: "Does that mean we need to rewrite the client code that uses the dbConn4 class as well?" The answer is obvious: you don't need to rewrite client code that uses the dbConn4 class, since you have changed neither the Application Programming Interface (API) of the class nor, more importantly, its functionality. Thus, all you need to do in order to make the select.php script work with dbConn5 is simply replace all the references to dbConn4 with references to dbConn5 throughout the script.
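In practice, that means the test script barely changes. The following is a minimal sketch of select.php updated to use dbConn5, assuming the same hrCred.php credentials file shown earlier:

<?php
//File: select.php (updated for dbConn5)
require_once 'dbConn5.php';
require_once 'hrCred.php';

// Only the class name changes; the API and the behavior stay the same
$db = new dbConn5($user, $pswd, $conn);
$sql = "SELECT FIRST_NAME, LAST_NAME FROM employees";
if ($db->query($sql)) {
  print 'Employee Names: ' . '<br />';
  while ($row = $db->fetch()) {
    print $row['FIRST_NAME'] . '&nbsp;';
    print $row['LAST_NAME'] . '<br />';
  }
}
?>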

Man, Do I Like Templates!

Packt
07 Jul 2015
22 min read
In this article by Italo Maia, author of the book Building Web Applications with Flask, we will discuss what Jinja2 is, and how Flask uses Jinja2 to implement the View layer and awe you. Be prepared!

What is Jinja2 and how is it coupled with Flask?

Jinja2 is a library found at http://jinja.pocoo.org/; you can use it to produce formatted text with bundled logic. Unlike the Python format function, which only allows you to replace markup with variable content, you can have a control structure, such as a for loop, inside a template string and use Jinja2 to parse it. Let's consider this example:

from jinja2 import Template

x = """
<p>Uncle Scrooge nephews</p>
<ul>
{% for i in my_list %}
<li>{{ i }}</li>
{% endfor %}
</ul>
"""
template = Template(x)
# output is a unicode string
print template.render(my_list=['Huey', 'Dewey', 'Louie'])

In the preceding code, we have a very simple example where we create a template string with a for loop control structure ("for tag", for short) that iterates over a list variable called my_list and prints the element inside an "li HTML tag" using the curly braces {{ }} notation. Notice that you could call render on the template instance as many times as needed with different key-value arguments, also called the template context.

A context variable may have any valid Python variable name, that is, anything in the format given by the regular expression [a-zA-Z_][a-zA-Z0-9_]*. For a full overview of regular expressions (Regex for short) with Python, visit https://docs.python.org/2/library/re.html. Also, take a look at this nice online tool for Regex testing: http://pythex.org/.

A more elaborate example would make use of an environment class instance, which is a central, configurable, extensible class that may be used to load templates in a more organized way. Do you follow where we are going here? This is the basic principle behind Jinja2 and Flask: it prepares an environment for you, with a few responsive defaults, and gets your wheels in motion.

What can you do with Jinja2?

Jinja2 is pretty slick. You can use it with template files or strings; you can use it to create formatted text, such as HTML, XML, Markdown, and e-mail content; you can put together templates, reuse templates, and extend templates; you can even use extensions with it. The possibilities are countless when combined with nice debugging features, auto-escaping, and full unicode support.

Auto-escaping is a Jinja2 configuration where everything you print in a template is interpreted as plain text, if not explicitly requested otherwise. Imagine a variable x has its value set to <b>b</b>. If auto-escaping is enabled, {{ x }} in a template would print the string as given, that is, the literal text <b>b</b>. If auto-escaping is off, which is the Jinja2 default (Flask's default is on), the markup passes through untouched, so a browser would render the result as a bold b.
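As a minimal sketch of that behavior (using Environment, which lets you set the autoescape flag; the variable name is only illustrative):

from jinja2 import Environment

# Jinja2's own default: auto-escaping off, the markup passes through untouched
env = Environment()
print env.from_string("{{ x }}").render(x="<b>b</b>")
# -> u'<b>b</b>' (a browser would render a bold "b")

# with auto-escaping on (what Flask enables for its HTML templates)
env = Environment(autoescape=True)
print env.from_string("{{ x }}").render(x="<b>b</b>")
# -> u'&lt;b&gt;b&lt;/b&gt;' (shown as the literal text <b>b</b>)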
Let's understand a few concepts before covering how Jinja2 allows us to do our coding. First, we have the previously mentioned curly braces. Double curly braces are a delimiter that allows you to evaluate a variable or function from the provided context and print it into the template:

from jinja2 import Template

# create the template
t = Template("{{ variable }}")

# – Built-in Types –
t.render(variable='hello you')
>> u"hello you"
t.render(variable=100)
>> u"100"

# you can evaluate custom class instances
class A(object):
    def __str__(self):
        return "__str__"
    def __unicode__(self):
        return u"__unicode__"
    def __repr__(self):
        return u"__repr__"

# – Custom Objects Evaluation –
# __unicode__ has the highest precedence in evaluation
# followed by __str__ and __repr__
t.render(variable=A())
>> u"__unicode__"

In the preceding example, we see how to use curly braces to evaluate variables in your template. First, we evaluate a string and then an integer. Both result in a unicode string. If we evaluate a class of our own, we must make sure there is a __unicode__ method defined, as it is called during the evaluation. If a __unicode__ method is not defined, the evaluation falls back to __str__ and __repr__, sequentially. This is easy. Furthermore, what if we want to evaluate a function? Well, just call it:

from jinja2 import Template

# create the template
t = Template("{{ fnc() }}")
t.render(fnc=lambda: 10)
>> u"10"

# evaluating a function with an argument
t = Template("{{ fnc(x) }}")
t.render(fnc=lambda v: v, x='20')
>> u"20"

t = Template("{{ fnc(v=30) }}")
t.render(fnc=lambda v: v)
>> u"30"

To output the result of a function in a template, just call the function as any regular Python function. The function return value will be evaluated normally. If you're familiar with Django, you might notice a slight difference here. In Django, you do not need the parentheses to call a function, or even pass arguments to it. In Flask, the parentheses are always needed if you want the function return evaluated. The following two examples show the difference between Jinja2 and Django function calls in a template:

{# flask syntax #}
{{ some_function() }}

{# django syntax #}
{{ some_function }}

You can also evaluate Python math operations. Take a look:

from jinja2 import Template

# no context provided / needed
Template("{{ 3 + 3 }}").render()
>> u"6"
Template("{{ 3 - 3 }}").render()
>> u"0"
Template("{{ 3 * 3 }}").render()
>> u"9"
Template("{{ 3 / 3 }}").render()
>> u"1"

Other math operators will also work. You may use the curly braces delimiter to access and evaluate lists and dictionaries:

from jinja2 import Template

Template("{{ my_list[0] }}").render(my_list=[1, 2, 3])
>> u'1'
Template("{{ my_list['foo'] }}").render(my_list={'foo': 'bar'})
>> u'bar'
# and here's some magic
Template("{{ my_list.foo }}").render(my_list={'foo': 'bar'})
>> u'bar'

To access a list or dictionary value, just use normal plain Python notation. With dictionaries, you can also access a key value using variable access notation, which is pretty neat.

Besides the curly braces delimiter, Jinja2 also has the curly braces/percentage delimiter, which uses the notation {% stmt %} and is used to execute statements, which may be a control statement or not. Its usage depends on the statement, where control statements have the following notation:

{% stmt %}
{% endstmt %}

The first tag has the statement name, while the second is the closing tag, whose name is the statement name prefixed with end. You must be aware that a non-control statement may not have a closing tag.
Let's look at some examples:

{% block content %}
{% for i in items %}
{{ i }} - {{ i.price }}
{% endfor %}
{% endblock %}

The preceding example is a little more complex than what we have been seeing. It uses a control statement (a for loop) inside a block statement (you can have a statement inside another), which is not a control statement, as it does not control execution flow in the template. Inside the for loop you see that the i variable is being printed together with the associated price (defined elsewhere).

A last delimiter you should know is {# comments go here #}. It is a multi-line delimiter used to declare comments. Let's see two examples that have the same result:

{# first example #}
{# second
   example #}

Both comment delimiters hide the content between {# and #}. As can be seen, this delimiter works for one-line comments and multi-line comments, which makes it very convenient.

Control structures

We have a nice set of built-in control structures defined by default in Jinja2. Let's begin our studies on it with the if statement.

{% if true %}Too easy{% endif %}
{% if true == true == True %}True and true are the same{% endif %}
{% if false == false == False %}False and false also are the same{% endif %}
{% if none == none == None %}There's also a lowercase None{% endif %}
{% if 1 >= 1 %}Compare objects like in plain python{% endif %}
{% if 1 == 2 %}This won't be printed{% else %}This will{% endif %}
{% if "apples" != "oranges" %}All comparison operators work = ]{% endif %}
{% if something %}elif is also supported{% elif something_else %}^_^{% endif %}

The if control statement is beautiful! It behaves just like a Python if statement. As seen in the preceding code, you can use it to compare objects in a very easy fashion. "else" and "elif" are also fully supported. You may also have noticed that true and false, non-capitalized, were used together with plain Python Booleans, True and False. As a design decision to avoid confusion, all Jinja2 templates have a lowercase alias for True, False, and None. By the way, lowercase syntax is the preferred way to go.

If needed, and you should avoid this scenario, you may group comparisons together in order to change precedence evaluation. See the following example:

{% if 5 < 10 < 15 %}true{% else %}false{% endif %}
{% if (5 < 10) < 15 %}true{% else %}false{% endif %}
{% if 5 < (10 < 15) %}true{% else %}false{% endif %}

The expected output for the preceding example is true, true, and false. The first two lines are pretty straightforward. In the third line, first, (10 < 15) is evaluated to True, which is a subclass of int, where True == 1. Then 5 < True is evaluated, which is certainly false.

The for statement is pretty important. One can hardly think of a serious Web application that does not have to show a list of some kind at some point. The for statement can iterate over any iterable instance and has a very simple, Python-like syntax:

{% for item in my_list %}
{{ item }}{# print evaluate item #}
{% endfor %}

{# or #}

{% for key, value in my_dictionary.items() %}
{{ key }}: {{ value }}
{% endfor %}

In the first statement, we have the opening tag indicating that we will iterate over my_list items and each item will be referenced by the name item. The name item will be available inside the for loop context only. In the second statement, we have an iteration over the key value tuples that form my_dictionary, which should be a dictionary (if the variable name wasn't suggestive enough). Pretty simple, right? The for loop also has a few tricks in store for you.
When building HTML lists, it's a common requirement to mark each list item in alternating colors in order to improve readability, or to mark the first and/or last item with some special markup. Those behaviors can be achieved in a Jinja2 for loop through access to a loop variable available inside the block context. Let's see some examples:

{% for i in ['a', 'b', 'c', 'd'] %}
{% if loop.first %}This is the first iteration{% endif %}
{% if loop.last %}This is the last iteration{% endif %}
{{ loop.cycle('red', 'blue') }}{# print red or blue alternating #}
{{ loop.index }} - {{ loop.index0 }} {# 1 indexed index - 0 indexed index #}
{# reverse 1 indexed index - reverse 0 indexed index #}
{{ loop.revindex }} - {{ loop.revindex0 }}
{% endfor %}

The for loop statement, as in Python, also allows the use of else, but with a slightly different meaning. In Python, when you use else with for, the else block is only executed if it was not reached through a break command, like this:

for i in [1, 2, 3]:
    pass
else:
    print "this will be printed"

for i in [1, 2, 3]:
    if i == 3:
        break
else:
    print "this will never be printed"

As seen in the preceding code snippet, the else block will only be executed in a for loop if the execution was never broken by a break command. With Jinja2, the else block is executed when the for iterable is empty. For example:

{% for i in [] %}
{{ i }}
{% else %}I'll be printed{% endfor %}

{% for i in ['a'] %}
{{ i }}
{% else %}I won't{% endfor %}

As we are talking about loops and breaks, there are two important things to know: the Jinja2 for loop does not support break or continue. Instead, to achieve the expected behavior, you should use loop filtering as follows:

{% for i in [1, 2, 3, 4, 5] if i > 2 %}
value: {{ i }}; loop.index: {{ loop.index }}
{%- endfor %}

In the first tag you see a normal for loop together with an if condition. You should consider that condition as a real list filter, as the index itself is only counted per iteration. Run the preceding example and the output will be the following:

value: 3; loop.index: 1
value: 4; loop.index: 2
value: 5; loop.index: 3

Look at the last observation in the preceding example: in the second tag, do you see the dash in {%-? It tells the renderer that there should be no empty new lines before the tag at each iteration. Try our previous example without the dash and compare the results to see what changes.

We'll now look at three very important statements used to build templates from different files: block, extends, and include. block and extends always work together. The first is used to define "overwritable" blocks in a template, while the second defines a parent template that has blocks, for the current template. Let's see an example:

# coding:utf-8
with open('parent.txt', 'w') as file:
    file.write("""
{% block template %}parent.txt{% endblock %}
===========
I am a powerful psychic and will tell you your past

{#- "past" is the block identifier #}
{% block past %}
You had pimples by the age of 12.
{%- endblock %}

Tremble before my power!!!""".strip())

with open('child.txt', 'w') as file:
    file.write("""
{% extends "parent.txt" %}

{# overwriting the block called template from parent.txt #}
{% block template %}child.txt{% endblock %}

{#- overwriting the block called past from parent.txt #}
{% block past %}
You've bought an ebook recently.
{%- endblock %}""".strip())

with open('other.txt', 'w') as file:
    file.write("""
{% extends "child.txt" %}
{% block template %}other.txt{% endblock %}""".strip())

from jinja2 import Environment, FileSystemLoader

env = Environment()
# tell the environment how to load templates
env.loader = FileSystemLoader('.')
# look up our template
tmpl = env.get_template('parent.txt')
# render it to default output
print tmpl.render()
print ""
# loads child.txt and its parent
tmpl = env.get_template('child.txt')
print tmpl.render()
# loads other.txt and its parent
env.get_template('other.txt').render()

Do you see the inheritance happening between child.txt and parent.txt? parent.txt is a simple template with two block statements, called template and past. When you render parent.txt directly, its blocks are printed "as is", because they were not overwritten. In child.txt, we extend the parent.txt template and overwrite all its blocks. By doing that, we can have different information in specific parts of a template without having to rewrite the whole thing.

With other.txt, for example, we extend the child.txt template and overwrite only the block named template. You can overwrite blocks from a direct parent template or from any of its parents. If you were defining an index.txt page, you could have default blocks in it that would be overwritten when needed, saving lots of typing.

Explaining the last example, Python-wise, is pretty simple. First, we create a Jinja2 environment (we talked about this earlier) and tell it how to load our templates, then we load the desired template directly. We do not have to bother telling the environment how to find parent templates, nor do we need to preload them.

The include statement is probably the easiest statement so far. It allows you to render a template inside another in a very easy fashion. Let's look at an example:

with open('base.txt', 'w') as file:
    file.write("""
{{ myvar }}
You wanna hear a dirty joke?
{% include 'joke.txt' %}
""".strip())

with open('joke.txt', 'w') as file:
    file.write("""
A boy fell in a mud puddle. {{ myvar }}
""".strip())

from jinja2 import Environment, FileSystemLoader

env = Environment()
# tell the environment how to load templates
env.loader = FileSystemLoader('.')
print env.get_template('base.txt').render(myvar='Ha ha!')

In the preceding example, we render the joke.txt template inside base.txt. As joke.txt is rendered inside base.txt, it also has full access to the base.txt context, so myvar is printed normally.

Finally, we have the set statement. It allows you to define variables inside the template context. Its use is pretty simple:

{% set x = 10 %}
{{ x }}
{% set x, y, z = 10, 5+5, "home" %}
{{ x }} - {{ y }} - {{ z }}

In the preceding example, if x were given by a complex calculation or a database query, it would make much more sense to have it cached in a variable, if it is to be reused across the template. As seen in the example, you can also assign a value to multiple variables at once.

Macros

Macros are the closest to coding you'll get inside Jinja2 templates. The macro definition and usage are similar to plain Python functions, so it is pretty easy.
Let's try an example:

with open('formfield.html', 'w') as file:
    file.write('''
{% macro input(name, value='', label='') %}
{% if label %}
<label for='{{ name }}'>{{ label }}</label>
{% endif %}
<input id='{{ name }}' name='{{ name }}' value='{{ value }}'></input>
{% endmacro %}'''.strip())

with open('index.html', 'w') as file:
    file.write('''
{% from 'formfield.html' import input %}
<form method='get' action='.'>
{{ input('name', label='Name:') }}
<input type='submit' value='Send'></input>
</form>
'''.strip())

from jinja2 import Environment, FileSystemLoader

env = Environment()
env.loader = FileSystemLoader('.')
print env.get_template('index.html').render()

In the preceding example, we create a macro that accepts a name argument and two optional arguments: value and label. Inside the macro block, we define what should be output. Notice we can use other statements inside a macro, just like in a template. In index.html we import the input macro from inside formfield.html, as if formfield were a module and input a Python function, using the import statement. If needed, we could even rename our input macro like this:

{% from 'formfield.html' import input as field_input %}

You can also import formfield as a module and use it as follows:

{% import 'formfield.html' as formfield %}

When using macros, there is a special case where you want to allow any named argument to be passed into the macro, as you would in a Python function (for example, **kwargs). With Jinja2 macros, these values are, by default, available in a kwargs dictionary that does not need to be explicitly defined in the macro signature. For example:

# coding:utf-8
with open('formfield.html', 'w') as file:
    file.write('''
{% macro input(name) -%}
<input id='{{ name }}' name='{{ name }}' {% for k, v in kwargs.items() -%}{{ k }}='{{ v }}' {% endfor %}></input>
{%- endmacro %}
'''.strip())

with open('index.html', 'w') as file:
    file.write('''
{% from 'formfield.html' import input %}
{# use method='post' whenever sending sensitive data over HTTP #}
<form method='post' action='.'>
{{ input('name', type='text') }}
{{ input('passwd', type='password') }}
<input type='submit' value='Send'></input>
</form>
'''.strip())

from jinja2 import Environment, FileSystemLoader

env = Environment()
env.loader = FileSystemLoader('.')
print env.get_template('index.html').render()

As you can see, kwargs is available even though you did not define a kwargs argument in the macro signature.

Macros have a few clear advantages over plain templates that you notice when using the include statement:

You do not have to worry about variable names in the template using macros
You can define the exact required context for a macro block through the macro signature
You can define a macro library inside a template and import only what is needed

Commonly used macros in a Web application include a macro to render pagination, another to render fields, and another to render forms. You could have others, but these are pretty common use cases. Regarding our previous example, it is good practice to use HTTPS (also known as Secure HTTP) to send sensitive information, such as passwords, over the Internet. Be careful about that!

Extensions

Extensions are the way Jinja2 allows you to extend its vocabulary.
Extensions are not enabled by default, so you can enable an extension only when and if you need it, and start using it without much trouble:

env = Environment(extensions=['jinja2.ext.do', 'jinja2.ext.with_'])

In the preceding code, we have an example where you create an environment with two extensions enabled: do and with. Those are the extensions we will study in this article.

As the name suggests, the do extension allows you to "do stuff". Inside a do tag, you're allowed to execute Python expressions with full access to the template context. Flask-Empty, a popular Flask boilerplate available at https://github.com/italomaia/flask-empty, uses the do extension to update a dictionary in one of its macros, for example. Let's see how we could do the same:

{% set x = {1: 'home', '2': 'boat'} %}
{% do x.update({3: 'bar'}) %}
{%- for key, value in x.items() %}
{{ key }} - {{ value }}
{%- endfor %}

In the preceding example, we create the x variable with a dictionary, then we update it with {3: 'bar'}. You don't usually need to use the do extension but, when you do, a lot of coding is saved.

The with extension is also very simple. You use it whenever you need to create block-scoped variables. Imagine you have a value you need cached in a variable for a brief moment; this would be a good use case. Let's see an example:

{% with age = user.get_age() %}
My age: {{ age }}
{% endwith %}

My age: {{ age }}{# no value here #}

As seen in the example, age exists only inside the with block. Also, variables set inside a with block will only exist inside it. For example:

{% with %}
{% set count = query.count() %}
Current Stock: {{ count }}
Diff: {{ prev_count - count }}
{% endwith %}

{{ count }} {# empty value #}

Filters

Filters are a marvelous thing about Jinja2! This tool allows you to process a constant or variable before printing it to the template. The goal is to implement the formatting you want, strictly in the template. To use a filter, just call it using the pipe operator like this:

{% set name = 'junior' %}
{{ name|capitalize }} {# output is Junior #}

Its name is passed to the capitalize filter, which processes it and returns the capitalized value. To pass arguments to a filter, just call it like a function, like this:

{{ ['Adam', 'West']|join(' ') }} {# output is Adam West #}

The join filter will join all values from the passed iterable, putting the provided argument between them. Jinja2 has an enormous quantity of filters available by default. That means we can't cover them all here, but we can certainly cover a few. capitalize and lower were seen already.
Let's look at some further examples:

{# prints default value if input is undefined #}
{{ x|default('no opinion') }}
{# prints default value if input evaluates to false #}
{{ none|default('no opinion', true) }}
{# prints input as it was provided #}
{{ 'some opinion'|default('no opinion') }}

{# you can use a filter inside a control statement #}
{# sort by key case-insensitive #}
{% for key in {'A':3, 'b':2, 'C':1}|dictsort %}{{ key }}{% endfor %}
{# sort by key case-sensitive #}
{% for key in {'A':3, 'b':2, 'C':1}|dictsort(true) %}{{ key }}{% endfor %}
{# sort by value #}
{% for key in {'A':3, 'b':2, 'C':1}|dictsort(false, 'value') %}{{ key }}{% endfor %}

{{ [3, 2, 1]|first }} - {{ [3, 2, 1]|last }}
{{ [3, 2, 1]|length }} {# prints input length #}
{# same as in python #}
{{ '%s, =D'|format("I'm John") }}
{{ "He has two daughters"|replace('two', 'three') }}
{# safe prints the input without escaping it first #}
{{ '<input name="stuff" />'|safe }}
{{ "there are five words here"|wordcount }}

Try the preceding example to see exactly what each filter does.

After reading this much about Jinja2, you're probably thinking: "Jinja2 is cool, but this is a book about Flask. Show me the Flask stuff!". OK, OK, I can do that!

Of what we have seen so far, almost everything can be used with Flask with no modifications. As Flask manages the Jinja2 environment for you, you don't have to worry about creating file loaders and stuff like that. One thing you should be aware of, though, is that, because you don't instantiate the Jinja2 environment yourself, you can't really pass the extensions you want to activate to the class constructor. To activate an extension, add it to Flask during the application setup as follows:

from flask import Flask
app = Flask(__name__)
app.jinja_env.add_extension('jinja2.ext.do')  # or jinja2.ext.with_

if __name__ == '__main__':
    app.run()

Messing with the template context

You can use the render_template method to load a template from the templates folder and then render it as a response.

from flask import Flask, render_template
app = Flask(__name__)

@app.route("/")
def hello():
    return render_template("index.html")

If you want to add values to the template context, as seen in some of the examples in this article, you would have to add non-positional arguments to render_template:

from flask import Flask, render_template
app = Flask(__name__)

@app.route("/")
def hello():
    return render_template("index.html", my_age=28)

In the preceding example, my_age would be available in the index.html context, where {{ my_age }} would be translated to 28. my_age could have virtually any value you want to exhibit, actually.

Now, what if you want all your views to have a specific value in their context, like a version value, some special code, or a function; how would you do it? Flask offers you the context_processor decorator to accomplish that. You just have to annotate a function that returns a dictionary and you're ready to go.
For example:

from flask import Flask, render_template
app = Flask(__name__)

@app.context_processor
def luck_processor():
    from random import randint
    def lucky_number():
        return randint(1, 10)
    return dict(lucky_number=lucky_number)

@app.route("/")
def hello():
    # lucky_number will be available in the index.html context by default
    return render_template("index.html")

Summary

In this article, we saw how to render templates using only Jinja2, how control statements look and how to use them, how to write a comment, how to print variables in a template, how to write and use macros, how to load and use extensions, and how to register context processors. I don't know about you, but this article felt like a lot of information! I strongly advise you to run and experiment with the examples. Knowing your way around Jinja2 will save you a lot of headaches.

Choosing Lync 2013 Clients

Packt
25 Jul 2013
5 min read
What clients are available?

At the time of writing, the list of available clients includes the following:

Full client, as a part of Office 2013 Professional Plus
The Lync 2013 app for Windows 8
Lync 2013 for mobile devices
The Lync Basic 2013 version

A plugin is needed to enable Lync features on a virtual desktop. We need the full Lync 2013 client installation to allow Lync access to the user. Although they are not clients in the traditional sense of the word, our list must also include the following ones:

The Microsoft Lync VDI 2013 plugin
Lync Online (Office 365)
Lync Web App
Lync Phone Edition
Legacy clients that are still supported (Lync 2010, Lync 2010 Attendant, and Lync 2010 Mobile)

Full client (Office 2013)

This is the most complete client available at the moment. It includes full support for voice, video, and IM (as in the previous versions), plus integration for the new features (for example, high-definition video, the gallery feature to see multiple video feeds at the same time, and chat room integration). In the following screenshot, we can see a tabbed conversation in Lync 2013:

Its integration with Office means that the group policies for Lync are now part of the Office group policy administrative templates. We have to download the Office 2013 templates from the Microsoft site and install the package in order to use them (some of the settings are shown in the following screenshot):

Lync is available with the Professional Plus version of Office 2013 (and with some Office 365 subscriptions).

Lync 2013 app for Windows 8

The Lync 2013 app for Windows 8 (also called the Lync Windows Store app) has been designed and optimized for devices with a touchscreen (with Windows 8 and Windows RT as operating systems). The app (as we can see in the following screenshot) is focused on images and pictures, so we have a tile for each contact we want in our favorites. The Lync Windows Store app supports contact management, conversations, and calls, but some features, such as Persistent Chat and the advanced management of Enterprise Voice, are still exclusive to the full client. Also, when it comes to conferencing, we will not be able to act as the presenter or manage other participants. The app is integrated with Windows 8, so we are able to use Search to look for Lync contacts (as shown in the following screenshot):

Lync 2013 for mobile devices

The Lync 2013 client for mobile devices is the solution Microsoft offers for the most common tablet and smartphone systems (excluding those tablets using Windows 8 and Windows RT, which have their dedicated app). It is available for Windows Phone, iPad/iPhone, and Android. The older version of this client was basically an IM application, and that is something that somehow limited the interest in the mobile versions of Lync. The 2013 version includes support for VoIP and video (using Wi-Fi networks and cellular data networks), meetings, and voice mail. From an infrastructural point of view, enabling the new mobile client means applying the Lync 2013 Cumulative Update 1 (CU1) on our Front End and Edge servers and publishing a DNS record (lyncdiscover) on our public name servers. If we have had previous experience with Lync 2010 mobility, the difference is really noticeable. The lyncdiscover record must be pointed to the reverse proxy.
Reverse proxy deployment requires a product that is enabled to support Lync mobility, and the certificate used must include the lyncdiscover record's public domain name.

Lync Basic 2013 version

Lync Basic 2013 is a downloadable client that provides basic functionality. It does not provide support for advanced call features, multiparty video or galleries, and skill-based searches. Lync Basic 2013 is dedicated to companies with Lync 2013 on-premises, and to Office 365 customers that do not have the full client included with their subscription. The client looks really similar to the full one, but the display name on top is Lync Basic, as we can see in the following screenshot:

Microsoft Lync VDI 2013 plugin

As we said before, the VDI plugin is not a client; it is software we need to install to enable Lync on virtual desktops based on the most used technologies, such as Microsoft RDS, VMware View, and XenDesktop. The main challenge of a VDI scenario is granting the same features and quality we expect from a deployment on a physical machine. The plugin uses "Media Redirection", so that audio and video originate and terminate on the plugin running on the thin client. The user is able to connect conferencing/telephony hardware (for example, microphones, cams, and so on) to the local terminal and use the Lync 2013 client installed on the virtual desktop as if it were running locally. The plugin is the only Lync software installed at the end-user workplace. The details of the deployment (Deploying the Lync VDI Plug-in) are available at http://technet.microsoft.com/en-us/library/jj204683.aspx.

Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49

Packt
18 Nov 2009
4 min read
Blender particles

In the latest versions of Blender 3D, the particle system received a huge upgrade, making it more complex and powerful than before. This upgrade, however, made it necessary to create more parameters and options in order for the system to act. What didn't change was the need for an object that works as the emitter of the particles. The shape and look of this object will be directly related to the type of effects we want to create. Before we discuss the effects that we will be creating, let's look at how particles work in Blender.

To create any type of particle system, go to the Objects panel and find the Particles button. This is where we will set up and change our particles for a variety of effects. The first time we open this menu, nothing will be displayed. But, if we select a mesh object and press the Add New button, this object will immediately turn into a new emitter. When a new emitter is created, we have to choose the type of behavior this emitter has in the particle system. In the top-left part of the menu, we will find a selector that lets us choose the type of interaction of the emitter. These are the three types of emitters:

Emitter: This is the standard type, which is a single object that emits particles according to the parameters and rules that we set up in the particle controls.
Hair: Here, we have a type of particle emitter that creates particles as thin lines for representing hair and fur. Since this is more related to characters, we won't use this type of emitter in this book.
Reactor: With this emitter, we can create particle systems that interact with each other. It works by setting up a particle system that interferes with the motion and changes the trajectories of other particles.

In our projects, we will use only the emitter type. However, you can create indirect animations and use particles to interact with each other. For instance, if you want to create a set of asteroids that block the path of our spacecraft, we could create this type of animation easily with a reactor particle system.

How particles work

To create and use a particle system, we will look at the most important features and parameters of each menu and create some pre-systems to use later in this article for the spacecraft. To fully understand how particles work, we have to become familiar with the forces or parameters that control the look and feel of particles. For each of those parameters and forces, we have a corresponding menu in Blender. Here are the parameters that control the particle system:

Quantity: This is a basic feature of any particle system that allows us to set up how many particles will be in the system.
Life: As a particle system is based on animation parameters, we have to know for how many frames the particle will be visible in the 3D world.
Mesh emitting: Our emitters are all meshes, and we have to determine from which part of those 3D objects the particles will be emitted. We have several options to choose from, such as vertices or parts of the objects delimited by vertex groups.
Motion: If we set up our particle system and don't give it enough force to make the particles move, nothing will happen to the system. So, even more important than setting up the appearance of the particles is choosing the right forces for the initial velocity of the particles.
Physics and forces: Along with the forces that we use in the motion option, we will also apply some force fields and deflectors to particles to simulate and change the trajectories of the objects based on physical reactions.
Visualization: A standard particle system has only small dots as particles, but we can change the way particles look in a variety of ways. To create flares and special effects such as the ones we need, we can use mesh objects that have Halo effects and many more.
Interaction: At the end of the particle life, we can use several types of actions and behaviors to control the destiny of a particle. Should it spawn a new particle or simply die when it hits a special object? These are the things we have to consider before we begin setting up the animation.

Building tiny Web-applications in Ruby using Sinatra

Packt
03 Sep 2009
5 min read
What's Sinatra?

Sinatra is not a framework but a library, i.e. a set of classes that allows you to build almost any kind of web-based solution (no matter what the complexity) in a very simple manner, on top of the abstracted HTTP layer it implements from Rack. When you code in Sinatra you're bound only by HTTP and your Ruby knowledge. Sinatra doesn't force anything on you, which can lead to awesome or evil code, in equal measures.

Sinatra apps are typically written in a single file. It starts up and shuts down nearly instantaneously. It doesn't use much memory and it serves requests very quickly. But it also offers nearly every major feature you expect from a full web framework: RESTful resources, templating (ERB, Haml/Sass, and Builder), mime types, file streaming, etags, development/production mode, and exception rendering. It's fully testable with your choice of test or spec framework. It's multithreaded by default, though you can pass an option to wrap actions in a mutex. You can add in a database by requiring ActiveRecord or DataMapper. And it uses Rack, running on Mongrel by default.

Blake Mizerany, the creator of Sinatra, says that it is better to learn Sinatra before Ruby on Rails: "When you learn a large framework first, you're introduced to an abundance of ideas, constraints, and magic. Worst of all, they start you with a pattern. In the case of Rails, that's MVC. MVC doesn't fit most web-applications from the start or at all. You're doing yourself a disservice starting with it. Back into patterns, never start with them."

Installing Sinatra

The simplest way to obtain Sinatra is through RubyGems. Open a command window in Windows and type:

c:\> gem install sinatra

On Linux/OS X the command would be:

sudo gem install sinatra

Installing its Dependencies

Sinatra depends on the Rack gem, which gets installed along with Sinatra. Installing Mongrel (a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP - http://mongrel.rubyforge.org/) is quite simple. In the already open command window, type:

c:\> gem install mongrel

What are Routes?

The main feature of Sinatra is defining 'routes': an HTTP verb paired with a path that executes an arbitrary block of Ruby code. Something like:

verb 'path' do
  ... # return/render something
end

Sinatra's routes are designed to respond to the HTTP request methods (GET, POST, PUT, DELETE). In Sinatra, a route is an HTTP method paired with a URL-matching pattern. These URL handlers (also called "routing") can be used to match anything from a static string (such as /hello) to a string with parameters (/hello/:name) or anything you can imagine using wildcards and regular expressions. Each route is associated with a block. Let us look at an example:

get '/' do
  .. show something ..
end

get '/hello/:name' do
  # The /hello portion matches that portion of the URL from the
  # request you made, and :name will absorb any other text you
  # give it and put it in the params hash under the key :name
end

post '/' do
  .. create something ..
end

put '/' do
  .. update something ..
end

delete '/' do
  .. delete something ..
end

Routes are matched in the order they are defined. When a new request comes in, the first route that matches the request is invoked, i.e. the handler (the code block) attached to that route gets executed. For this reason, you should put your most specific handlers on top and your most vague handlers on the bottom.

A tiny web-application

Here's an example of a simple Sinatra application.
Write a Ruby program myapp1.rb and store it in the folder c:\sinatra_programs. Though the name of the folder is c:\sinatra_programs, we are going to have only one Sinatra program here. The program is:

# myapp1.rb
require 'sinatra'

Sinatra applications can be run directly:

ruby myapp1.rb [-h] [-x] [-e ENVIRONMENT] [-p PORT] [-s HANDLER]

The above options are:

-h # help
-p # set the port (default is 4567)
-e # set the environment (default is development)
-s # specify rack server/handler (default is thin)
-x # turn on the mutex lock (default off) - currently not used

The article at http://gist.github.com/54177 states that using require 'rubygems' inside the application is wrong: it is an environmental issue and not an app issue. The article mentions that you might use:

ruby -rubygems myapp1.rb

Another way is to use RUBYOPT. Refer to the article at http://rubygems.org/read/chapter/3. By setting the RUBYOPT environment variable to the value rubygems, you tell Ruby to load RubyGems every time it starts up. This is similar to the -rubygems option above, but you only have to specify this once (rather than each time you run a Ruby script).
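The snippet above only loads the library; a minimal runnable route, building on the /hello/:name pattern shown earlier (the greeting text here is purely illustrative), could look like this:

# myapp1.rb - a minimal sketch of a complete Sinatra application
require 'sinatra'

get '/' do
  'Hello from Sinatra!'
end

get '/hello/:name' do
  # :name is taken from the URL and made available in the params hash
  "Hello, #{params[:name]}!"
end

Running ruby myapp1.rb and browsing to http://localhost:4567/hello/world would then return "Hello, world!" on the default port 4567.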

AJAX Form Validation: Part 1

Packt
18 Feb 2010
4 min read
The server is the last line of defense against invalid data, so even if you implement client-side validation, server-side validation is mandatory. The JavaScript code that runs on the client can be disabled permanently from the browser's settings and/or it can be easily modified or bypassed.

Implementing AJAX form validation

The form validation application we will build in this article validates the form at the server side on the classic form submit, implementing AJAX validation while the user navigates through the form. The final validation is performed at the server, as shown in Figure 5-1.

Doing a final server-side validation when the form is submitted should never be considered optional. If someone disables JavaScript in the browser settings, AJAX validation on the client side clearly won't work, exposing sensitive data and allowing an evil-intentioned visitor to harm important data on the server (for example, through SQL injection). Always validate user input on the server.

As shown in the preceding figure, the application you are about to build validates a registration form using both AJAX validation (client side) and typical server-side validation:

AJAX-style (client side): It happens when each form field loses focus (onblur). The field's value is immediately sent to and evaluated by the server, which then returns a result (0 for failure, 1 for success). If validation fails, an error message will appear and notify the user about the failed validation, as shown in Figure 5-3.
PHP-style (server side): This is the usual validation you would do on the server, checking user input against certain rules after the entire form is submitted. If no errors are found and the input data is valid, the browser is redirected to a success page, as shown in Figure 5-4. If validation fails, however, the user is sent back to the form page with the invalid fields highlighted, as shown in Figure 5-3.

Both AJAX validation and PHP validation check the entered data against our application's rules:

Username must not already exist in the database
Name field cannot be empty
A gender must be selected
Month of birth must be selected
Birthday must be a valid date (between 1-31)
Year of birth must be a valid year (between 1900-2000)
The date must exist in the number of days for each month (that is, there's no February 31)
E-mail address must be written in a valid email format
Phone number must be written in standard US form: xxx-xxx-xxxx
The I've read the Terms of Use checkbox must be selected

Watch the application in action in the following screenshots:

XMLHttpRequest, version 2

Since we do our best to combine theory and practice, before moving on to implementing the AJAX form validation script, we'll have another quick look at our favorite AJAX object: XMLHttpRequest. On this occasion, we will step up the complexity (and functionality) a bit and use everything we have learned until now. We will continue to build on what has come before as we move on; so again, it's important that you take the time to be sure you've understood what we are doing here. Time spent on digging into the materials really pays off when you begin to build your own application in the real world. Our OOP JavaScript skills will be put to work improving the existing script that used to make AJAX requests.
In addition to the design that we've already discussed, we're creating the following features as well:

Flexible design so that the object can be easily extended for future needs and purposes
The ability to set all the required properties via a JSON object

We'll package this improved XMLHttpRequest functionality in a class named XmlHttp that we'll be able to use in other exercises as well. You can see the class diagram in the following screenshot, along with the diagrams of two helper classes:

settings is the class we use to create the call settings; we supply an instance of this class as a parameter to the constructor of XmlHttp
complete is a callback delegate, pointing to the function we want executed when the call completes

The final purpose of this exercise is to create a class named XmlHttp that we can easily use in other projects to perform AJAX calls. With our goals in mind, let's get to it!

Time for action – the XmlHttp object

In the ajax folder, create a folder named validate, which will host the exercises in this article.
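The exercise builds that class step by step; purely as an illustration of the shape being described (the property and method names below are assumptions based on the summary above, not the book's actual code), a skeleton might look like this:

// Illustrative sketch only: an XmlHttp-style wrapper configured through a
// settings object and firing a "complete" callback when the call finishes.
function XmlHttp(settings) {
  this.url = settings.url;                 // assumed property names
  this.async = settings.async !== false;
  this.complete = settings.complete;       // function to run when the call completes
  this.xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                   : new ActiveXObject('Microsoft.XMLHTTP');
}

XmlHttp.prototype.process = function (data) {
  var self = this;
  this.xhr.onreadystatechange = function () {
    if (self.xhr.readyState === 4 && typeof self.complete === 'function') {
      self.complete(self.xhr.responseText);
    }
  };
  this.xhr.open('POST', this.url, this.async);
  this.xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
  this.xhr.send(data);
};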

Django Debugging Overview

Packt
28 Apr 2010
9 min read
Django debug settings

Django has a number of settings that control the collection and presentation of debug information. The primary one is named DEBUG; it broadly controls whether the server operates in development (if DEBUG is True) or production mode. In development mode, the end-user is expected to be a site developer. Thus, if an error arises during processing of a request, it is useful to include specific technical information about the error in the response sent to the web browser. This is not useful in production mode, when the user is expected to be simply a general site user.

This section describes three Django settings that are useful for debugging during development. Additional settings are used during production to control what errors should be reported, and where error reports should be sent. These additional settings will be discussed in the section on handling problems in production.

The DEBUG and TEMPLATE_DEBUG settings

DEBUG is the main debug setting. One of the most obvious effects of setting this to True is that Django will generate fancy error page responses in the case of serious code problems, such as exceptions raised during processing of a request. If TEMPLATE_DEBUG is also True, and the exception raised is related to a template error, then the fancy error page will also include information about where in the template the error occurred.

The default value for both of these settings is False, but the settings.py file created by manage.py startproject turns both of them on by including these lines at the top of the file:

DEBUG = True
TEMPLATE_DEBUG = DEBUG

Note that setting TEMPLATE_DEBUG to True when DEBUG is False isn't useful. The additional information collected with TEMPLATE_DEBUG turned on will never be displayed if the fancy error pages, controlled by the DEBUG setting, are not displayed. Similarly, setting TEMPLATE_DEBUG to False when DEBUG is True isn't very useful. In this case, for template errors, the fancy debug page will be lacking helpful information. Thus, it makes sense to keep these settings tied to each other, as previously shown.

Details on the fancy error pages and when they are generated will be covered in the next section. Besides generating these special pages, turning DEBUG on has several other effects. Specifically, when DEBUG is on:

A record is kept of all queries sent to the database. Details of what is recorded and how to access it will be covered in a subsequent section.

For the MySQL database backend, warnings issued by the database will be turned into Python Exceptions. These MySQL warnings may indicate a serious problem, but a warning (which only results in a message printed to stderr) may pass unnoticed. Since most development is done with DEBUG turned on, raising exceptions for MySQL warnings then ensures that the developer is aware of the possible issue.

The admin application performs extensive validation of the configuration of all registered models and raises an ImproperlyConfigured exception on the first attempt to access any admin page if an error is found in the configuration. This extensive validation is fairly expensive and not something you'd generally want done during production server start-up, when the admin configuration likely has not changed since the last start-up. When running with DEBUG on, though, it is possible that the admin configuration has changed, and thus it is useful and worth the cost to do the explicit validation and provide a specific error message about what is wrong if a problem is detected.
Finally, there are several places in Django code where an error will occur while DEBUG is on, and the generated response will contain specific information about the cause of the error, whereas when DEBUG is off the generated response will be a generic error page. The TEMPLATE_STRING_IF_INVALID setting A third setting that can be useful for debugging during development is TEMPLATE_STRING_IF_INVALID. The default value for this setting is the empty string. This setting is used to control what gets inserted into a template in place of a reference to an invalid (for example, non-existent in the template context) variable. The default value of an empty string results in nothing visible taking the place of such invalid references, which can make them hard to notice. Setting TEMPLATE_STRING_IF_INVALID to some value can make tracking down such invalid references easier. However, some code that ships with Django (the admin application, in particular), relies on the default behavior of invalid references being replaced with an empty string. Running code like this with a non-empty TEMPLATE_STRING_IF_INVALID setting can produce unexpected results, so this setting is only useful when you are specifically trying to track down something like a misspelled template variable in code that always ensures that variables, even empty ones, are set in the template context. Debug error pages With DEBUG on, Django generates fancy debug error pages in two circumstances: When a django.http.Http404 exception is raised When any other exception is raised and not handled by the regular view processing code In the latter case, the debug page contains a tremendous amount of information about the error, the request that caused it, and the environment at the time it occurred. The debug pages for Http404 exceptions are considerably simpler. To see examples of the Http404 debug pages, consider the survey_detail view def survey_detail(request, pk): survey = get_object_or_404(Survey, pk=pk) today = datetime.date.today() if survey.closes < today: return display_completed_survey(request, survey) elif survey.opens > today: raise Http404 else: return display_active_survey(request, survey) There are two cases where this view may raise an Http404 exception: when the requested survey is not found in the database, and when it is found but has not yet opened. Thus, we can see the debug 404 page by attempting to access the survey detail for a survey that does not exist, say survey number 24. The result will be as follows: Notice there is a message in the middle of the page that describes the cause of the page not found response: No Survey matches the given query. This message was generated automatically by the get_object_or_404 function. By contrast, the bare raise Http404 in the case where the survey is found but not yet open does not look like it will have any descriptive message. To confirm this, add a survey that has an opens date in the future, and try to access its detail page. The result will be something like the following: That is not a very helpful debug page, since it lacks any information about what was being searched for and why it could not be displayed. To make this page more useful, include a message when raising the Http404 exception. 
For example: raise Http404("%s does not open until %s; it is only %s" % (survey.title, survey.opens, today)) Then an attempt to access this page will be a little more helpful: Note that the error message supplied with the Http404 exception is only displayed on the debug 404 page; it would not appear on a standard 404 page. So you can make such messages as descriptive as you like and not worry that they will leak private or sensitive information to general users. Another thing to note is that a debug 404 page is only generated when an Http404 exception is raised. If you manually construct an HttpResponse with a 404 status code, it will be returned, not the debug 404 page. Consider this code: return HttpResponse("%s does not open until %s; it is only %s" % (survey.title, survey.opens, today), status=404) If that code were used in place of the raise Http404 variant, then the browser will simply display the passed message: Without the prominent Page not found message and distinctive error page formatting, this page isn't even obviously an error report. Note also that some browsers by default will replace the server-provided content with a supposedly "friendly" error page that tends to be even less informative. Thus, it is both easier and more useful to use the Http404 exception instead of manually building HttpResponse objects with status code 404. A final example of the debug 404 page that is very useful is the one that is generated when URL resolution fails. For example, if we add an extra space before the survey number in the URL, the debug 404 page generated will be as follows: The message on this page includes all of the information necessary to figure out why URL resolution failed. It includes the current URL, the name of the base URLconf used for resolution, and all patterns that were tried, in order, for matching. If you do any significant amount of Django application programming, it's highly likely that at some point this page will appear and you will be convinced that one of the listed patterns should match the given URL. You would be wrong. Do not waste energy trying to figure out how Django could be so broken. Rather, trust the error message, and focus your energies on figuring out why the pattern you think should match doesn't in fact match. Look carefully at each element of the pattern and compare it to the actual element in the current URL: there will be something that doesn't match. In this case, you might think the third listed pattern should match the current URL. The first element in the pattern is the capture of the primary key value, and the actual URL value does contain a number that could be a primary key. However, the capture is done using the pattern \d+. An attempt to match this against the actual URL characters—a space followed by 2—fails because \d matches only numeric digits and the space character is not a numeric digit. There will always be something like this to explain why the URL resolution failed. For now, we will leave the subject of debug pages and learn about accessing the history of database queries that is maintained when DEBUG is on.
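As a quick preview of that query history, the sketch below (an assumed shell session, not something shown in this article; the Survey import mirrors the example models used above) shows how the recorded queries can be inspected through django.db.connection.queries while DEBUG is True:

from django.conf import settings
from django.db import connection, reset_queries
from survey.models import Survey   # assumed app and model names from the examples above

assert settings.DEBUG         # queries are recorded only while DEBUG is True
reset_queries()               # discard anything recorded so far in this process
list(Survey.objects.all())    # issue a query
for q in connection.queries:  # each entry is a dict with 'sql' and 'time' keys
    print q['time'], q['sql']

Each entry holds the SQL text sent to the database and how long it took, which is often enough to spot duplicated or unexpectedly slow queries while a view is being developed.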


Creating mobile friendly themes

Packt
15 Mar 2013
3 min read
(For more resources related to this topic, see here.) Getting ready Before we start working on the code, we'll need to copy the basic gallery example code from the previous example and rename the folder and files to be prefixed with mobile. After this is done, our folder and file structure should look like the following screenshot: How to do it... We'll start off with our galleria.mobile.js theme JavaScript file: (function($) { Galleria.addTheme({ name: 'mobile', author: 'Galleria How-to', css: 'galleria.mobile.css', defaults: { transition: 'fade', swipe: true, responsive: true }, init: function(options) { } }); }(jQuery)); The only difference here from our basic theme example is that we've enabled the swipe parameter for navigating images and the responsive parameter so we can use different styles for different device sizes. Then, we'll provide the additional CSS file using media queries to match only smartphones. Add the following rules to the code already present in the galleria.mobile.css file: /* for smartphones */ @media only screen and (max-width : 480px){ .galleria-stage, .galleria-thumbnails-container, .galleria-thumbnails-container, .galleria-image-nav, .galleria-info { width: 320px; } #galleria{ height: 300px; } .galleria-stage { max-height: 410px; } } Here, we're targeting any device size with a width less than 480 pixels. This should match smartphones in landscape and portrait mode. These styles will override the default styles when the width of the browser is less than 480 pixels. Then, we wire it up just like the previous theme example. Modify the galleria.mobile-example.html file to include the following code snippet for bootstrapping the gallery: <script> $(document).ready(function(){ Galleria.loadTheme('galleria.mobile.js'); Galleria.run('#galleria'); }); </script> We should now have a gallery that scales well for smaller device sizes, as shown in the following screenshot: How it works... The responsive option tells Galleria to actively detect screen size changes and to redraw the gallery when they are detected. Then, our responsive CSS rules style the gallery differently for different device sizes. You can also provide additional rules so the gallery is styled differently when in portrait versus landscape mode; Galleria will detect when the device screen orientation has changed and apply the new styles accordingly (a small sketch of such rules appears after the summary). Media queries A very good list of media queries that can be used to target different devices for responsive web design is available at http://css-tricks.com/snippets/css/media-queries-for-standard-devices/. Testing for mobile The easiest way to test the mobile theme is to simply resize the browser window to the size of the mobile device. This simulates the mobile device screen size and allows the use of the standard web development tools that modern browsers provide. Integrating with existing sites In order for this to work effectively with existing websites, the existing styles will also have to play nicely with mobile devices. This means that an existing site, if trying to integrate a mobile gallery, will also need to have its own styles scale correctly to mobile devices. If this is not the case and the mobile devices switch to zoom-like navigation mode for the site, the mobile gallery styles won't ever kick in. Summary In this article we have seen how to create mobile friendly themes using responsive web design.
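Following up on the orientation point from the How it works section, below is a minimal sketch of orientation-specific overrides. It is not part of the example files above, and the pixel values are placeholders you would tune for your own gallery:

/* Sketch: different heights for portrait and landscape on small screens */
@media only screen and (max-width: 480px) and (orientation: portrait) {
  #galleria { height: 320px; }
}
@media only screen and (max-width: 480px) and (orientation: landscape) {
  #galleria { height: 220px; }
}

Because the responsive option is enabled in the theme, Galleria redraws itself when the device orientation changes, so rules like these take effect without any extra JavaScript.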


Using Drupal 7 for Module Development

Packt
14 Dec 2010
15 min read
  Drupal 7 Module Development Create your own Drupal 7 modules from scratch Specifically written for Drupal 7 development Write your own Drupal modules, themes, and libraries Discover the powerful new tools introduced in Drupal 7 Learn the programming secrets of six experienced Drupal developers Get practical with this book's project-based format         Read more about this book       The focus of this article by Matt Butcher, author of Drupal 7 Module Development, is module creation. We are going to begin coding in this article. Here are some of the important topics that we will cover in this article: Starting a new module Creating .info files to provide Drupal with module information Creating .module files to store Drupal code Adding new blocks using the Block Subsystem Using common Drupal functions Formatting code according to the Drupal coding standards (For more resources on this subject, see here.) Our goal: a module with a block In this article we are going to build a simple module. The module will use the Block Subsystem to add a new custom block. The block that we add will simply display a list of all of the currently enabled modules on our Drupal installation. We are going to divide this task of building a new module into the three parts: Create a new module folder and module files Work with the Block Subsystem Write automated tests using the SimpleTest framework included in Drupal We are going to proceed in that order for the sake of simplicity. One might object that, following agile development processes, we ought to begin by writing our tests. This approach is called Test-driven Development (TDD), and is a justly popular methodology. Agile software development is a particular methodology designed to help teams of developers effectively and efficiently build software. While Drupal itself has not been developed using an agile process, it does facilitate many of the agile practices. To learn more about agile, visit http://agilemanifesto.org/ However, our goal here is not to exemplify a particular methodology, but to discover how to write modules. It is easier to learn module development by first writing the module, and then learn how to write unit tests. It is easier for two reasons: SimpleTest (in spite of its name) is the least simple part of this article. It will have double the code-weight of our actual module. We will need to become acquainted with the APIs we are going to use in development before we attempt to write tests that assume knowledge of those APIs. In regular module development, though, you may certainly choose to follow the TDD approach of writing tests first, and then writing the module. Let's now move on to the first step of creating a new module. Creating a new module Creating Drupal modules is easy. How easy? Easy enough that over 5,000 modules have been developed, and many Drupal developers are even PHP novices! In fact, the code in this article is an illustration of how easy module coding can be. We are going to create our first module with only one directory and two small files. Module names It goes without saying that building a new module requires naming the module. However, there is one minor ambiguity that ought to be cleared up at the outset, a Drupal module has two names: A human-readable name: This name is designed to be read by humans, and should be one or a couple of words long. The words should be capitalized and separated by spaces. For example, one of the most popular Drupal modules has the human-readable name Views. 
A less-popular (but perhaps more creatively named) Drupal 6 module has the human-readable name Eldorado Superfly. A machine-readable name: This name is used internally by Drupal. It can be composed of lower-case and upper-case letters, digits, and the underscore character (using upper-case letters in machine names is frowned upon, though). No other characters are allowed. The machine names of the above two modules are views and eldorado_superfly, respectively. By convention, the two names ought to be as similar as possible. Spaces should be replaced by underscores. Upper-case letters should generally be changed to lower-case. Because of the convention of similar naming, the two names can usually be used interchangeably, and most of the time it is not necessary to specifically declare which of the two names we are referring to. In cases where the difference needs to be made (as in the next section), the authors will be careful to make it. Where does our module go? One of the less intuitive aspects of Drupal development is the filesystem layout. Where do we put a new module? The obvious answer would be to put it in the /modules directory alongside all of the core modules. As obvious as this may seem, the /modules folder is not the right place for your modules. In fact, you should never change anything in that directory. It is reserved for core Drupal modules only, and will be overwritten during upgrades. The second, far less obvious place to put modules is in /sites/all/modules. This is the location where all unmodified add-on modules ought to go, and tools like Drush ( a Drupal command line tool) will download modules to this directory. In some sense, it is okay to put modules here. They will not be automatically overwritten during core upgrades. However, as of this writing, /sites/all/modules is not the recommended place to put custom modules unless you are running a multi-site configuration and the custom module needs to be accessible on all sites. The current recommendation is to put custom modules in the /sites/default/modules directory, which does not exist by default. This has a few advantages. One is that standard add-on modules are stored elsewhere, and this separation makes it easier for us to find our own code without sorting through clutter. There are other benefits (such as the loading order of module directories), but none will have a direct impact on us. We will always be putting our custom modules in /sites/default/modules. This follows Drupal best practices, and also makes it easy to find our modules as opposed to all of the other add-on modules. The one disadvantage of storing all custom modules in /sites/default/modules appears only under a specific set of circumstances. If you have Drupal configured to serve multiple sites off of one single instance, then the /sites/default folder is only used for the default site. What this means, in practice, is that modules stored there will not be loaded at all for other sites. In such cases, it is generally advised to move your custom modules into /sites/all/modules/custom. Other module directories Drupal does look in a few other places for modules. However, those places are reserved for special purposes. Creating the module directory Now that we know that our modules should go in /sites/default/modules, we can create a new module there. 
Modules can be organized in a variety of ways, but the best practice is to create a module directory in /sites/default/modules, and then place at least two files inside the directory: a .info (pronounced "dot-info") file and a .module ("dot-module") file. The directory should be named with the machine-readable name of the module. Similarly, both the .info and .module files should use the machine-readable name. We are going to name our first module with the machine-readable name first, since it is our first module. Thus, we will create a new directory, /sites/default/modules/first, and then create a first.info file and a first.module file: Those are the only files we will need for our module. For permissions, make sure that your webserver can read both the .info and .module files. It should not be able to write to either file, though. In some sense, the only file absolutely necessary for a module is the .info file located at a proper place in the system. However, since the .info file simply provides information about the module, no interesting module can be built with just this file. Next, we will write the contents of the .info file. Writing the .info file The purpose of the .info file is to provide Drupal with information about a module—information such as the human-readable name, what other modules this module requires, and what code files this module provides. A .info file is a plain text file in a format similar to the standard INI configuration file. A directive in the .info file is composed of a name, and equal sign, and a value: name = value By Drupal's coding conventions, there should always be one space on each side of the equals sign. Some directives use an array-like syntax to declare that one name has multiple values. The array-like format looks like this: name[] = value1 name[] = value2 Note that there is no blank space between the opening square bracket and the closing square bracket. If a value spans more than one line, it should be enclosed in quotation marks. Any line that begins with a ; (semi-colon) is treated as a comment, and is ignored by the Drupal INI parser. Drupal does not support INI-style section headers such as those found in the php.ini file. To begin, let's take a look at a complete first.info file for our first module: ;$Id$ name = First description = A first module. package = Drupal 7 Development core = 7.x files[] = first.module ;dependencies[] = autoload ;php = 5.2 This ten-line file is about as complex as a module's .info file ever gets. The first line is a standard. Every .info file should begin with ;$Id$. What is this? It is the placeholder for the version control system to store information about the file. When the file is checked into Drupal's CVS repository, the line will be automatically expanded to something like this: ;$Id: first.info,v 1.1 2009/03/18 20:27:12 mbutcher Exp $ This information indicates when the file was last checked into CVS, and who checked it in. CVS is going away, and so is $Id$. While Drupal has been developed in CVS from the early days through Drupal 7, it is now being migrated to a Git repository. Git does not use $Id$, so it is likely that between the release of Drupal 7 and the release of Drupal 8, $Id$ tags will be removed. You will see all PHP and .info files beginning with the $Id$ marker. Once Drupal uses Git, those tags may go away. The next couple of lines of interest in first.info are these: name = First description = A first module. package = Drupal 7 Development The first two are required in every .info file. 
The name directive is used to declare what the module's human-readable name is. The description provides a one or two-sentence description of what this module provides or is used for. Among other places, this information is displayed on the module configuration section of the administration interface in Modules. In the screenshot, the values of the name and description fields are displayed in their respective columns. The third item, package, identifies which family (package) of modules this module is related to. Core modules, for example, all have the package Core. In the screenshot above, you can see the grouping package Core in the upper-left corner. Our module will be grouped under the package Drupal 7 Development to represent its relationship. As you may notice, package names are written as human-readable values. When choosing a human-readable module name, remember to adhere to the specifications mentioned earlier in this section. The next directive is the core directive: core = 7.x. This simply declares which main-line version of Drupal is required by the module. All Drupal 7 modules will have the line core = 7.x. Along with the core version, a .info file can also specify what version of PHP it requires. By default, Drupal 7 requires PHP 5.2 or newer. However, if one were to use, say, closures (a feature introduced in PHP 5.3), then the following line would need to be added: php = 5.3 Next, every .info file must declare which files in the module contain PHP functions, classes, or interfaces. This is done using the files[] directive. Our small initial module will only have one file, first.module. So we need only one files[] directive. files[] = first.module More complex modules will often have several files[] directives, each declaring a separate PHP source code file. JavaScript, CSS, image files, and PHP files (like templates) that do not contain functions that the module needs to know about needn't be included in files[] directives. The point of the directive is simply to indicate which files Drupal should examine. One directive that we will not use for this module, but which plays a very important role, is the dependencies[] directive. This is used to list the other modules that must be installed and active for this module to function correctly. Drupal will not allow a module to be enabled unless its dependencies have been satisfied. Drupal does not contain a directive to indicate that another module is recommended or is optional. It is the task of the developer to appropriately document this fact and make it known. There is currently no recommended best practice to provide such information. Now we have created our first.info file. As soon as Drupal reads this file, the module will appear on our Modules page. In the screenshot, notice that the module appears in the DRUPAL 7 DEVELOPMENT package, and has the NAME and DESCRIPTION as assigned in the .info file. With our .info file completed, we can now move on and code our .module file. Modules checked into Drupal's version control system will automatically have a version directive added to the .info file. This should typically not be altered. Creating a module file The .module file is a PHP file that conventionally contains all of the major hook implementations for a module. We will gain some practical knowledge of them. A hook implementation is a function that follows a certain naming pattern in order to indicate to Drupal that it should be used as a callback for a particular event in the Drupal system.
For Object-oriented programmers, it may be helpful to think of a hook as similar to the Observer design pattern. When Drupal encounters an event for which there is a hook (and there are hundreds of such events), Drupal will look through all of the modules for matching hook implementations. It will then execute each hook implementation, one after another. Once all hook implementations have been executed, Drupal will continue its processing. In the past, all Drupal hook implementations had to reside in the .module file. Drupal 7's requirements are more lenient, but in most moderately sized modules, it is still preferable to store most hook implementations in the .module file. There are cases where hook implementations belong in other files. In such cases, the reasons for organizing the module in such a way will be explained. To begin, we will create a simple .module file that contains a single hook implementation – one that provides help information. <?php // $Id$ /** * @file * A module exemplifying Drupal coding practices and APIs. * * This module provides a block that lists all of the * installed modules. It illustrates coding standards, * practices, and API use for Drupal 7. */ /** * Implements hook_help(). */ function first_help($path, $arg) { if ($path == 'admin/help#first') { return t('A demonstration module.'); } } Before we get to the code itself, we will talk about a few stylistic items. To begin, notice that this file, like the .info file, contains an $Id$ marker that CVS will replace when the file is checked in. All PHP files should have this marker following a double-slash-style comment: // $Id$. Next, the preceding code illustrates a few of the important coding standards for Drupal. Source code standards Drupal has a thorough and strictly enforced set of coding standards. All core code adheres to these standards. Most add-on modules do, too. (Those that don't generally receive bug reports for not conforming.) Before you begin coding, it is a good idea to familiarize yourself with the standards as documented here: http://drupal.org/coding-standards. The Coder module can evaluate your code and alert you to any infringement upon the coding standards. We will adhere to the Drupal coding standards. In many cases, we will explain the standards as we go along. Still, the definitive source for standards is the URL listed above, not our code here. We will not re-iterate the coding standards. The details can be found online. However, several prominent standards deserve immediate mention. I will just mention them here, and we will see examples in action as we work through the code. Indenting: All PHP and JavaScript files use two spaces to indent. Tabs are never used for code formatting. The <?php ?> processor instruction: Files that are completely PHP should begin with <?php, but should omit the closing ?>. This is done for several reasons, most notably to prevent the inclusion of whitespace from breaking HTTP headers. Comments: Drupal uses Doxygen-style (/** */) doc-blocks to comment functions, classes, interfaces, constants, files, and globals. All other comments should use the double-slash (//) comment. The pound sign (#) should not be used for commenting. Spaces around operators: Most operators should have a whitespace character on each side. Spacing in control structures: Control structures should have spaces after the name and before the curly brace. The bodies of all control structures should be surrounded by curly braces, and even that of if statements with one-line bodies. 
Functions: Functions should be named in lowercase letters, using underscores to separate words. Later we will see how class method names differ from this. Variables: Variable names should be in all lowercase letters, using underscores to separate words. Member variables in objects are named differently. As we work through examples, we will see these and other standards in action.
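To give a sense of where this module is headed, here is a sketch of the two Block Subsystem hooks that first.module will eventually need in order to expose a block listing the enabled modules. This is not the finished code from this article; the block delta, strings, and markup are illustrative, although the hooks and helper functions used (hook_block_info(), hook_block_view(), module_list(), theme('item_list')) are standard Drupal 7 APIs. These functions would be added to first.module below the hook_help() implementation shown earlier:

/**
 * Implements hook_block_info().
 */
function first_block_info() {
  $blocks = array();
  // Declare one block, identified by the delta 'list_modules'.
  $blocks['list_modules'] = array(
    'info' => t('A listing of currently enabled modules'),
    'cache' => DRUPAL_NO_CACHE,
  );
  return $blocks;
}

/**
 * Implements hook_block_view().
 */
function first_block_view($delta = '') {
  $block = array();
  if ($delta == 'list_modules') {
    // module_list() returns the machine names of all enabled modules.
    $items = module_list();
    $block['subject'] = t('Enabled Modules');
    $block['content'] = theme('item_list', array('items' => $items, 'type' => 'ol'));
  }
  return $block;
}

Once the module is enabled, a block declared this way would appear on the Blocks administration page and could be assigned to a region like any other block.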


ColdFusion AJAX Programming

Packt
27 Oct 2009
9 min read
Binding When it comes to programming, the two most commonly used features are CFAJAXProxy and binding. The binding feature allows us to bind or tie things together by using a simpler technique than we would otherwise have needed to create. Binding acts as a double-ended connector in some scenarios. You can set the bind to pull data from another ColdFusion tag on the form. These must be AJAX tags with binding abilities. There are four forms of bindings, on page, CFC, JavaScript, and URL. Let's work through each style so that we will understand them well. We will start with on page binding. Remember that the tag has to support the binding. This is not a general ColdFusion feature, but we can use it wherever we desire. On Page Binding We are going to bind 'cfdiv' to pull its content to show on page binding. We will set the value of a text input to the div. Refer to the following code. ColdFusion AJAX elements work in a manner different from how AJAX is written traditionally. It is more customary to name our browser-side HTML elements with id attributes. This is not the case with the binding features. As we can see in our code example, we have used the name attribute. We should remember to be case sensitive, since this is managed by JavaScript. When we run the code, we will notice that we must leave the input field before the browser registers that there has been a change in the value of the field. This is how the event model for the browser DOM works. <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText"></cfform><hr />And this is the bound div container.<br /><cfdiv bind="{myText}"></cfdiv> Notice how we use curly brackets to bind the value of the 'myText' input box. This inserts the contents into 'div' when the text box loses focus. This is an example of binding to in-page elements. If the binding we use is tied to a hidden window or tab, then the contents may not be updated. CFC Binding Now, we are going to bind our div to a CFC method. We will take the data that was being posted directly to the object, and then we will pass it out to the CFC. The CFC is going to repackage it, and send it back to the browser. The binding will enable the modified version of the content to be sent to the div. Refer to the following CFC code: <cfcomponent output="false"> <cffunction name="getDivContent" returntype="string" access="remote"> <cfargument name="edit"> <cfreturn "This is the content returned from the CFC for the div, the calling page variable is '<strong>#arguments.edit#</strong>'."> </cffunction></cfcomponent> From the previous code, we can see that the CFC only accepts the argument and passes it back. This could have even returned an image HTML segment with something like a user picture. The following code shows the new page code modifications. <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText"></cfform><hr />And this is the bound div container.<br /><cfdiv bind="cfc:bindsource.getDivContent({myText})"></cfdiv> The only change lies in how we bind the cfdiv element tag. Here, you can see that it starts with CFC. Next, it calls bindsource, which is the name of a local CFC. This tells ColdFusion to wire up the browser page, so it will connect to the CFC and things will work as we want. You can observe that inside the method, we are passing the bound variable to the method. When the input field changes by losing focus, the browser sends a new request to the CFC and updates the div. 
We need to have the same number of parameters going to the CFC as the number of arguments in our CFC method. We should also make sure that the method has its access attribute set to remote. Here we can see an example results page. It is valid to pass the name of the CFC method argument along with the data value. This can prevent exceptions caused by not pairing the data in the same order as the method arguments. The last line of the previous code can be modified as follows: <cfdiv bind="cfc:bindsource.getDivContent(edit:{myText})"></cfdiv> JavaScript Binding Now, we will see how the same approach can be handled directly in the browser. We will create a standard JavaScript function and pass the same bound data field through the function. Whenever we update the text box and it loses focus, the contents of the div will be updated from the function on the page. It is generally better to include JavaScript from a separate file rather than put it directly on the page. Refer to the following code: <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText"></cfform><hr />And this is the bound div container.<br /><cfdiv bind="javascript:updateDiv({myText})"></cfdiv><script> updateDiv = function(myEdit){ return 'This is the result that came from the JavaScript function with the edit box sending "<strong>'+myEdit+'</strong>"'; } </script> Here is the result of placing the same text into our JavaScript example. URL Binding We can achieve the same results by calling a web address. We can actually call a static HTML page. Now, we will call a .cfm page to see the results of changing the text box reflected back, as for CFC and JavaScript. Here is the code for our main page with the URL binding. <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText"></cfform><hr />And this is the bound div container.<br /><cfdiv bind="url:bindsource.cfm?myEdit={myText}"></cfdiv> In the above code, we can see that the binding type is set to URL. Earlier, we used the CFC method bound to a file named bindsource.cfc. Now, we will bind through the URL to a .cfm file. The bound myText data will work in a manner similar to the other cases. It will be sent to the target; in this case, it is a regular server-side page. We require only one line. In this example, our variables are URL variables. Here is the handler page code: <cfoutput> 'This is the result that came from the server page with the edit box sending "<strong>#url.myEdit#</strong>"'</cfoutput> This tells us that if there is no prefix on the bind attribute of the <cfdiv> tag, then it will only work with on-page elements. If we prefix it, then we can pass the data through a CFC, a URL, or a JavaScript function present on the same page. If we bind to a variable present on the same page, then whenever the bound element updates, the binding will be executed. Bind with Event One of the features of binding that we might overlook is binding based on an event. In the previous examples, we mentioned that the normal event trigger for binding took place when the bound field lost its focus. The following example shows a bind that occurs when the key is released. <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText"></cfform><hr />And this is the bound div container.<br /><cfdiv bind="{myText@keyup}"></cfdiv> This is similar to our first example, with the only difference being that the contents of the div are updated as each key is released.
This works in a manner similar to CFC, JavaScript, and URL bindings. We might also consider binding other elements on a click event, such as a radio button. The following example shows another feature. We can pass any DOM attribute by putting it as an item after the element id; it must be placed before the @ symbol if you are using a particular event. In this code, we give the input a class so that we can pass the value of the class attribute, and we change the bind attribute of the cfdiv element. <cfform id="myForm" format="html"> This is my edit box.<br /> <cfinput type="text" name="myText" class="test"> </cfform><hr />And this is the bound div container.<br /><cfdiv bind="{myText.class@keyup}.{myText}"></cfdiv> Here is a list of the events that we can bind: @click @keyup @mousedown @none The @none event is used for grids and trees, so that changes don't trigger bind events. Extra Binding Notes If you have an ID on your cfform element, then you can refer to the form element based on the containing form. The following example helps us to understand this better. Bind = "url:bindsource.cfm?myEdit={myForm:myText}" The ColdFusion 8 documentation gives the following formats for specifying binding expressions: 1. cfc: componentPath.functionName (parameters) (the component path cannot use a mapping; the componentPath value must be a dot-delimited path from the web root or the directory that contains the page) 2. javascript: functionName (parameters) 3. url: URL?parameters 4. URL?parameters 5. A string containing one or more instances of {bind parameter}, such as {firstname}.{lastname}@{domain} The following table shows the supported formats for each attribute and tag:

Attribute   | Tags                                                     | Supported Formats
Autosuggest | cfinput type="text"                                      | 1, 2, 3
Bind        | cfdiv, cfinput, cftextarea                               | 1, 2, 3, 5
Bind        | cfajaxproxy, cfgrid, cfselect, cfsprydataset, cftreeitem | 1, 2, 3
onChange    | cfgrid                                                   | 1, 2, 3
Source      | cflayoutarea, cfpod, cfwindow                            | 4
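As a final illustration of the @click event listed above, here is a minimal sketch of binding a div to a pair of radio buttons so that the bound content refreshes as soon as a button is clicked. The form and field names are made up for this example, and the radio binding behavior should be verified against your ColdFusion version:

<cfform id="colorForm" format="html">
  Pick a color:<br />
  <cfinput type="radio" name="myColor" value="red"> Red<br />
  <cfinput type="radio" name="myColor" value="blue"> Blue
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="{myColor@click}"></cfdiv>

Radio buttons do not fire a change event consistently across browsers, so @click is generally the more dependable trigger for them.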

Linux Shell Script: Logging Tasks

Packt
28 Jan 2011
7 min read
Linux Shell Scripting Cookbook Collecting information about the operating environment, logged in users, the time for which the computer has been powered on, and any boot failures is very helpful. This recipe will go through a few commands used to gather information about a live machine. Getting ready This recipe will introduce the commands who, w, users, uptime, last, and lastb. How to do it... To obtain information about users currently logged in to the machine use: $ who slynux   pts/0   2010-09-29 05:24 (slynuxs-macbook-pro.local) slynux   tty7    2010-09-29 07:08 (:0) Or: $ w 07:09:05 up  1:45,  2 users,  load average: 0.12, 0.06, 0.02 USER     TTY     FROM    LOGIN@   IDLE  JCPU PCPU WHAT slynux   pts/0   slynuxs 05:24  0.00s  0.65s 0.11s sshd: slynux slynux   tty7    :0      07:08  1:45m  3.28s 0.26s gnome-session It will provide information about logged in users, the pseudo TTY used by the users, the command that is currently executing from the pseudo terminal, and the IP address from which the users have logged in. If it is localhost, it will show the hostname. who and w format their output with slight differences; the w command provides more detail than who. TTY is the device file associated with a text terminal. When a terminal is newly spawned by the user, a corresponding device is created in /dev/ (for example, /dev/pts/3). The device path for the current terminal can be found out by typing and executing the command tty. In order to list the users currently logged in to the machine, use: $ users slynux slynux slynux hacker If a user has opened multiple pseudo terminals, it will show that many entries for the same user. In the above output, the user slynux has opened three pseudo terminals. The easiest way to print unique users is to use sort and uniq to filter as follows: $ users | tr ' ' '\n' | sort | uniq slynux hacker We have used tr to replace ' ' with '\n'. Then the combination of sort and uniq will produce unique entries for each user. In order to see how long the system has been powered on, use: $ uptime 21:44:33 up  3:17,  8 users,  load average: 0.09, 0.14, 0.09 The time that follows the word up indicates the time for which the system has been powered on. We can write a simple one-liner to extract the uptime only. The load average in uptime's output is a parameter that indicates system load. In order to get information about previous boot and user login sessions, use: $ last slynux tty7         :0              Tue Sep 28 18:27   still logged in reboot system boot 2.6.32-21-generi Tue Sep 28 18:10 - 21:46 (03:35) slynux pts/0      :0.0          Tue Sep 28 05:31 - crash (12:39) The last command will provide information about logged in sessions. It is actually a log of system logins that consists of information such as the tty from which the user logged in, the login time, status, and so on. The last command uses the log file /var/log/wtmp for input log data. It is also possible to explicitly specify the log file for the last command using the -f option.
For example: $ last -f /var/log/wtmp In order to obtain info about login sessions for a single user, use: $ last USER Get information about reboot sessions as follows: $ last reboot reboot system boot 2.6.32-21-generi Tue Sep 28 18:10 - 21:48 (03:37) reboot system boot 2.6.32-21-generi Tue Sep 28 05:14 - 21:48 (16:33) In order to get information about failed user login sessions, use: # lastb test     tty8    :0          Wed Dec 15 03:56 - 03:56  (00:00) slynux tty8    :0          Wed Dec 15 03:55 - 03:55  (00:00) You should run lastb as the root user. Logging access to files and directories Logging of file and directory access is very helpful to keep track of changes that are happening to files and folders. This recipe will describe how to log user accesses. Getting ready The inotifywait command can be used to gather information about file accesses. It doesn't come by default with every Linux distro; you have to install the inotify-tools package by using a package manager. It also requires the Linux kernel to be compiled with inotify support. Most of the new GNU/Linux distributions come with inotify enabled in the kernel. How to do it... Let's walk through the shell script to monitor the directory access: #!/bin/bash #Filename: watchdir.sh #Description: Watch directory access path=$1 #Provide path of directory or file as argument to script inotifywait -m -r -e create,move,delete $path -q A sample output is as follows: $ ./watchdir.sh . ./ CREATE new ./ MOVED_FROM new ./ MOVED_TO news ./ DELETE news How it works... The previous script will log create, move, and delete events for files and folders under the given path. The -m option is given for monitoring the changes continuously rather than exiting after an event happens. -r is given for enabling a recursive watch of the directories. -e specifies the list of events to be watched. -q is given to reduce verbose messages and print only the required ones. This output can be redirected to a log file. We can add events to or remove events from the list; important events available include access, modify, attrib, move, create, and delete (see the inotifywait man page for the complete list). Logfile management with logrotate Logfiles are essential components of a Linux system's maintenance. Logfiles help to keep track of events happening on different services on the system. This helps the sysadmin to debug issues and also provides statistics on events happening on the live machine. Management of logfiles is required because, as time passes, the size of a logfile gets bigger and bigger. Therefore, we use a technique called rotation to limit the size of the logfile: if the logfile reaches a size beyond the limit, it will strip the logfile and store the older entries from the logfile in an archive. Hence older logs can be stored and kept for future reference. Let's see how to rotate logs and store them. Getting ready logrotate is a command every Linux system admin should know. It helps to restrict the size of a logfile to the given SIZE. The logger appends information to the end of the logfile, so the most recent information appears at the bottom. logrotate will scan specific logfiles according to the configuration file. It will keep the last 100 kilobytes (for example, specified SIZE = 100k) of the logfile and move the rest of the data (older log data) to a new file, logfile_name.1, with the older entries. When more entries accumulate and the logfile again exceeds the SIZE, it updates the logfile with the recent entries and creates logfile_name.2 with the older logs.
This process can easily be configured with logrotate. logrotate can also compress the older logs as logfile_name.1.gz, logfile_name.2.gz, and so on. The option for whether older log files are to be compressed or not is available with the logrotate configuration. How to do it... logrotate has its configuration directory at /etc/logrotate.d. If you look at this directory by listing its contents, many other logfile configurations can be found. We can write our custom configuration for our logfile (say /var/log/program.log) as follows: $ cat /etc/logrotate.d/program /var/log/program.log { missingok notifempty size 30k compress weekly rotate 5 create 0600 root root } Now the configuration is complete. /var/log/program.log in the configuration specifies the logfile path. It will archive old logs in the same directory path. Let's see what each of these parameters does; all of the options are optional, and we can specify only the required options in the logrotate configuration file. A commented version of the same configuration follows below. There are numerous options available with logrotate. Please refer to the man pages (http://linux.die.net/man/8/logrotate) for more information on logrotate.
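Here is the same configuration again, with a brief comment above each directive. These are standard logrotate options and the values simply mirror the example above; treat it as a sketch and see man logrotate for the full list:

/var/log/program.log {
    # do not report an error if the logfile is missing
    missingok
    # skip rotation when the logfile is empty
    notifempty
    # rotate once the logfile grows beyond 30 kilobytes
    size 30k
    # gzip the rotated (older) logfiles
    compress
    # rotate on a weekly schedule as well (with size also set, size typically takes precedence)
    weekly
    # keep at most five rotated logfiles, deleting the oldest
    rotate 5
    # recreate the logfile with this mode, owner, and group after rotation
    create 0600 root root
}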


PHP Data Objects: Error Handling

Packt
22 Oct 2009
11 min read
In this article, we will extend our application so that we can edit existing records as well as add new records. As we will deal with user input supplied via web forms, we have to take care of its validation. Also, we may add error handling so that we can react to non-standard situations and present the user with a friendly message. Before we proceed, let's briefly examine the sources of errors mentioned above and see what error handling strategy should be applied in each case. Our error handling strategy will use exceptions, so you should be familiar with them. If you are not, you can refer to Appendix A, which will introduce you to the new object-oriented features of PHP5. We have consciously chosen to use exceptions, even though PDO can be instructed not to use them, because there is one situation where they cannot be avoided. The PDO constructors always throw an exception when the database object cannot be created, so we may as well use exceptions as our main error‑trapping method throughout the code. Sources of Errors To create an error handling strategy, we should first analyze where errors can happen. Errors can happen on every call to the database, and although this is rather unlikely, we will look at this scenario. But before doing so, let's check each of the possible error sources and define a strategy for dealing with them. This can happen on a really busy server, which cannot handle any more incoming connections. For example, there may be a lengthy update running in the background. The outcome is that we are unable to get any data from the database, so we should do the following. If the PDO constructor fails, we present a page displaying a message, which says that the user's request could not be fulfilled at this time and that they should try again later. Of course, we should also log this error because it may require immediate attention. (A good idea would be emailing the database administrator about the error.) The problem with this error is that, while it usually manifests itself before a connection is established with the database (in a call to PDO constructor), there is a small risk that it can happen after the connection has been established (on a call to a method of the PDO or PDO Statement object when the database server is being shutdown). In this case, our reaction will be the same—present the user with an error message asking them to try again later. Improper Configuration of the Application This error can only occur when we move the application across servers where database access details differ; this may be when we are uploading from a development server to production server, where database setups differ. This is not an error that can happen during normal execution of the application, but care should be taken while uploading as this may interrupt the site's operation. If this error occurs, we can display another error message like: "This site is under maintenance". In this scenario, the site maintainer should react immediately, as without correcting, the connection string the application cannot normally operate. Improper Validation of User Input This is an error which is closely related to SQL injection vulnerability. Every developer of database-driven applications must undertake proper measures to validate and filter all user inputs. This error may lead to two major consequences: Either the query will fail due to malformed SQL (so that nothing particularly bad happens), or an SQL injection may occur and application security may be compromised. 
While their consequences differ, both these problems can be prevented in the same way. Let's consider the following scenario. We accept some numeric value from a form and insert it into the database. To keep our example simple, assume that we want to update a book's year of publication. To achieve this, we can create a form that has two fields: A hidden field containing the book's ID, and a text field to enter the year. We will skip implementation details here, and see how using a poorly designed script to process this form could lead to errors and put the whole system at risk. The form processing script will examine two request variables:$_REQUEST['book'], which holds the book's ID and $_REQUEST['year'], which holds the year of publication. If there is no validation of these values, the final code will look similar to this: $book = $_REQUEST['book'];$year = $_REQUEST['year'];$sql = "UPDATE books SET year=$year WHERE id=$book";$conn->query($sql); Let's see what happens if the user leaves the book field empty. The final SQL would then look like: UPDATE books SET year= WHERE id=1; This SQL is malformed and will lead to a syntax error. Therefore, we should ensure that both variables are holding numeric values. If they don't, we should redisplay the form with an error message. Now, let's see how an attacker might exploit this to delete the contents of the entire table. To achieve this, they could just enter the following into the year field: 2007; DELETE FROM books; This turns a single query into three queries: UPDATE books SET year=2007; DELETE FROM books; WHERE book=1; Of course, the third query is malformed, but the first and second will execute, and the database server will report an error. To counter this problem, we could use simple validation to ensure that the year field contains four digits. However, if we have text fields, which can contain arbitrary characters, the field's values must be escaped prior to creating the SQL. Inserting a Record with a Duplicate Primary Key or Unique Index Value This problem may happen when the application is inserting a record with duplicate values for the primary key or a unique index. For example, in our database of authors and books, we might want to prevent the user from entering the same book twice by mistake. To do this, we can create a unique index of the ISBN column of the books table. As every book has a unique ISBN, any attempt to insert the same ISBN will generate an error. We can trap this error and react accordingly, by displaying an error message asking the user to correct the ISBN or cancel its addition. Syntax Errors in SQL Statements This error may occur if we haven't properly tested the application. A good application must not contain these errors, and it is the responsibility of the development team to test every possible situation and check that every SQL statement performs without syntax errors. If this type of an error occurs, then we trap it with exceptions and display a fatal error message. The developers must correct the situation at once. Now that we have learned a bit about possible sources of errors, let's examine how PDO handles errors. Types of Error Handling in PDO By default, PDO uses the silent error handling mode. This means that any error that arises when calling methods of the PDO or PDOStatement classes go unreported. With this mode, one would have to call PDO::errorInfo(), PDO::errorCode(), PDOStatement::errorInfo(), or PDOStatement::errorCode(), every time an error occurred to see if it really did occur. 
Note that this mode is similar to traditional database access—usually, the code calls mysql_errno(),and mysql_error() (or equivalent functions for other database systems) after calling functions that could cause an error, after connecting to a database and after issuing a query. Another mode is the warning mode. Here, PDO will act identical to the traditional database access. Any error that happens during communication with the database would raise an E_WARNING error. Depending on the configuration, an error message could be displayed or logged into a file. Finally, PDO introduces a modern way of handling database connection errors—by using exceptions. Every failed call to any of the PDO or PDOStatement methods will throw an exception. As we have previously noted, PDO uses the silent mode, by default. To switch to a desired error handling mode, we have to specify it by calling PDO::setAttribute() method. Each of the error handling modes is specified by the following constants, which are defined in the PDO class: PDO::ERRMODE_SILENT – the silent strategy. PDO::ERRMODE_WARNING – the warning strategy. PDO::ERRMODE_EXCEPTION – use exceptions. To set the desired error handling mode, we have to set the PDO::ATTR_ERRMODE attribute in the following way: $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); To see how PDO throws an exception, edit the common.inc.php file by adding the above statement after the line #46. If you want to test what will happen when PDO throws an exception, change the connection string to specify a nonexistent database. Now point your browser to the books listing page. You should see an output similar to: This is PHP's default reaction to uncaught exceptions—they are regarded as fatal errors and program execution stops. The error message reveals the class of the exception, PDOException, the error description, and some debug information, including name and line number of the statement that threw the exception. Note that if you want to test SQLite, specifying a non-existent database may not work as the database will get created if it does not exist already. To see that it does work for SQLite, change the $connStr variable on line 10 so that there is an illegal character in the database name: $connStr = 'sqlite:/path/to/pdo*.db'; Refresh your browser and you should see something like this: As you can see, a message similar to the previous example is displayed, specifying the cause and the location of the error in the source code. Defining an Error Handling Function If we know that a certain statement or block of code can throw an exception, we should wrap that code within the try…catch block to prevent the default error message being displayed and present a user-friendly error page. But before we proceed, let's create a function that will render an error message and exit the application. As we will be calling it from different script files, the best place for this function is, of course, the common.inc.php file. Our function, called showError(), will do the following: Render a heading saying "Error". Render the error message. We will escape the text with the htmlspecialchars() function and process it with the nl2br() function so that we can display multi-line messages. (This function will convert all line break characters to tags.) Call the showFooter() function to close the opening and tags. The function will assume that the application has already called the showHeader() function. (Otherwise, we will end up with broken HTML.) 
We will also have to modify the block that creates the connection object in common.inc.php to catch the possible exception. With all these changes, the new version of common.inc.php will look like this: <?php/*** This is a common include file* PDO Library Management example application* @author Dennis Popel*/// DB connection string and username/password$connStr = 'mysql:host=localhost;dbname=pdo';$user = 'root';$pass = 'root';/*** This function will render the header on every page,* including the opening html tag,* the head section and the opening body tag.* It should be called before any output of the/*** This function will 'close' the body and html* tags opened by the showHeader() function*/function showFooter(){?></body></html><?php}/*** This function will display an error message, call the* showFooter() function and terminate the application* @param string $message the error message*/function showError($message){echo "<h2>Error</h2>";echo nl2br(htmlspecialchars($message));showFooter();exit();}// Create the connection objecttry{$conn = new PDO($connStr, $user, $pass);$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);}catch(PDOException $e){showHeader('Error');showError("Sorry, an error has occurred. Please try your requestlatern" . $e->getMessage());} As you can see, the newly created function is pretty straightforward. The more interesting part is the try…catch block that we use to trap the exception. Now with these modifications we can test how a real exception will get processed. To do that, make sure your connection string is wrong (so that it specifies wrong databasename for MySQL or contains invalid file name for SQLite). Point your browser to books.php and you should see the following window:
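Tying this back to the validation discussion earlier in the article, the following sketch shows how the year update could be performed with a prepared statement and bound parameters, so that user input is never interpolated into the SQL string, and how a resulting PDOException would be routed through the showError() function defined above. This is not part of the example application's code; the table and column names are simply the ones used in the books example, and the snippet assumes showHeader() has already been called:

<?php
// $conn is the PDO object created in common.inc.php with ERRMODE_EXCEPTION set
$book = $_REQUEST['book'];
$year = $_REQUEST['year'];
// Basic validation: the ID must be numeric and the year must be four digits
if (!ctype_digit((string)$book) || !preg_match('/^\d{4}$/', (string)$year)) {
    showError('Please supply a numeric book ID and a four-digit year.');
}
try {
    // Placeholders keep the data separate from the SQL text
    $stmt = $conn->prepare('UPDATE books SET year = :year WHERE id = :book');
    $stmt->execute(array(':year' => $year, ':book' => $book));
}
catch(PDOException $e) {
    showError("Sorry, an error has occurred. Please try your request later\n" . $e->getMessage());
}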
REST APIs for social network data using py2neo

Packt
14 Jul 2015
20 min read
In this article, written by Sumit Gupta, author of the book Building Web Applications with Python and Neo4j, we will discuss and develop RESTful APIs for performing CRUD and search operations over our social network data, using the Flask-RESTful extension and the py2neo extension—Object-Graph Model (OGM). Let's move forward to first quickly talk about the OGM and then develop full-fledged REST APIs over our social network data. (For more resources related to this topic, see here.)

ORM for graph databases py2neo – OGM

We discussed py2neo in Chapter 4, Getting Python and Neo4j to Talk Py2neo. In this section, we will talk about one of the py2neo extensions that provides high-level APIs for dealing with the underlying graph database as objects and their relationships. Object-Graph Mapping (http://py2neo.org/2.0/ext/ogm.html) is one of the popular extensions of py2neo and provides the mapping of Neo4j graphs in the form of objects and relationships. It provides functionality and features similar to the Object Relational Model (ORM) available for relational databases.

py2neo.ext.ogm.Store(graph) is the base class which exposes all operations with respect to graph data models. Following are important methods of Store which we will be using in the upcoming section for mutating our social network data:

Store.delete(subj): It deletes a node from the underlying graph along with its associated relationships. subj is the entity that needs to be deleted. It raises an exception in case the provided entity is not linked to the server.
Store.load(cls, node): It loads the data from the database node into cls, which is the entity defined by the data model.
Store.load_related(subj, rel_type, cls): It loads all the nodes related to subj by the relationship defined by rel_type into cls and then returns the cls object.
Store.load_indexed(index_name, key, value, cls): It queries the legacy index, loads all the nodes that are mapped by the key-value pair, and returns the associated object.
Store.relate(subj, rel_type, obj, properties=None): It defines the relationship between two nodes, where subj and obj are the two nodes connected by rel_type. By default, all relationships point towards the right node.
Store.save(subj, node=None): It saves and creates a given entity/node—subj—into the graph database. The second argument is of type Node, which, if given, will not create a new node but will change the already existing node.
Store.save_indexed(index_name, key, value, subj): It saves the given entity into the graph and also creates an entry in the given index for future reference.

Refer to http://py2neo.org/2.0/ext/ogm.html#py2neo.ext.ogm.Store for the complete list of methods exposed by the Store class. Let's move on to the next section where we will use the OGM for mutating our social network data model. OGM supports Neo4j version 1.9, so features of Neo4j 2.0 and above, such as labels, are not supported.

Social network application with Flask-RESTful and OGM

In this section, we will develop a full-fledged application for mutating our social network data and will also talk about the basics of Flask-RESTful and OGM.

Creating object model

Perform the following steps to create the object model and CRUD/search functions for our social network data: Our social network data contains two kinds of entities—Person and Movie.
So as a first step let's create a package model and within the model package let's define a module SocialDataModel.py with two classes—Person and Movie: class Person(object):    def __init__(self, name=None,surname=None,age=None,country=None):        self.name=name        self.surname=surname        self.age=age        self.country=country   class Movie(object):    def __init__(self, movieName=None):        self.movieName=movieName Next, let's define another package operations and two python modules ExecuteCRUDOperations.py and ExecuteSearchOperations.py. The ExecuteCRUDOperations module will contain the following three classes: DeleteNodesRelationships: It will contain one method each for deleting People nodes and Movie nodes and in the __init__ method, we will establish the connection to the graph database. class DeleteNodesRelationships(object):    '''    Define the Delete Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Authenticate and Connect to the Neo4j Graph Database        py2neo.authenticate(host+':'+port, username, password)        graph = Graph('http://'+host+':'+port+'/db/data/')        store = Store(graph)        #Store the reference of Graph and Store.        self.graph=graph        self.store=store      def deletePersonNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('personIndex', 'name', node.name, Person)          #Invoke delete method of store class        self.store.delete(cls[0])      def deleteMovieNode(self,node):        #Load the node from the Neo4j Legacy Index cls = self.store.load_indexed('movieIndex',   'name',node.movieName, Movie)        #Invoke delete method of store class            self.store.delete(cls[0]) Deleting nodes will also delete the associated relationships, so there is no need to have functions for deleting relationships. Nodes without any relationship do not make much sense for many business use cases, especially in a social network, unless there is a specific need or an exceptional scenario. UpdateNodesRelationships: It will contain one method each for updating People nodes and Movie nodes and, in the __init__ method, we will establish the connection to the graph database. 
class UpdateNodesRelationships(object):    '''      Define the Update Operation on Nodes    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def updatePersonNode(self,oldNode,newNode):        #Get the old node from the Index        cls = self.store.load_indexed('personIndex', 'name', oldNode.name, Person)        #Copy the new values to the Old Node        cls[0].name=newNode.name        cls[0].surname=newNode.surname        cls[0].age=newNode.age        cls[0].country=newNode.country        #Delete the Old Node form Index        self.store.delete(cls[0])       #Persist the updated values again in the Index        self.store.save_unique('personIndex', 'name', newNode.name, cls[0])      def updateMovieNode(self,oldNode,newNode):          #Get the old node from the Index        cls = self.store.load_indexed('movieIndex', 'name', oldNode.movieName, Movie)        #Copy the new values to the Old Node        cls[0].movieName=newNode.movieName        #Delete the Old Node form Index        self.store.delete(cls[0])        #Persist the updated values again in the Index        self.store.save_ unique('personIndex', 'name', newNode.name, cls[0]) CreateNodesRelationships: This class will contain methods for creating People and Movies nodes and relationships and will then further persist them to the database. As with the other classes/ module, it will establish the connection to the graph database in the __init__ method: class CreateNodesRelationships(object):    '''    Define the Create Operation on Nodes    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server    '''    Create a person and store it in the Person Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createPerson(self,name,surName=None,age=None,country=None):        person = Person(name,surName,age,country)        return person      '''    Create a movie and store it in the Movie Dictionary.    Node is not saved unless save() method is invoked. Helpful in bulk creation    '''    def createMovie(self,movieName):        movie = Movie(movieName)        return movie      '''    Create a relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createFriendRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'FRIEND', endPerson)      '''    Create a TEACHES relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createTeachesRelationship(self,startPerson,endPerson):        self.store.relate(startPerson, 'TEACHES', endPerson)    '''    Create a HAS_RATED relationships between 2 nodes and invoke a local method of Store class.    Relationship is not saved unless Node is saved or save() method is invoked.    '''    def createHasRatedRelationship(self,startPerson,movie,ratings):      self.store.relate(startPerson, 'HAS_RATED', movie,{'ratings':ratings})    '''    Based on type of Entity Save it into the Server/ database    '''    def save(self,entity,node):        if(entity=='person'):            self.store.save_unique('personIndex', 'name', node.name, node)        else:            self.store.save_unique('movieIndex','name',node.movieName,node) Next we will define other Python module operations, ExecuteSearchOperations.py. 
This module will define two classes, each containing one method for searching Person and Movie node and of-course the __init__ method for establishing a connection with the server: class SearchPerson(object):    '''    Class for Searching and retrieving the the People Node from server    '''      def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchPerson(self,personName):        cls = self.store.load_indexed('personIndex', 'name', personName, Person)        return cls;   class SearchMovie(object):    '''    Class for Searching and retrieving the the Movie Node from server    '''    def __init__(self,host,port,username,password):        #Write code for connecting to server      def searchMovie(self,movieName):        cls = self.store.load_indexed('movieIndex', 'name', movieName, Movie)        return cls; We are done with our data model and the utility classes that will perform the CRUD and search operation over our social network data using py2neo OGM. Now let's move on to the next section and develop some REST services over our data model. Creating REST APIs over data models In this section, we will create and expose REST services for mutating and searching our social network data using the data model created in the previous section. In our social network data model, there will be operations on either the Person or Movie nodes, and there will be one more operation which will define the relationship between Person and Person or Person and Movie. So let's create another package service and define another module MutateSocialNetworkDataService.py. In this module, apart from regular imports from flask and flask_restful, we will also import classes from our custom packages created in the previous section and create objects of model classes for performing CRUD and search operations. Next we will define the different classes or services which will define the structure of our REST Services. The PersonService class will define the GET, POST, PUT, and DELETE operations for searching, creating, updating, and deleting the Person nodes. 
class PersonService(Resource):    '''    Defines operations with respect to Entity - Person    '''    #example - GET http://localhost:5000/person/Bradley    def get(self, name):        node = searchPerson.searchPerson(name)        #Convert into JSON and return it back        return jsonify(name=node[0].name,surName=node[0].surname,age=node[0].age,country=node[0].country)      #POST http://localhost:5000/person    #{"name": "Bradley","surname": "Green","age": "24","country": "US"}    def post(self):          jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key]=jsonData[key]            print(key,' = ',jsonData[key] )        person = createOperation.createPerson(attr['name'],attr['surname'],attr['age'],attr['country'])        createOperation.save('person',person)          return jsonify(result='success')    #POST http://localhost:5000/person/Bradley    #{"name": "Bradley1","surname": "Green","age": "24","country": "US"}    def put(self,name):        oldNode = searchPerson.searchPerson(name)        jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key] = jsonData[key]            print(key,' = ',jsonData[key] )        newNode = Person(attr['name'],attr['surname'],attr['age'],attr['country'])          updateOperation.updatePersonNode(oldNode[0],newNode)          return jsonify(result='success')      #DELETE http://localhost:5000/person/Bradley1    def delete(self,name):        node = searchPerson.searchPerson(name)        deleteOperation.deletePersonNode(node[0])        return jsonify(result='success') The MovieService class will define the GET, POST, and DELETE operations for searching, creating, and deleting the Movie nodes. This service will not support the modification of Movie nodes because, once the Movie node is defined, it does not change in our data model. Movie service is similar to our Person service and leverages our data model for performing various operations. The RelationshipService class only defines POST which will create the relationship between the person and other given entity and can either be another Person or Movie. Following is the structure of the POST method: '''    Assuming that the given nodes are already created this operation    will associate Person Node either with another Person or Movie Node.      
Request for Defining relationship between 2 persons: -        POST http://localhost:5000/relationship/person/Bradley        {"entity_type":"person","person.name":"Matthew","relationship": "FRIEND"}    Request for Defining relationship between Person and Movie        POST http://localhost:5000/relationship/person/Bradley        {"entity_type":"Movie","movie.movieName":"Avengers","relationship": "HAS_RATED"          "relationship.ratings":"4"}    '''    def post(self, entity,name):        jsonData = request.get_json(cache=False)        attr={}        for key in jsonData:            attr[key]=jsonData[key]            print(key,' = ',jsonData[key] )          if(entity == 'person'):            startNode = searchPerson.searchPerson(name)            if(attr['entity_type']=='movie'):                endNode = searchMovie.searchMovie(attr['movie.movieName'])                createOperation.createHasRatedRelationship(startNode[0], endNode[0], attr['relationship.ratings'])                createOperation.save('person', startNode[0])            elif (attr['entity_type']=='person' and attr['relationship']=='FRIEND'):                endNode = searchPerson.searchPerson(attr['person.name'])                createOperation.createFriendRelationship(startNode[0], endNode[0])                createOperation.save('person', startNode[0])            elif (attr['entity_type']=='person' and attr['relationship']=='TEACHES'):                endNode = searchPerson.searchPerson(attr['person.name'])                createOperation.createTeachesRelationship(startNode[0], endNode[0])                createOperation.save('person', startNode[0])        else:            raise HTTPException("Value is not Valid")          return jsonify(result='success') At the end, we will define our __main__ method, which will bind our services with the specific URLs and bring up our application: if __name__ == '__main__':    api.add_resource(PersonService,'/person','/person/<string:name>')    api.add_resource(MovieService,'/movie','/movie/<string:movieName>')    api.add_resource(RelationshipService,'/relationship','/relationship/<string:entity>/<string:name>')    webapp.run(debug=True) And we are done!!! Execute our MutateSocialNetworkDataService.py as a regular Python module and your REST-based services are up and running. Users of this app can use any REST-based clients such as SOAP-UI and can execute the various REST services for performing CRUD and search operations. Follow the comments provided in the code samples for the format of the request/response. In this section, we created and exposed REST-based services using Flask, Flask-RESTful, and OGM and performed CRUD and search operations over our social network data model. Using Neomodel in a Django app In this section, we will talk about the integration of Django and Neomodel. Django is a Python-based, powerful, robust, and scalable web-based application development framework. It is developed upon the Model-View-Controller (MVC) design pattern where developers can design and develop a scalable enterprise-grade application within no time. We will not go into the details of Django as a web-based framework but will assume that the readers have a basic understanding of Django and some hands-on experience in developing web-based and database-driven applications. Visit https://docs.djangoproject.com/en/1.7/ if you do not have any prior knowledge of Django. 
Django provides various signals or triggers that are activated and used to invoke or execute some user-defined functions on a particular event. The framework invokes various signals or triggers if there are any modifications requested to the underlying application data model such as pre_save(), post_save(), pre_delete, post_delete, and a few more. All the functions starting with pre_ are executed before the requested modifications are applied to the data model, and functions starting with post_ are triggered after the modifications are applied to the data model. And that's where we will hook our Neomodel framework, where we will capture these events and invoke our custom methods to make similar changes to our Neo4j database. We can reuse our social data model and the functions defined in ExploreSocialDataModel.CreateDataModel. We only need to register our event and things will be automatically handled by the Django framework. For example, you can register for the event in your Django model (models.py) by defining the following statement: signals.pre_save.connect(preSave, sender=Male) In the previous statement, preSave is the custom or user-defined method, declared in models.py. It will be invoked before any changes are committed to entity Male, which is controlled by the Django framework and is different from our Neomodel entity. Next, in preSave you need to define the invocations to the Neomodel entities and save them. Refer to the documentation at https://docs.djangoproject.com/en/1.7/topics/signals/ for more information on implementing signals in Django. Signals in Neomodel Neomodel also provides signals that are similar to Django signals and have the same behavior. Neomodel provides the following signals: pre_save, post_save, pre_delete, post_delete, and post_create. Neomodel exposes the following two different approaches for implementing signals: Define the pre..() and post..() methods in your model itself and Neomodel will automatically invoke it. For example, in our social data model, we can define def pre_save(self) in our Model.Male class to receive all events before entities are persisted in the database or server. Another approach is using Django-style signals, where we can define the connect() method in our Neomodel Model.py and it will produce the same results as in Django-based models: signals.pre_save.connect(preSave, sender=Male) Refer to http://neomodel.readthedocs.org/en/latest/hooks.html for more information on signals in Neomodel. In this section, we discussed about the integration of Django with Neomodel using Django signals. We also talked about the signals provided by Neomodel and their implementation approach. Summary Here we learned about creating web-based applications using Flask. We also used Flasks extensions such as Flask-RESTful for creating/exposing REST APIs for data manipulation. Finally, we created a full blown REST-based application over our social network data using Flask, Flask-RESTful, and py2neo OGM. We also learned about Neomodel and its various features and APIs provided to work with Neo4j. We also discussed about the integration of Neomodel with the Django framework. Resources for Article: Further resources on this subject: Firebase [article] Developing Location-based Services with Neo4j [article] Learning BeagleBone Python Programming [article]
Customizing Layout with Themes in PHP-Nuke

Packt
31 Mar 2010
23 min read
Creating a PHP-Nuke theme gives your site its own special look, distinguishing it from other PHP-Nuke-created sites and offers an effective outlet for your creative talents. Creating a theme requires some knowledge of HTML, confidence in working with CSS and PHP, but most important is some imagination and creativity! Unlike the tasks we have tackled in previous articles, where we have been working exclusively through a web browser to control and configure PHP-Nuke, working with themes is the start of a new era in your PHP-Nuke skills; editing the code files of PHP-Nuke itself. Fortunately, the design of PHP-Nuke means that our theme work won't be tampering with the inner workings of PHP-Nuke. However, becoming confident in handling the mixture of HTML and PHP code that is a PHP-Nuke theme will prepare you for the more advanced work ahead, when we really get to grips with PHP-Nuke at the code level. In this article, we will look at: Theme management Templates in themes Changing the page header Working with the stylesheet Changing blocks Changing the format of stories What Does a Theme Control? Despite the fact that we say 'themes control the look and feel of your site', a theme does not determine every aspect of the page output. PHP-Nuke is an incredibly versatile application, but it cannot produce every website imaginable. Appearance First of all, the appearance of the page can be controlled through the use of colors, fonts, font sizes, weights, and so on. This can either be done through the use of CSS styles or HTML. You can also add JavaScript for fancier effects, or even Flash animations, Java applets, or sounds—anything that you can add to a standard HTML page. Graphical aspects of the page such as the site banner, background images, and so on, are under the care of the theme. There are also some modules that allow their standard graphical icons to be overridden with images from a theme. Page Layout Roughly speaking, a PHP-Nuke page consists of three parts; the top bit, the bit in the middle, and the bit at the bottom! The top bit—the header—usually contains a site logo and such things as a horizontal navigation bar for going directly to important parts of your site. The bottom bit—the footer—contains the copyright message. In between the header and the footer, the output is usually divided into three columns. The left-hand column typically contains blocks, displayed one of top each other, the middle column contains the module output, and the right-hand column contains more blocks. The layout of these columns (their width for example) is controlled by the theme. You may have noticed that the right-hand column is generally only displayed on the homepage of a PHP-Nuke site; this too, is something that is controlled by the theme. The appearance of the blocks is controlled by the theme; PHP-Nuke provides the title of the block and its content, and the theme will generally 'frame' these to produce the familiar block look. The theme also determines how the description of stories appears on the homepage. In addition, the theme determines how the full text of the story, its extended text, is displayed. We've talked about how the theme controls the 'look' of things. The theme also allows you to add other site-related data to your page; for example the name of the site can appear, and the site slogan, and you can even add such things as the user's name with a friendly welcome message. 
Theme Management Basically, a theme is a folder that sits inside the themes folder in your PHP-Nuke installation. Different themes correspond to different folders in the themes folder, and adding or removing a theme is as straightforward as adding or removing the relevant folder from the themes folder. By default, you will find around 14 themes in a standard PHP-Nuke installation. DeepBlue is the default theme. Themes can be chosen in one of two ways: By the administrator: You can simply select the required theme from the General Site Info panel of the Site Preferences administration menu and save the changes. The theme selected by the administrator is the default theme for the site and will be seen by all users of the site, registered or unregistered. By the user: Users can override the default theme set by the administrator from the Themes option of the Your Account module. This sets a new, personal, theme that will be displayed to that user. Note that this isn't a theme especially customized for that user; it is just one chosen from the list of standard themes installed on your site. Unregistered visitors do not have an option to choose a theme; they have to become registered users. Theme File Structure Let's start with the default theme, DeepBlue. If you open up the DeepBlue folder within the themes folder in the root of your PHP-Nuke installation, you will find three folders and two files. The three folders are: forums: This folder contains the theme for the Forums module. This is not strictly a requirement of a PHP-Nuke theme, and not every PHP-Nuke theme has a forums theme. The Forums module (otherwise known as phpBB) has its own theme 'engine'. The purpose of including a theme for the forums is that you have consistency between the rest of your PHP-Nuke display and the phpBB display. images: This folder contains the image files used by your theme. These include the site logo, background images, and graphics for blocks among others. As mentioned earlier, within this folder can be other folders containing images to override the standard icons. style: This folder contains the CSS files for your theme. Usually, there is one CSS file in the style folder, style.css. Each theme will make use of its style.css file, and this is the file into which we will add our style definitions when the time comes. Of the two files, index.html is simply there to prevent people browsing to your themes folder and seeing what it contains; visiting this page in a browser simply produces a blank page. It is a very simple security measure. The themes.php file is a PHP code file, and is where all the action happens. This file must always exist within a theme folder. We will concentrate on this file later when we customize the theme. In other themes you will find more files; we will look at these later. Installing a New Theme Installing and uninstalling themes comes down to adding or removing folders from the themes folder, and whenever a list of available themes is presented, either in the Site Preferences menu or the Your Accounts module, PHP-Nuke refreshes this list by getting the names of the folders in the themes folder. You will find a huge range of themes on the Web. For example, there is a gallery of themes at: http://nukecops.com/modules.php?set_albumName=packs&op=modload&name=Gallery& file=index&include=view_album.php Many of these are themes written for older versions of PHP-Nuke, but most are still compatible with the newer releases. 
There is also a live demonstration of some themes at: http://www.portedmods.com/styles/ On this page you can select the new theme and see it applied immediately, before you download it. Removing an Existing Theme To remove a theme from your PHP-Nuke site you simply remove the corresponding folder from the themes folder, and it will no longer be available to PHP-Nuke. However, you should be careful when removing themes—what if somebody is actually using that theme? If a user has that theme selected as their personal theme, and you remove that theme, then that user's personal theme will revert to the default theme selected in Site Preferences. If you remove the site's default theme, then you will break your site! Deleting the site's default theme will produce either a blank screen or messages like the following when you attempt to view your site. Warning: head(themes/NonExistentTheme/theme.php)[function.head]: failed to create stream:No such file or directory in c:nukehtmlheader.php on line 31 The only people who can continue to use your site in this situation are those who have selected a personal theme for themselves—and only if that theme is still installed. To correct such a faux pas, make a copy of one of the other themes in your themes folder (unless you happen to have a copy of the theme you just deleted elsewhere), and rename it to the name of the theme you just deleted. In conclusion, removing themes should only be a problem if you somehow manage to remove your site's default theme. For users who have selected the theme you just removed, their theme will revert to the default theme and life goes on for them. A final caveat about the names of theme folders; do not use spaces in the names of the folders in the themes folder—this can lead to strange behavior when the list of themes is displayed in the drop-down menus for users to select from. From an Existing Theme to a New Theme We'll create a new theme for the Dinosaur Portal by making changes to an existing theme. This will not only make you feel like the theme master, but it will also serve to illustrate the nature of the theme-customization problem. We'll be making changes all over the place—adding and replacing things in HTML and PHP files—but it will be worth it. Another thing to bear in mind is that we're creating a completely different looking site without making any changes to the inner parts of PHP-Nuke. At this point, all we are changing is the theme definition. The theme for the Dinosaur Portal will have a warm, tropical feel to it to evoke the atmosphere of a steaming, tropical, prehistoric jungle, and will use lots of orange color on the page. First of all, we need a theme on which to conduct our experiments. We'll work on the 3D-Fantasy theme. Starting Off The first thing we will do is to create a new theme folder, which will be a copy of the 3D-Fantasy theme. Open up the themes folder in your file explorer, and create a copy of the 3D-Fantasy folder. Rename this copy as TheDinosaurPortal. Now log into your site as testuser, and from the Your Account module, select TheDinosaurPortal as the theme. Your site will immediately switch to this theme, but it will look exactly like 3D-Fantasy, because, at the moment, it is! You will also need some images from the code download for this article; you will find them in the SiteImages folder of this article's code. Replacing Traces of the Old Theme The theme that we are about to work on has many occurrences of 3D-Fantasy in a number of files, such as references to images. 
We will have to remove these first of all, or else our new theme will be looking in the wrong folder for images and other resources. Open each of the files below in your text editor, and replace every occurrence of 3D-Fantasy with TheDinosaurPortal in a text editor, we'll use Wordpad. "You can use the replace functionality of your editor to do this. For example, in Wordpad, select Edit | Replace; enter the text to be replaced, and then click on Replace All to replace all the occurrences in the open file. After making all the changes, save each file: blocks.html footer.html header.html story_home.html story_page.html theme.php tables.php Templates and PHP Files We've just encountered two types of file in the theme folder—PHP code files (theme.php and tables.php) and HTML files (blocks.html, footer.html, and so on). Before we go any further, we need to have a quick discussion of what roles these types of file play in the theme construction. PHP Files The PHP files do the main work of the theme. These files contain the definitions of some functions that handle the display of the page header and how an individual block or article is formatted, among other tasks. These functions are called from other parts of PHP-Nuke when required. We'll talk about them when they are required later in the article. Part of our customization work will be to make some changes to these functions and have them act in a different way when called. Historically, the code for a PHP-Nuke theme consisted of a single PHP file, theme.php. One major drawback of this was the difficulty you would have in editing this file in the 'design' view of an HTML editor. Instead of seeing the HTML that you wished to edit, you probably wouldn't see anything in the 'design' view of most HTML editors, since the HTML was inextricably intertwined with the PHP code. This made creating a new theme, or even editing an existing theme, not something for the faint-hearted—you had to be confident with your PHP coding to make sure you were changing the right places, and in the right way. The theme.php file consists of a number of functions that are called from other parts of PHP-Nuke when required. These functions are how the theme does its work. One of the neat appearances in recent versions of PHP-Nuke is the use of a 'mini-templating' engine for themes. Not all themes make use of this method (DeepBlue is one theme that doesn't), and that is one of the reasons we are working with 3D-Fantasy as our base theme, since it does follow the 'templating' model. Templates The HTML files that we modified above are the theme templates. They consist of HTML, without any PHP code. Each template is responsible for a particular part of the page, and is called into action by the functions of the theme when required. One advantage of using these templates is that they can be easily edited in visual HTML editors, such as Macromedia's Dreamweaver, without any PHP code to interfere with the page design. Another advantage of using these templates is to separate logic from presentation. The idea of a template is that it should determine how something is displayed (its presentation). The template makes use of some data supplied to it, but acquiring and choosing this data (the logic) is not done in the template. The template is processed or evaluated by the 'template engine', and output is generated. The template engine in this case is the theme.php file. 
To see how the template and PHP-Nuke 'communicate', let's look at an extract from the header.html file in the 3D-Fantasy folder:

<a href="index.php"> <img src="themes/3D-Fantasy/images/logo.gif" border="0" alt="Welcome to $sitename" align="left"></a>

The $sitename text (shown highlighted) is an example of what we'll call a placeholder. There is a correspondence between these placeholders and PHP variables that have the same name as the placeholder text. Themes that make use of this templating process more or less replace any text beginning with $ in the template by the value of the corresponding PHP variable. This means that you can make use of variables from PHP-Nuke itself in your themes; these could be the name of your site ($sitename), your site slogan, or even information about the user. In fact, you can add your own PHP code to create a new variable, which you can then display from within one of the templates.

To complete the discussion, we will look at how the templates are processed in PHP-Nuke. The code below is a snippet from the themeheader() function in the theme.php file. This particular snippet is taken from the 3D-Fantasy theme.

function themeheader() {
    global $user, $banners, $sitename, $slogan, $cookie, $prefix, $anonymous, $db;
    ... code continues ...
    $tmpl_file = "themes/3D-Fantasy/header.html";
    $thefile = implode("", file($tmpl_file));
    $thefile = addslashes($thefile);
    $thefile = "\$r_file=\"".$thefile."\";";
    eval($thefile);
    print $r_file;
    ... code continues ...

The processing starts with the line where the $tmpl_file variable is defined. This variable is set to the file name of the template to be processed, in this case header.html. The next line grabs the content of the file as a string. Let's suppose the header.html file contained the text You're welcomed to $sitename, thanks for coming!. Then, continuing in the code above, the $thefile variable would eventually hold this:

$r_file = "You're welcomed to $sitename, thanks for coming!";

This looks very much like a PHP statement, and that is exactly what PHP-Nuke is attempting to create. The eval() function executes the statement; it defines the variable $r_file as above. This is equivalent to putting this line straight into the code:

$r_file = "You're welcomed to $sitename, thanks for coming!";

If this line were in the PHP code, the value of the $sitename variable would be inserted into the string, and this is exactly how the placeholders in the templates are replaced with the values of the corresponding PHP variables. This means that the placeholders in templates can only use variables accessible at the point in the code where the template is processed with the eval() function. This means any parameters passed to the function at the time—global variables that have been announced with the global statement or any variables local to the function that have been defined before the line with the eval() function. This does mean that you will have to study the function processing the template to see what variables are available. In the examples in this article we'll look at the most relevant variables.

The templates do not allow for any form of 'computation' within them; you cannot use loops or call PHP functions. You do your computations 'outside' the template in the theme.php file, and the results are 'pulled' into the template and displayed from there. Now that we're familiar with what we're going to be working with, let's get started.
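As a quick aside, the eval()-based substitution described above can be reproduced outside PHP-Nuke in a few lines of plain PHP, which may make the mechanism easier to picture. This is only a sketch, not PHP-Nuke's own code; the greeting.txt file name and the $output variable are made up for illustration:

// Standalone sketch of the same eval-based placeholder substitution.
// greeting.txt is a hypothetical file containing, say: Welcome to $sitename!
$sitename = 'The Dinosaur Portal';
$template = file_get_contents('greeting.txt');
$template = addslashes($template);            // protect any quotes in the template
eval("\$output = \"" . $template . "\";");    // builds: $output = "Welcome to $sitename!";
print $output;                                // prints: Welcome to The Dinosaur Portal!

Because the template text ends up inside a double-quoted PHP string, every $placeholder in it is replaced by the matching PHP variable when eval() runs—exactly what themeheader() does with header.html.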
Changing the Page Header

The first port of call will be creating a new version of the page header. We will make these customizations:

Changing the site logo graphic
Changing the layout of the page header
Adding a welcome message to the user, and displaying the user's avatar
Adding a drop-down list of topics to the header
Creating a navigation bar

Time For Action—Changing the Site Logo Graphic

Grab the logo.gif file from the SiteImages folder in the code download. Copy it to the themes/TheDinosaurPortal/images folder, overwriting the existing logo.gif file. Refresh the page in your browser. The logo will have changed!

What Just Happened?

The logo.gif file in the images folder is the site logo. We replaced it with a new banner, and immediately the change came into effect.

Time For Action—Changing the Site Header Layout

In the theme folder is a file called header.html. Open up this file in a text editor; we'll use Wordpad. Replace all the code in this file with the following:

<!-- Time For Action—Changing the Site Header Layout -->
<table border="0" cellspacing="0" cellpadding="6" width="100%" bgcolor="#FFCC33">
 <tr valign="middle">
  <td width="60%" align="right" rowspan="2">
   <a href="index.php"><img src="themes/$GLOBALS[ThemeSel]/images/logo.gif" border="1" alt="Welcome to $sitename"></a></td>
  <td width="40%" colspan="2">
   <p align="center"><b>WELCOME TO $sitename!</b></td>
 </tr>
 <tr>
  <td width="20%">GAP</td>
  <td width="20%">GAP</td>
 </tr>
</table>
<!-- End of Time for Action -->
$public_msg
<br>
<table cellpadding="0" cellspacing="0" width="99%" border="0" align="center" bgcolor="#ffffff">
<tr><td bgcolor="#ffffff" valign="top">

Save the header.html file. Refresh your browser. The site header now looks like this:

What Just Happened?

The header.html file is the template responsible for formatting the site header. Changing this file will change the format of your site header. We simply created a table that displays the site logo in the left-hand column, a welcome message in the right-hand column, and under that, two GAPs that we will add more to in a moment. We set the background color of the table to an orange color (#FFCC33). We used the $sitename placeholder to display the name of the site from the template.

Note that everything after the line:

<!-- End of Time for Action -->

in our new header.html file is from the original file. (The characters here denote an HTML comment that is not displayed in the browser.) This is because the end of the header.html file starts a new table that will continue in other templates. If we had removed these lines, the page output would have been broken.

There was another interesting thing we used in the template, the $GLOBALS[ThemeSel] placeholder:

<a href="index.php"><img src="themes/$GLOBALS[ThemeSel]/images/logo.gif"

ThemeSel is a global variable that holds the name of the current theme—it's either the default site theme or the user's chosen theme. Although it's a global variable, using just $ThemeSel in the template would give a blank; this is because it has not been declared as global by the function in PHP-Nuke that consumes the header.html template. However, all the global variables can be accessed through the $GLOBALS array, and using $GLOBALS[ThemeSel] accesses this particular global variable. Note that this syntax is different from the way you may usually access elements of the $GLOBALS array in PHP. You might use $GLOBALS['ThemeSel'] or $GLOBALS["ThemeSel"]. Neither of these work in the template so we have to use the form without the ' or ".
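The same eval() mechanism lets us feed our own values into a template, which is one way to approach the welcome message listed at the start of this section. Below is a minimal sketch of the idea, not the finished code for this theme: the $welcome_msg variable name is our own invention, and the snippet simply assumes it is added to themeheader() in theme.php before the template is evaluated (it only uses $sitename and $slogan, which that function already declares as global):

// Sketch only: build a custom placeholder before header.html is processed.
// $welcome_msg is a made-up name; any variable defined before the eval() call
// becomes available to the template as a placeholder.
$welcome_msg = "Welcome to $sitename";
if ($slogan != "") {
    $welcome_msg .= " - $slogan";
}

With this in place, the placeholder $welcome_msg could be used in header.html, for example instead of the plain WELCOME TO $sitename! cell, and later extended with the visitor's user name once that information is available in themeheader().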
Time For Action—Fixing and Adding the Topics List Next we'll add the list of topics as a drop-down box to the page header. The visitor will be able to select one of the topics from the box, and then the list of stories from that topic will be displayed to them through the News module. Also, the current topic will be selected in the drop-down box to avoid confusion. This task involves fixing some bugs in the current version of the 3D-Fantasy theme. First of all, open the theme.php file and find the following line in the themeheader() function definition: $topics_list = "<select name="topic" onChange='submit()'>n"; Replace this line with these two lines: global $new_topic;$topics_list = "<select name="new_topic" onChange='submit()'>n"; If you move a few lines down in the themeheader() function, you will find this line: if ($topicid==$topic) { $sel = "selected "; } Replace $topic with $new_topic in this line to get: if ($topicid==$new_topic) { $sel = "selected "; } Save the theme.php file. Open the header.html file in your text editor, and where the second GAP is, make the modifications as shown below: <td width="20%">GAP</td> <td width="20%"><form action="modules.php?name=News&new_topic" method="post"> Select a Topic:<br>$topics_list</select></form></td></tr></table><!-- End of Time for Action --> Save the header.html file. Refresh your browser. You will see the new drop-down box in your page header: What just Happened? The themeheader() function is the function in theme.php responsible for processing the header.html template, and outputting the page header. The $topics_list variable has already been created for us in the themeheader() function, and can be used from the header.html template. It is a string of HTML that defines an HTML select drop-down list consisting of the topic titles. However, the first few steps require us to make a change to the $topics_list variable, correcting the name of the select element and also using the correct variable to ensure the current topic (if any) is selected in the drop-down box. The select element needs to have the name of new_topic, so that the News module is able to identify which topic we're after. This is all done with the changes to the theme.php file. First, we add the global statement to access the $new_topic variable, before correcting the name of the select element: global $new_topic;$topics_list = "<select name="new_topic" onChange='submit()'>n"; The next change we made is to make sure we are looking for the $new_topic variable, not the $topic variable, which isn't even defined: if ($topicid==$new_topic) { $sel = "selected "; } Now the $topics_list variable is corrected, all we have to do is add a placeholder for this variable to the header.html template, and some more HTML around it. We added the placeholder for $topics_list to display the drop-down list, and a message to go with it encouraging the reader to select a topic into one of the GAP table cells we created in the new-look header. The list of topics will be contained in a form tag, and when the user selects a topic, the form will be posted back to the server to the News module, and the stories in the selected topic will be displayed. (The extra HTML that handles submitting the form is contained with the $topics_list variable.) 
<form action="modules.php?name=News" method="post">Select a Topic:<br>$topics_list All that remains now is to close the select tag—the tag was opened in the $topics_list variable but not closed—and then close the form tag: </select></form> When the page is displayed, this is the HTML that PHP-Nuke produces for the topics drop-down list: <form action="modules.php?name=News&new_topic" method="post">Select a Topic:<br><select name="topic" onChange='submit()'><option value="">All Topics</option><option value="1">The Dinosaur Portal</option><option value="2">Dinosuar Hunting</option></select></form>
Personalizing Vim

Packt
14 May 2010
12 min read
(Read more interesting articles on Hacking Vim 7.2 here.) Some of these tasks contain more than one recipe because there are different aspects for personalizing Vim for that particular task. It is you, the reader, who decides which recipes (or parts of it) you would like to read and use. Before we start working with Vim, there are some things that you need to know about your Vim installation, such as where to find the configuration files. Where are the configuration files? When working with Vim, you need to know a range of different configuration files. The location of these files is dependent on where you have installed Vim and the operating system that you are using. In general, there are three configuration files that you must know where to find: vimrc gvimrc exrc The vimrc file is the main configuration file for Vim. It exists in two versions—global and personal. The global vimrc file is placed in the folder where all of your Vim system files are installed. You can find out the location of this folder by opening Vim and executing the following command in normal mode: :echo $VIM The examples could be: Linux: /usr/share/vim/vimrc Windows: c:program filesvimvimrc The personal vimrc file is placed in your home directory. The location of the home directory is dependent on your operating system. Vim was originally meant for Unixes, so the personal vimrc file is set to be hidden by adding a dot as the first character in the filename. This normally hides files on Unix, but not on Microsoft Windows. Instead, the vimrc file is prepended with an underscore on these systems. So, examples would be: Linux: /home/kim/.vimrc Windows: c:documents and settingskim_vimrc Whatever you change in the personal vimrc file will overrule any previous setting made in the global vimrc file. This way you can modify the entire configuration without having to ever have access to the global vimrc file. You can find out what Vim considers as the home directory on your system by executing the following command in normal mode: :echo $HOME Another way of finding out exactly which vimrc file you use as your personal file is by executing the following command in the normal mode: :echo $MYVIMRC The vimrc file contains ex (vi predecessor) commands, one on each line, and is the default place to add modifications to the Vim setup. In the rest of the article, this file is just called vimrc. Your vimrc can use other files as an external source for configurations. In the vimrc file, you use the source command like this: source /path/to/external/file Use this to keep the vimrc file clean, and your settings more structured. (Learn more about how to keep your vimrc clean in Appendix B, Vim Configuration Alternatives). The gvimrc file is a configuration file specifically for Gvim. It resembles the vimrc file previously described, and is placed in the same location as a personal version as well as a global version. For example: Linux: /home/kim/.gvimrc and /usr/share/vim/gvimrc Windows: c:documents and settingskim_gvimrc and c:program filesvimgvimrc This file is used for GUI-specific settings that only Gvim will be able to use. In the rest of the article, this file is called gvimrc. The gvimrc file does not replace the vimrc file, but is simply used for configurationsthat only apply to the GUI version of Vim. In other words, there is no need to haveyour configurations duplicated in both the vimrc file and the gvimrc file. The exrc file is a configuration file that is only used for backwards compatibility with the old vi / ex editor. 
It is placed at the same location (both global and local) as vimrc, and is used the same way. However, it is hardly used anymore except if you want to use Vim in a vi-compatible mode.

Changing the fonts

In regular Vim, there is not much to do when it comes to changing the font because the font follows the one used by the terminal. In Gvim, however, you are given the ability to change the font as much as you like. The main command for changing the font in Linux is:

:set guifont=Courier\ 14

Here, Courier can be exchanged with the name of any font that you have, and 14 with any font size you like (size in points—pt). For changing the font in Windows, use the following command:

:set guifont=Courier:h14

If you are not sure about whether a particular font is available on the computer or not, you can add another font at the end of the command by adding a comma between the two fonts. For example:

:set guifont=Courier\ New\ 12,Arial\ 10

If the font name contains a whitespace or a comma, you will need to escape it with a backslash. For example:

:set guifont=Courier\ New\ 12

This command sets the font to Courier New size 12, but only for this session. If you want to have this font every time you edit a file, the same command has to be added to your gvimrc file (without the : in front of set). In Gvim on Windows, Linux (using GTK+), Mac OS, or Photon, you can get a font selection window shown if you use this command:

:set guifont=*

If you tend to use a lot of different fonts depending on what you are currently working with (code, text, logfiles, and so on), you can set up Vim to use the correct font according to the file type. For example, if you want to set the font to Arial size 12 every time a normal text file (.txt) is opened, this can be achieved by adding the following line to your vimrc file:

autocmd BufEnter *.txt set guifont=Arial\ 12

The window of Gvim will resize itself every time the font is changed. This means, if you use a smaller font, you will also (as a default) have a smaller window. You will notice this right away if you add several different file type commands like the one previously mentioned, and then open some files of different types. Whenever you switch to a buffer with another file type, the font will change, and hence the window size too. You can find more information about changing fonts in the Vim help system under Help | guifont.

Changing color scheme

Often, when working in a console environment, you only have a black background and white text in the foreground. This is, however, both dull and dark to look at. Some colors would be desirable. As a default, you have the same colors in the console Vim as in the console you opened it from. However, Vim has given its users the opportunity to change the colors it uses. This is mostly done with a color scheme file. These files are usually placed in a directory called colors wherever you have installed Vim. You can easily change the installed color schemes with the command:

:colorscheme mycolors

Here, mycolors is the name of one of the installed color schemes. If you don't know the names of the installed color schemes, you can place the cursor after writing:

:colorscheme

Now, you can browse through the names by pressing the Tab key. When you find the color scheme you want, you can press the Enter key to apply it. The color scheme not only applies to the foreground and background color, but also to the way code is highlighted, how errors are marked, and other visual markings in the text.
You will find that some color schemes are very alike and only minor things have changed. The reason for this is that the color schemes are user supplied. If some user did not like one of the color settings in a scheme, he or she could just change that single setting and re-release the color scheme under a different name. Play around with the different color schemes and find the one you like. Now, test it in the situations where you would normally use it and see if you still like all the color settings. While learning Basic Vim Scripting, we will get back to how you can change a color scheme to fit your needs perfectly.

Personal highlighting

In Vim, the feature of highlighting things is called matching. With matching, you can make Vim mark almost any combination of letters, words, numbers, sentences, and lines. You can even select how it should be marked (errors in red, important words in green, and so on). Matching is done with the following command:

:match Group /pattern/

The command has two arguments. The first one is the name of the color group that you will use in the highlight. Compared to a color scheme, which affects the entire color setup, a color group is a rather small combination of background (or foreground) colors that you can use for things such as matches. When Vim is started, a wide range of color groups are set to default colors, depending on the color scheme you have selected. To see a complete list of color groups, use the command:

:so $VIMRUNTIME/syntax/hitest.vim

The second argument is the actual pattern you want to match. This pattern is a regular expression and can vary from being very simple to extremely complex, depending on what you want to match. A simple example of the match command in use would be:

:match ErrorMsg /^Error/

This command looks for the word Error at the beginning of all lines (the start of the line being marked with a ^). If a match is found, it will be marked with the colors in the ErrorMsg color group (typically white text on red background). If you don't like any of the available color groups, you can always define your own. The command to do this is as follows:

:highlight MyGroup ctermbg=red guibg=red ctermfg=yellow guifg=yellow term=bold

This command creates a color group called MyGroup with a red background and yellow text, in both the console (Vim) and the GUI (Gvim). You can change the following options according to your preferences:

ctermbg – Background color in console
guibg – Background color in Gvim
ctermfg – Text color in console
guifg – Text color in Gvim
gui – Font formatting in Gvim
term – Font formatting in console (for example, bold)

If you use the name of an existing color group, you will alter that group for the rest of the session. When using the match command, the given pattern will be matched until you perform a new match or execute the following command:

:match NONE

The match command can only match one pattern at a time, so Vim has provided you with two extra commands to match up to three patterns at a time. The commands are easy to remember because their names resemble those of the match command:

:2match
:3match

You might wonder what all this matching is good for, as it can often seem quite useless. Here are a few examples to show the strength of matching.

Example 1: Mark color characters after a certain column

In mails, it is a common rule that you do not write lines more than 74 characters (a rule that also applies to some older programming languages such as Fortran-77).
In a case like this, it would be nice if Vim could warn you when you reached this specific number of characters. This can simply be done with the following command:

:match ErrorMsg /\%>73v.\+/

Here, every character after the 73rd character will be marked as an error. This match is a regular expression that when broken down consists of:

\%> – Match after the column with the number right after this
73 – The column number
v – Tells that it should work on virtual columns only
.\+ – Match one or more of any character

Example 2: Mark tabs not used for indentation in code

When coding, it is generally a good rule of thumb to use tabs only to indent code, and not anywhere else. However, for some it can be hard to obey this rule. Now, with the help of a simple match command, this can easily be prevented. The following command will mark any tabs that are not at the beginning of the line (indentation) as an error:

:match errorMsg /[^\t]\zs\t\+/

Now, you can check if you have forgotten the rule and used the Tab key inside the code. Broken down, the match consists of the following parts:

[^ – Begin a group of characters that should not be matched
\t – The tab character
] – End of the character group
\zs – A zero-width match that places the 'matching' at the beginning of the line, ignoring any whitespaces
\t\+ – One or more tabs in a row

This command says: Don't match all the tab characters; match only the ones that are not used at the beginning of the line (ignoring any whitespaces around it). If instead of using tabs you want to use the space character for indentation, you can change the command to:

:match errorMsg /[\t]/

This command just says: Match all the tab characters.

Example 3: Preventing errors caused by IP addresses

If you write a lot of IP addresses in your text, sometimes you tend to enter a wrong value in one (such as 123.123.123.256). To prevent this kind of an error, you can add the following match to your vimrc file:

match errorMsg /\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3\}[.][0-9]\{1,3\}[.][0-9]\{1,3\}\|[0-9]\{1,3\}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3\}[.][0-9]\{1,3\}\|[0-9]\{1,3\}[.][0-9]\{1,3\}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)[.][0-9]\{1,3\}\|[0-9]\{1,3\}[.][0-9]\{1,3\}[.][0-9]\{1,3\}[.]\(2[5][6-9]\|2[6-9][0-9]\|[3-9][0-9][0-9]\)/

Even though this seems a bit too complex for solving a small possible error, you have to remember that even if it helps you just once, it is worth adding. If you want to match valid IP addresses, you can use this, which is a much simpler command:

match todo /\(\(25[0-5]\|2[0-4][0-9]\|[01]\?[0-9][0-9]\?\)\.\)\{3\}\(25[0-5]\|2[0-4][0-9]\|[01]\?[0-9][0-9]\?\)/