
How-To Tutorials - Server-Side Web Development


Moodle: History Teaching using Chats, Books and Plugins

Packt
29 Jun 2011
4 min read
The Chat Module

Students naturally gravitate towards the Chat module in Moodle. It is one of the modules they use effortlessly while working on another task. I often find that they have logged in and are discussing work-related tasks in a way that enables them to move forward on a particular task.

Another use for the Chat module is to conduct a discussion outside the timetabled classroom lesson, when students know that you are available to help them with issues. This is especially relevant to students who embark on study leave in preparation for examinations, which can be a lonely and stressful period. Knowing that they can log in to a chat that has been planned in advance means that they can prepare issues that they wish to discuss about their workload, and find out how their peers are tackling the same issues. The teacher can ensure that the chat stays on message and provide useful input at the same time.

Setting up a chatroom

We want to set up a chat with students who are on holiday but have some examination preparation to do for a lesson that will take place straight after their return to school. Ideally, we would have informed the students before they started their holiday that this session would be available to anyone who wished to take part.

1. Log in to the Year 7 History course and turn on editing.
2. In the Introduction section, click the Add an activity dropdown.
3. Select Chat.
4. Enter an appropriate name for the chat.
5. Enter some relevant information in the Introduction text.
6. Select the date and time for the chat to begin.
7. Beside Repeat sessions, select No repeats – publish the specified time only.
8. Leave the other elements at their default settings.
9. Click Save changes.

The following screenshot shows the result of clicking Add an activity from the drop-down menu:

If we wanted to set up the chatroom so that the chat took place at the same time each day or each week, we could select the appropriate option from the Repeat sessions dropdown. The remaining options make it possible for students to go back and view sessions that they have taken part in.

Entering the chatroom

When a student or teacher logs in to the course for the appointed chat, they will see the chat symbol in the Introduction section. Clicking on the symbol lets them enter the chatroom via a simple chat window, or via a more accessible version where checking a box ensures that only new messages appear on the screen, as shown in the following screenshot:

As long as another student or teacher has entered the chatroom, a chat can begin when users type a message and await a response. The Chat module is a useful way for students to collaborate with each other, and with their teacher if they need to. It comes into its own when students are logging in to discuss how to make progress with their collaborative wiki story about a murder in the monastery, or when students preparing for an examination share tips and advice to help each other through the experience. Collaboration is the key to effective use of the Chat module, and teachers need not fear its potential for time-wasting if this point is emphasized in the activities students are working on.

Plugins

A brief visit to www.moodle.org and a search for 'plugins' reveals an extensive list of modules that are available for use with Moodle but stand outside the standard installation. If you have used a blogging tool such as WordPress, you will be familiar with the concept of plugins.
Over the last few years, developers have built up a library of plugins that can be used to enhance your Moodle experience. Every teacher has different ways of doing things, and it is well worth exploring the plugins database and related forums to find out what teachers are using and how they are using it. There is, for example, a plugin for writing individual learning plans for students, and another called Quickmail, which enables you to send an email to everyone on your course even more quickly than the conventional way.

Installing plugins

Plugins need to be installed, and installing them requires administrator rights. The Book module, for example, requires a zip file to be downloaded from the plugins database onto your computer; the files then need to be extracted to a folder inside the mod folder of your Moodle software directory. Once it is in the correct folder, the administrator needs to run the installation. Installation has been successful if you can log in to the course and see the Book module as an option in the Add a resource dropdown. A rough sketch of those file-system steps is shown below.
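As an illustration only, the download-and-extract steps described above might look like this on a Linux server. The archive name, download URL, and Moodle path here are assumptions and will differ for your installation:

```
# Hypothetical paths - adjust to your own Moodle installation
cd /tmp
wget https://moodle.org/path/to/book.zip    # download the plugin archive (URL is illustrative)
unzip book.zip                              # extracts a 'book' folder
mv book /var/www/moodle/mod/                # place it inside Moodle's mod directory
# Then log in as administrator and visit the site's admin/notifications
# page, which typically triggers the plugin installation.
```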


Deploy Toshi Bitcoin Node with Docker on AWS

Alex Leishman
05 Aug 2015
8 min read
Toshi is an implementation of the Bitcoin protocol, written in Ruby and built by Coinbase in response to their fast growth and their need to build Bitcoin infrastructure at scale. This post will cover:

- How to deploy Toshi to an Amazon AWS instance with Redis and PostgreSQL using Docker.
- How to query the data to gain insights into the blockchain.

To get the most out of this post you will need some basic familiarity with Linux, SQL, and AWS.

Most Bitcoin nodes run "Bitcoin Core", which is written in C++ and serves as the de facto standard implementation of the Bitcoin protocol. Its advantages are that it is fast for light-to-medium use and efficiently stores the transaction history of the network (the blockchain) in LevelDB, a key-value datastore developed at Google. It has wallet management features and an easy-to-use JSON RPC interface for communicating with other applications.

However, Bitcoin Core has some shortcomings that make it difficult to use for wallet/address management in at-scale applications. Its database, although efficient, makes it impossible or very difficult to perform certain queries on the blockchain. For example, if you wanted to get the balance of an arbitrary bitcoin address, you would have to write a script that parses the blockchain separately to find the answer. Additionally, Bitcoin Core starts to slow down significantly when it has to manage and monitor large numbers of addresses (more than roughly 10^7). For a web app with hundreds of thousands of users, each regularly generating new addresses, Bitcoin Core is not ideal.

Toshi attempts to address the flexibility and scalability issues facing Bitcoin Core by parsing and storing the entire blockchain in an easily queried PostgreSQL database. A list of the tables in Toshi's DB can be found in schema.txt. We will see the direct benefit of this structure when we start querying our data to gain insights from the blockchain. Since Toshi is written in Ruby, it has the added advantage of being developer-friendly and easy to customize. The main downside of Toshi is that it needs roughly ten times more storage than Bitcoin Core, as storing and indexing the blockchain in a well-indexed relational DB requires significantly more disk space.

First we will create an instance on Amazon AWS. You will need at least 300 GB of storage for the Postgres database. Be sure to auto-assign a public IP and allow incoming TCP connections on port 5000, as this is how we will access the Toshi web interface. Once you get your instance up and running, SSH into it using the commands given by Amazon.

First we will set up a user for Toshi:

```
ubuntu@ip-172-31-62-77:~$ sudo adduser toshi
Adding user `toshi' ...
Adding new group `toshi' (1001) ...
Adding new user `toshi' (1001) with group `toshi' ...
Creating home directory `/home/toshi' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for toshi
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
```

Then we will add the new user to the sudoers group and switch to that user:

```
ubuntu@ip-172-31-62-77:~$ sudo adduser toshi sudo
Adding user `toshi' to group `sudo' ...
Adding user toshi to group sudo
Done.
ubuntu@ip-172-31-62-77:~$ su - toshi
toshi@ip-172-31-62-77:~$
```

Next, we will install Docker and all of its dependencies through an automated script available on the Docker website. This will provision our instance with the necessary software packages.
```
toshi@ip-172-31-62-77:~$ curl -sSL https://get.docker.com/ubuntu/ | sudo sh
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir .....
```

Then we will clone the Toshi repo from GitHub and move into the new directory:

```
toshi@ip-172-31-62-77:~$ git clone https://github.com/coinbase/toshi.git
toshi@ip-172-31-62-77:~$ cd toshi/
```

Next, build the coinbase/toshi Docker image from the Dockerfile located in the /toshi directory. Don't forget the dot at the end of the command!

```
toshi@ip-172-31-62-77:~/toshi$ sudo docker build -t=coinbase/toshi .
Sending build context to Docker daemon 13.03 MB
Sending build context to Docker daemon
...
Removing intermediate container c15dd6c961c2
Step 3 : ADD Gemfile /toshi/Gemfile
INFO[0120] Error getting container dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465 from driver devicemapper: Error mounting '/dev/mapper/docker-202:1-524950-dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465' on '/var/lib/docker/devicemapper/mnt/dbc7c41625c49d99646e32c430b00f5d15ef867b26c7ca68ebda6aedebf3f465': no such file or directory
```

Note: you might see an 'Error getting container' message when this runs. If so, don't worry about it at this point.

Next, we will build and run our Redis and Postgres containers:

```
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_db -d postgres
toshi@ip-172-31-62-77:~/toshi$ sudo docker run --name toshi_redis -d redis
```

This will build and run Docker containers named toshi_db and toshi_redis, based on the standard postgres and redis images pulled from Docker Hub. The '-d' flag indicates that the container will run in the background (daemonized). If you see an 'Error response from daemon: Cannot start container' error while running either of these commands, simply run 'sudo docker start toshi_redis' (or 'sudo docker start toshi_db') again.

To ensure that our containers are running properly, run:

```
toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID  IMAGE            COMMAND                CREATED        STATUS        PORTS     NAMES
4de43ccc8e80  redis:latest     "/entrypoint.sh redi   7 minutes ago  Up 3 minutes  6379/tcp  toshi_redis
6de0418d4e91  postgres:latest  "/docker-entrypoint.   8 minutes ago  Up 2 minutes  5432/tcp  toshi_db
```

You should see both containers running, along with their port numbers. When we run our Toshi container, we need to tell it where to find the Postgres and Redis containers, so we must find the toshi_db and toshi_redis IP addresses. Remember, we have not run a Toshi container yet; we have only built the image from the Dockerfile. You can think of a container as a running version of an image. To learn more about Docker, see the docs.

```
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_db | grep IPAddress
    "IPAddress": "172.17.0.3",
toshi@ip-172-31-62-77:~$ sudo docker inspect toshi_redis | grep IPAddress
    "IPAddress": "172.17.0.2",
```

Now we have everything we need to get our Toshi container up and running. To do this, run:

```
sudo docker run --name toshi_main -d -p 5000:5000 -e REDIS_URL=redis://172.17.0.2:6379 -e DATABASE_URL=postgres://postgres:@172.17.0.3:5432 -e TOSHI_ENV=production coinbase/toshi sh -c 'bundle exec rake db:create db:migrate; foreman start'
```

Be sure to replace the IP addresses in the above command with your own. This creates a container named 'toshi_main', runs it as a daemon (-d), and sets three environment variables in the container (-e) which are required for Toshi to run. It also maps port 5000 inside the container to port 5000 of our host (-p).
Lastly, it runs a shell script in the container (sh -c) which creates and migrates the database, then starts the Toshi web server. To see that it has started properly, run:

```
toshi@ip-172-31-62-77:~$ sudo docker ps
CONTAINER ID  IMAGE                  COMMAND                CREATED         STATUS         PORTS                   NAMES
017c14cbf432  coinbase/toshi:latest  "sh -c 'bundle exec    6 seconds ago   Up 5 seconds   0.0.0.0:5000->5000/tcp  toshi_main
4de43ccc8e80  redis:latest           "/entrypoint.sh redi   43 minutes ago  Up 38 minutes  6379/tcp                toshi_redis
6de0418d4e91  postgres:latest        "/docker-entrypoint.   43 minutes ago  Up 38 minutes  5432/tcp                toshi_db
```

If you have set your AWS security settings properly, you should be able to see the syncing progress of Toshi in your browser. Find your instance's public IP address from the AWS console and then point your browser there using port 5000, for example: http://54.174.195.243:5000/. You can also see the logs of our Toshi container by running:

```
toshi@ip-172-31-62-77:~$ sudo docker logs -f toshi_main
```

That's it! We're all up and running. Be prepared to wait a long time for the blockchain to finish syncing. This could take more than a week or two, but you can start playing around with the data right away through the GUI to get a sense of the power you now have (a hedged example query appears after the author note below).

About the Author

Alex Leishman is a software engineer who is passionate about Bitcoin and other digital currencies. He works at MaiCoin.com where he is helping to build the future of money.
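Addendum: the original post stops before showing an actual query, so here is a hedged illustration of the kind of SQL you could run against Toshi's Postgres database once some blocks have synced. The table and column names (blocks, height, time) are assumptions based on the schema.txt table list mentioned above and may differ in your Toshi version:

```
-- Illustrative only: check schema.txt in your Toshi install for real names.
-- Ten highest blocks the node has synced so far:
SELECT height, time
FROM blocks
ORDER BY height DESC
LIMIT 10;
```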


JSON with JSON.Net

Packt
25 Jun 2015
16 min read
In this article by Ray Rischpater, author of the book JavaScript JSON Cookbook, we show you how you can use strong typing in your applications with JSON using C#, Java, and TypeScript. You'll find the following recipes:

- How to deserialize an object using Json.NET
- How to handle date and time objects using Json.NET
- How to deserialize an object using gson for Java
- How to use TypeScript with Node.js
- How to annotate simple types using TypeScript
- How to declare interfaces using TypeScript
- How to declare classes with interfaces using TypeScript
- Using json2ts to generate TypeScript interfaces from your JSON

While some say that strong types are for weak minds, the truth is that strong typing in programming languages can help you avoid whole classes of errors in which you mistakenly assume that an object of one type is really of a different type. Languages such as C# and Java provide strong types for exactly this reason. Fortunately, the JSON serializers for C# and Java support strong typing, which is especially handy once you've figured out your object representation and simply want to map JSON to instances of classes you've already defined. We use Json.NET for C# and gson for Java to convert from JSON to instances of classes you define in your application.

Finally, we take a look at TypeScript, an extension of JavaScript that provides compile-time checking of types, compiling to plain JavaScript for use with Node.js and browsers. We'll look at how to install the TypeScript compiler for Node.js, how to use TypeScript to annotate types and interfaces, and how to use a web page by Timmy Kokke to automatically generate TypeScript interfaces from JSON objects.

How to deserialize an object using Json.NET

In this recipe, we show you how to use Newtonsoft's Json.NET to deserialize JSON to an object that's an instance of a class. We'll use Json.NET because although this works with the existing .NET JSON serializer, there are other things that I want you to know about Json.NET, which we'll discuss in the next two recipes.

Getting ready

To begin, you need to be sure you have a reference to Json.NET in your project. The easiest way to do this is to use NuGet; launch NuGet, search for Json.NET, and click on Install, as shown in the following screenshot. You'll also need a reference to the Newtonsoft.Json namespace in any file that needs those classes, with a using directive at the top of your file:

```
using Newtonsoft.Json;
```

How to do it…

Here's an example that provides the implementation of a simple class, converts a JSON string to an instance of that class, and then converts the instance back into JSON:

```
using System;
using Newtonsoft.Json;

namespace JSONExample
{
    public class Record
    {
        public string call;
        public double lat;
        public double lng;
    }

    class Program
    {
        static void Main(string[] args)
        {
            String json = @"{ 'call': 'kf6gpe-9',
                'lat': 21.9749, 'lng': 159.3686 }";

            var result = JsonConvert.DeserializeObject<Record>(
                json, new JsonSerializerSettings
                {
                    MissingMemberHandling = MissingMemberHandling.Error
                });
            Console.Write(JsonConvert.SerializeObject(result));

            return;
        }
    }
}
```

How it works…

In order to deserialize the JSON in a type-safe manner, we need to have a class that has the same fields as our JSON. The Record class, defined in the first few lines, does this, defining fields for call, lat, and lng.
The Newtonsoft.Json namespace provides the JsonConvert class, with static methods SerializeObject and DeserializeObject. DeserializeObject is a generic method, taking the type of the object that should be returned as a type argument, and as arguments the JSON to parse and an optional argument indicating options for the JSON parsing. We pass the MissingMemberHandling property as a setting, indicating with the value of the enumeration Error that in the event that a field is missing, the parser should throw an exception. After parsing the class, we convert it again to JSON and write the resulting JSON to the console.

There's more…

If you skip passing the MissingMemberHandling option or pass Ignore (the default), you can have mismatches between field names in your JSON and your class, which probably isn't what you want for type-safe conversion. You can also pass the NullValueHandling field with a value of Include or Ignore. If Include, fields with null values are included; if Ignore, fields with null values are ignored.

See also

The full documentation for Json.NET is at http://www.newtonsoft.com/json/help/html/Introduction.htm. Type-safe deserialization is also possible with JSON support using the .NET serializer; the syntax is similar. For an example, see the documentation for the JavaScriptSerializer class at https://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer(v=vs.110).aspx.

How to handle date and time objects using Json.NET

Dates in JSON are problematic for people because JavaScript's dates are in milliseconds from the epoch, which are generally unreadable to people. Different JSON parsers handle this differently; Json.NET has a nice IsoDateTimeConverter that formats the date and time in ISO format, making it human-readable for debugging or parsing on platforms other than JavaScript. You can extend this method to convert any kind of formatted data in JSON attributes, too, by creating new converter objects and using the converter object to convert from one value type to another.

How to do it…

Simply include a new IsoDateTimeConverter object when you call JsonConvert.SerializeObject, like this:

```
string json = JsonConvert.SerializeObject(p, new IsoDateTimeConverter());
```

How it works…

This causes the serializer to invoke the IsoDateTimeConverter instance with any instance of date and time objects, returning ISO strings like this in your JSON:

```
2015-07-29T08:00:00
```

There's more…

Note that this can be parsed by Json.NET, but not by JavaScript; in JavaScript, you'll want to use a function like this:

```
function isoDateReviver(key, value) {
  if (typeof value === 'string') {
    var a = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2}(?:\.\d*)?)(?:([+-])(\d{2}):(\d{2}))?Z?$/
      .exec(value);
    if (a) {
      var utcMilliseconds = Date.UTC(+a[1],
        +a[2] - 1,
        +a[3],
        +a[4],
        +a[5],
        +a[6]);
      return new Date(utcMilliseconds);
    }
  }
  return value;
}
```

The rather hairy regular expression on the third line matches dates in the ISO format, extracting each of the fields. If the regular expression finds a match, it extracts each of the date fields, which are then used by the Date class's UTC method to create a new date. Note that the entire regular expression (everything between the / characters) should be on one line with no whitespace. (A short usage sketch follows this recipe's See also.)

See also

For more information on how Json.NET handles dates and times, see the documentation and example at http://www.newtonsoft.com/json/help/html/SerializeDateFormatHandling.htm.
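Before moving on to gson, here is a brief usage sketch tying the reviver above to JSON.parse; this is an addition for illustration, not part of the original recipe. The ISO string stands in for the Json.NET output shown above:

```
// Round trip: pass the reviver as JSON.parse's second argument.
var json = '{"when":"2015-07-29T08:00:00"}';
var obj = JSON.parse(json, isoDateReviver);
console.log(obj.when instanceof Date); // true
console.log(obj.when.toISOString());
```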
How to deserialize an object using gson for Java

Like Json.NET, gson provides a way to specify the destination class to which you're deserializing a JSON object.

Getting ready

You'll need to include the gson JAR file in your application, just as you would for any other external API.

How to do it…

You use the same fromJson method as you use for type-unsafe JSON parsing with gson, except that you pass the class object to gson as the second argument, like this:

```
// Assuming we have a class Record that looks like this:
/*
class Record {
  private String call;
  private float lat;
  private float lng;
  // public API would access these fields
}
*/

Gson gson = new com.google.gson.Gson();
String json = "{ \"call\": \"kf6gpe-9\", \"lat\": 21.9749, \"lng\": 159.3686 }";
Record result = gson.fromJson(json, Record.class);
```

How it works…

The fromJson method always takes a Java class. In the example in this recipe, we convert directly to a plain old Java object that our application can use without needing to use the dereferencing and type conversion interface of JsonElement that gson provides.

There's more…

The gson library can also deal with nested types and arrays. You can also hide fields from being serialized or deserialized by declaring them transient, which makes sense because transient fields aren't serialized.

See also

The documentation for gson and its support for deserializing instances of classes is at https://sites.google.com/site/gson/gson-user-guide#TOC-Object-Examples.

How to use TypeScript with Node.js

Using TypeScript with Visual Studio is easy; it's just part of the installation of Visual Studio for any version after Visual Studio 2013 Update 2. Getting the TypeScript compiler for Node.js is almost as easy: it's an npm install away.

How to do it…

On a command line with npm in your path, run the following command:

```
npm install -g typescript
```

The npm option -g tells npm to install the TypeScript compiler globally, so it's available to every Node.js application you write. Once you run it, npm downloads and installs the TypeScript compiler binary for your platform.

There's more…

Once you run this command to install the compiler, you'll have the TypeScript compiler tsc available on the command line. Compiling a file with tsc is as easy as writing the source code, saving it in a file whose name ends in the .ts extension, and running tsc on it. For example, given the following TypeScript saved in the file hello.ts:

```
function greeter(person: string) {
  return "Hello, " + person;
}

var user: string = "Ray";

console.log(greeter(user));
```

Running tsc hello.ts at the command line creates the following JavaScript:

```
function greeter(person) {
  return "Hello, " + person;
}

var user = "Ray";

console.log(greeter(user));
```

Try it! As we'll see in the next section, the function declaration for greeter contains a single TypeScript annotation; it declares the argument person to be a string. Add the following line to the bottom of hello.ts:

```
console.log(greeter(2));
```

Now, run the tsc hello.ts command again; you'll get an error like this one:

```
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2082: Supplied parameters do not match any signature of call target:
        Could not apply type 'string' to argument 1 which is of type 'number'.
C:\Users\rarischp\Documents\node.js\typescript\hello.ts(8,13): error TS2087: Could not select overload for 'call' expression.
```

This error indicates that I'm attempting to call greeter with a value of the wrong type, passing a number where greeter expects a string.
In the next recipe, we'll look at the kinds of type annotations TypeScript supports for simple types.

See also

The TypeScript home page, with tutorials and reference documentation, is at http://www.typescriptlang.org/.

How to annotate simple types using TypeScript

Type annotations in TypeScript are simple decorators appended to the variable or function after a colon. There's support for the same primitive types as in JavaScript, and for declaring interfaces and classes, which we will discuss next.

How to do it…

Here's a simple example of some variable declarations and two function declarations:

```
function greeter(person: string): string {
  return "Hello, " + person;
}

function circumference(radius: number): number {
  var pi: number = 3.141592654;
  return 2 * pi * radius;
}

var user: string = "Ray";

console.log(greeter(user));
console.log("You need " + circumference(2) + " meters of fence for your dog.");
```

This example shows how to annotate functions and variables.

How it works…

Variables, either standalone or as arguments to a function, are decorated using a colon and then the type. For example, the first function, greeter, takes a single argument, person, which must be a string. The second function, circumference, takes a radius, which must be a number, and declares a single variable in its scope, pi, which must be a number and has the value 3.141592654. You declare functions in the normal way as in JavaScript, and then add the type annotation after the function name, again using a colon and the type. So, greeter returns a string, and circumference returns a number.

There's more…

TypeScript defines the following fundamental type decorators, which map to their underlying JavaScript types:

- array: This is a composite type. For example, you can write a list of strings as follows:

```
var list: string[] = ["one", "two", "three"];
```

- boolean: This type decorator can contain the values true and false.
- number: This type decorator, like JavaScript itself, can hold any floating-point number.
- string: This type decorator is a character string.
- enum: An enumeration, written with the enum keyword, like this:

```
enum Color { Red = 1, Green, Blue };
var c: Color = Color.Blue;
```

- any: This type indicates that the variable may be of any type.
- void: This type indicates that the value has no type. You'll use void to indicate a function that returns nothing.

See also

For a list of the TypeScript types, see the TypeScript handbook at http://www.typescriptlang.org/Handbook.

How to declare interfaces using TypeScript

An interface defines how something behaves, without defining the implementation. In TypeScript, an interface names a complex type by describing the fields it has. This is known as structural subtyping.

How to do it…

Declaring an interface is a little like declaring a structure or class; you define the fields in the interface, each with its own type, like this:

```
interface Record {
  call: string;
  lat: number;
  lng: number;
}

function printLocation(r: Record) {
  console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = { call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686 };

printLocation(myObj);
```

How it works…

The interface keyword in TypeScript defines an interface; as I already noted, an interface consists of the fields it declares, with their types. In this listing, I defined a plain JavaScript object, myObj, and then called the function printLocation that I previously defined, which takes a Record.
When calling printLocation with myObj, the TypeScript compiler checks the fields and the type of each field, and only permits a call to printLocation if the object matches the interface.

There's more…

Beware! TypeScript can only provide compile-time checking. What do you think the following code does?

```
interface Record {
  call: string;
  lat: number;
  lng: number;
}

function printLocation(r: Record) {
  console.log(r.call + ': ' + r.lat + ', ' + r.lng);
}

var myObj = { call: 'kf6gpe-7', lat: 21.9749, lng: 159.3686 };
printLocation(myObj);

var json = '{"call":"kf6gpe-7","lat":21.9749}';
var myOtherObj = JSON.parse(json);
printLocation(myOtherObj);
```

First, this compiles with tsc just fine. When you run it with node, you'll see the following:

```
kf6gpe-7: 21.9749, 159.3686
kf6gpe-7: 21.9749, undefined
```

What happened? The TypeScript compiler does not add run-time type checking to your code, so you can't impose an interface on a run-time created object that's not a literal. In this example, because the lng field is missing from the JSON, the function can't print it, and prints the value undefined instead. (A hedged sketch of how you could add such a run-time check yourself appears at the end of this article.)

This doesn't mean that you shouldn't use TypeScript with JSON, however. Type annotations serve a purpose for all readers of the code, be they compilers or people. You can use type annotations to indicate your intent as a developer, and readers of the code can better understand the design and limitations of the code you write.

See also

For more information about interfaces, see the TypeScript documentation at http://www.typescriptlang.org/Handbook#interfaces.

How to declare classes with interfaces using TypeScript

Interfaces let you specify behavior without specifying implementation; classes let you encapsulate implementation details behind an interface. TypeScript classes can encapsulate fields or methods, just as classes in other languages do.

How to do it…

Here's an example of our Record structure, this time as a class with an interface:

```
class RecordInterface {
  call: string;
  lat: number;
  lng: number;

  constructor(c: string, la: number, lo: number) {}
  printLocation() {}
}

class Record implements RecordInterface {
  call: string;
  lat: number;
  lng: number;

  constructor(c: string, la: number, lo: number) {
    this.call = c;
    this.lat = la;
    this.lng = lo;
  }

  printLocation() {
    console.log(this.call + ': ' + this.lat + ', ' + this.lng);
  }
}

var myObj: Record = new Record('kf6gpe-7', 21.9749, 159.3686);

myObj.printLocation();
```

How it works…

The interface keyword, again, defines an interface just as the previous section shows. The class keyword, which you haven't seen before, implements a class; the optional implements keyword indicates that this class implements the interface RecordInterface.

Note that the class implementing the interface must have all of the same fields and methods that the interface prescribes; otherwise, it doesn't meet the requirements of the interface. As a result, our Record class includes fields for call, lat, and lng, with the same types as in the interface, as well as the methods constructor and printLocation.

The constructor method is a special method called when you create a new instance of the class using new. Note that with classes, unlike regular objects, the correct way to create them is by using a constructor, rather than just building them up as a collection of fields and values. We do that on the second-to-last line of the listing, passing the constructor arguments as function arguments to the class constructor.
See also

There's a lot more you can do with classes, including defining inheritance and creating public and private fields and methods. For more information about classes in TypeScript, see the documentation at http://www.typescriptlang.org/Handbook#classes.

Using json2ts to generate TypeScript interfaces from your JSON

This last recipe is more of a tip than a recipe; if you've got some JSON you developed using another programming language or by hand, you can easily create a TypeScript interface for objects to contain the JSON by using Timmy Kokke's json2ts website.

How to do it…

Simply go to http://json2ts.com, paste your JSON in the box that appears, and click on the generate TypeScript button. You'll be rewarded with a second text box that appears and shows you the definition of the TypeScript interface, which you can save as its own file and include in your TypeScript applications.

How it works…

The following figure shows a simple example. You can save this TypeScript as its own file, a definition file with the suffix .d.ts, and then include the module with your TypeScript using the import keyword, like this:

```
import module = require('module');
```

Summary

In this article we looked at how you can adapt the type-free nature of JSON to the type safety provided by languages such as C#, Java, and TypeScript to reduce programming errors in your application.
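Revisiting the run-time checking caveat from the interfaces recipe: since TypeScript does not emit run-time checks, you can write a small validation function yourself before trusting parsed JSON. This sketch is an addition for illustration, not code from the original article; it reuses the Record interface and printLocation function defined earlier:

```
// Hand-rolled run-time check; the TypeScript compiler will not generate this.
function isRecord(obj: any): boolean {
  return obj !== null &&
    typeof obj.call === 'string' &&
    typeof obj.lat === 'number' &&
    typeof obj.lng === 'number';
}

var parsed = JSON.parse('{"call":"kf6gpe-7","lat":21.9749}');
if (isRecord(parsed)) {
  printLocation(parsed);
} else {
  console.log('JSON did not match the Record interface'); // taken for the sample above
}
```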


Features of RaphaelJS

Packt
12 Sep 2013
16 min read
Creating a Raphael element

Creating a Raphael element is very easy. To make it better, there are predefined methods to create basic geometrical shapes.

Basic shapes

There are three basic shapes in RaphaelJS, namely circle, ellipse, and rectangle.

Rectangle

We can create a rectangle using the rect() method. This method takes four required parameters and a fifth optional parameter, border-radius. The border-radius parameter will make the rectangle rounded (rounded corners) by the number of pixels specified. The syntax for this method is:

```
paper.rect(X, Y, Width, Height, border-radius(optional));
```

A normal rectangle can be created using the following code snippet:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating a rectangle with the rect() method. The four required parameters are X, Y, Width & Height
var rect = paper.rect(35, 25, 170, 100).attr({
  "fill": "#17A9C6",    // filling with background color
  "stroke": "#2A6570",  // border color of the rectangle
  "stroke-width": 2     // the width of the border
});
```

The output for the preceding code snippet is shown in the following screenshot: Plain rectangle

Rounded rectangle

The following code will create a basic rectangle with rounded corners:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The fifth parameter will make the rectangle rounded by the number of pixels specified
var rect = paper.rect(35, 25, 170, 100, 20).attr({
  "fill": "#17A9C6",    // background color of the rectangle
  "stroke": "#2A6570",  // border color of the rectangle
  "stroke-width": 2     // width of the border
});
// in the preceding code, 20 (the fifth argument) is the border-radius of the rectangle
```

The output for the preceding code snippet is a rectangle with rounded corners, as shown in the following screenshot: Rectangle with rounded corners

We can create other basic shapes in the same way. Let's create an ellipse with our magic wand.

Ellipse

An ellipse is created using the ellipse() method, and it takes four required parameters, namely x, y, horizontal radius, and vertical radius. The horizontal radius will be the width of the ellipse divided by two, and the vertical radius will be the height of the ellipse divided by two. The syntax for creating an ellipse is:

```
paper.ellipse(X, Y, rX, rY); // rX is the horizontal radius & rY is the vertical radius of the ellipse
```

Let's consider the following example for creating an ellipse:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// The ellipse() method takes four required parameters: X, Y, horizontal radius & vertical radius
var ellipse = paper.ellipse(195, 125, 170, 100).attr({
  "fill": "#17A9C6",    // background color of the ellipse
  "stroke": "#2A6570",  // ellipse's border color
  "stroke-width": 2     // border width
});
```

The preceding code will create an ellipse of width 170 x 2 and height 100 x 2. An ellipse created using the ellipse() method is shown in the following screenshot: An Ellipse

Complex shapes

It's pretty easy to create basic shapes, but what about complex shapes such as stars, octagons, or any other shape that isn't a circle, rectangle, or ellipse? It's time for the next step of Raphael wizardry. Complex shapes are created using the path() method, which has only one parameter, called pathString. Though the path string may look like a long genetic sequence with alphanumeric characters, it's actually very simple to read, understand, and draw with.
Before we get into path drawing, it's essential to know how a path is interpreted and the simple logic behind those complex shapes. Imagine that you are drawing on a piece of paper with a pencil. To draw something, you will place the pencil at a point on the paper and begin to draw a line or a curve, and then move the pencil to another point on the paper and start drawing a line or curve again. After several such cycles, you will have a masterpiece; at least, you will call it a masterpiece.

Raphael uses a similar method to draw, and it does so with a path string. A typical path string may look like this: M0,0L26,0L13,18L0,0. Let's zoom into this path string a bit. The first letter is M, followed by 0,0. That's right, genius, you've guessed it correctly: it says move to 0,0. The next letter, L, is line to 26,0. RaphaelJS will move to 0,0 and from there draw a line to 26,0. This is how the path string is understood by RaphaelJS, and paths are drawn using these simple notations.

Here is a comprehensive list of commands and their respective meanings:

- M: move to (x, y)
- Z: close path (none)
- L: line to (x, y)
- H: horizontal line to (x)
- V: vertical line to (y)
- C: curve to (x1, y1, x2, y2, x, y)
- S: smooth curve to (x2, y2, x, y)
- Q: quadratic Bézier curve to (x1, y1, x, y)
- T: smooth quadratic Bézier curve to (x, y)
- A: elliptical arc (rx, ry, x-axis-rotation, large-arc-flag, sweep-flag, x, y)
- R: Catmull-Rom curve to (x1, y1 (x y)...)

The uppercase commands are absolute (M20,20); they are calculated from the 0,0 position of the drawing area (the paper). The lowercase commands are relative (m20,20); they are calculated from the last point where the pen left off. There are so many commands that it might feel like too much to take in; don't worry, there is no need to remember every command and its format. Because we'll be using vector graphics editors to extract paths, it's enough to understand the meaning of each command, so that when someone asks you "hey genius, what does this mean?", you aren't standing there clueless, pretending not to have heard it.

The syntax for the path() method is as follows:

```
paper.path("pathString");
```

Let's consider the following example:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 350, 200);
// Creating a shape using the path() method and a path string
var tri = paper.path("M0,0L26,0L13,18L0,0").attr({
  "fill": "#17A9C6",    // filling the background color
  "stroke": "#2A6570",  // the color of the border
  "stroke-width": 2     // the size of the border
});
```

All these commands ("M0,0L26,0L13,18L0,0") use uppercase letters; they are therefore absolute values. The output for the preceding example is shown in the following screenshot: A triangle shape drawn using the path string

Extracting and using paths from an editor

Well, a triangle may be an easy shape to put into a path string. How about a complex shape such as a star? It's not that easy to guess and manually find the points. It's also practically impossible to hand-write the path for a fairly more complex shape like a simple flower or a 2D logo. In this section, we'll see a simple but effective method of drawing complex shapes with minimal fuss and sharp accuracy.

Vector graphics editors

Vector graphics editors are meant for creating complex shapes with ease, and they have some powerful tools at their disposal to help us draw. For this example, we'll create a star shape using an open source editor called Inkscape, then extract that path and use Raphael to render the shape!
It is as simple as it sounds, and it can be done in four simple steps.

Step 1 – Creating the shape in the vector editor

Let's create some star shapes in Inkscape using the built-in shapes tool. Star shapes created using the built-in shapes tool.

Step 2 – Saving the shape as SVG

The paths used by SVG and RaphaelJS are similar. The trick is to use the paths generated by the vector graphics editor in RaphaelJS. For this purpose, the shape must be saved as an SVG file.

Step 3 – Copying the SVG path string

The next step is to copy the path from the SVG and paste it into Raphael's path() method. SVG is a markup language, and therefore it's nested in tags. The SVG path can be found between the <path> and </path> tags. After locating the path tag, look for the d attribute. This will contain a long path sequence. You've now hit the bullseye.

Step 4 – Using the copied path as a Raphael path string

After copying the path string from the SVG, paste it into Raphael's path() method:

```
var newpath = paper.path("copied path string from SVG").attr({
  "fill": "#5DDEF4",
  "stroke": "#2A6570",
  "stroke-width": 2
});
```

That's it! We have created a complex shape in RaphaelJS with absolute simplicity. Using this technique, we can only extract the path, not the styles, so the background color, shadow, or any other style in the SVG won't apply. We need to add our own styles to the path objects using the attr() method. A screenshot depicting the complex shapes created in RaphaelJS using the path string copied from an SVG file is shown here: Complex shapes created in RaphaelJS using path strings

Creating text

Text can be created using the text() method. Raphael gives us a way to add a battery of styles to the text object, from changing colors to animating physical properties like position and size. The text() method takes three required parameters, namely x, y, and the text string. The syntax for the text() method is as follows:

```
paper.text(X, Y, "Raphael JS Text"); // the text method with X,Y coordinates and the text string
```

Let's consider the following example:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
// creating text
var text = paper.text(40, 55, "Raphael Text").attr({
  "fill": "#17A9C6",  // font color
  "font-size": 75,    // font size in pixels
  // text-anchor indicates the starting position of the text relative to the X, Y position.
  // It can be "start", "middle" or "end"; the default is "middle"
  "text-anchor": "start",
  "font-family": "century gothic"  // font family of the text
});
```

I am pretty sure that the text-anchor property is a bit heavy to munch. Well, there is a saying that a picture is worth a thousand words. The following diagram clearly explains the text-anchor property and its usage: A brief explanation of the text-anchor property. A screenshot of the text rendered using the text() method follows: Rendering text using the text() method

Manipulating the style of the element

The attr() method not only adds styles to an element; it also modifies an existing style of an element. The following example explains the attr() method:

```
rect.attr('fill', '#ddd'); // This will update the background color of the rectangle to gray
```

Transforming an element

RaphaelJS not only creates elements; it also allows manipulating or transforming any element and its properties dynamically.

Manipulating a shape

By the end of this section, you will know how to transform a shape.
There might be many scenarios in which you need to modify a shape dynamically. For example, when the user mouses over a circle, you might want to scale up that circle just to give visual feedback to the user (a sketch of exactly this appears at the end of this article). Shapes can be manipulated in RaphaelJS using the transform() method.

Transformation is done through the transform() method, which is similar to the path() method in that we pass a string to the method; transform() works the same way, but instead of a path string it takes a transformation string. There is only a moderate difference between a transformation string and a path string. There are four commands in the transformation string:

- T: translate
- S: scale
- R: rotate in degrees
- M: matrix

The fourth command, M, is of little importance here, so let's keep it out of the way to avoid confusion. The transformation string might look similar to a path string, but in reality they are different, sharing little in common. The M in a path string means "move to", whereas the same letter in a transformation string means "matrix". The path string is not to be confused with the transformation string.

As with the path string, uppercase letters are for absolute transformations and lowercase letters for relative transformations. If the transformation string reads r90T100,0, the element will rotate 90 degrees and then move 100px along the x axis (to the right). If the same string reads r90t100,0, the element will rotate 90 degrees, and since the translation is relative, it will actually move vertically down 100px, as the rotation has tilted its axis.

I am sure the previous point will confuse most, so let me break it up. Imagine a rectangle with a head, and this head is at the right side of the rectangle. For the time being, let's forget about absolute and relative transformations; our objective is to:

1. Rotate the rectangle by 90 degrees.
2. Move the rectangle 100px on the x axis (that is, 100px to the right).

It's critical to understand that an element's original values don't change when we translate it, meaning its x and y values will remain the same no matter how we rotate or move the element. Now, our first requirement is to rotate the rectangle by 90 degrees. The code for that would be rect.transform("r90"), where r stands for rotation. Fantastic; the rectangle is rotated by 90 degrees.

Now pay attention to the next important step. We also need the rectangle to move 100px along the x axis, so we update our previous code to rect.transform("r90t100,0"), where t stands for translation. What happens next is interesting: the translation is done through a lowercase t, which means it's relative. One thing about relative translations is that they take into account any previous transformations applied to the element, whereas absolute translations simply reset any previous transformations before applying their own.

Remember the head of the rectangle on the right side? Well, the rectangle's x axis falls on the right side. So when we say "move 100px on the x axis", it is supposed to move 100px towards its right side, that is, in the direction its head is pointing. Since we have rotated the rectangle by 90 degrees, its head is no longer on the right side but is facing the bottom. So when we apply the relative translation, the rectangle will still move 100px along its own x axis, but that axis is now pointing down because of the rotation. That's why the rectangle moves 100px down when you expect it to move to the right.
What happens when we apply absolute translation is entirely different. When we update our code for absolute translation to rect.transform("r90T100,0"), the axis of the rectangle is not taken into consideration; the axis of the paper is used instead, as absolute transformations don't take previous transformations into account, and simply reset them before applying their own. Therefore, the rectangle will move 100px to the right after rotating 90 degrees, as intended.

Absolute transformations ignore all previous transformations on an element; relative transformations don't. Getting a grip on this simple logic will save you a lot of frustration in the future, while developing as well as while debugging.

The following is a screenshot depicting relative translation: Using relative translation

The following is a screenshot depicting absolute translation: Using absolute translation

Notice the gap on top of the rotated rectangle: it has moved 100px down with relative translation, and there is no such gap on top of the rectangle with absolute translation.

By default, the transform method will append to any transformation already applied to the element. To reset all transformations, use element.transform(""). Adding an empty string to the transform method will reset all previous transformations on that element.

It's also important to note that the element's original x,y position does not change when it is translated. The element merely assumes a temporary position; its original position remains unchanged. Therefore, if we ask for the element's position programmatically after translation, we will get the original x,y, not the translated one, just so we don't jump from our seats and call RaphaelJS dull!

The following is an example of scaling and rotating a triangle:

```
// creating a triangle using the path string
var tri = paper.path("M0,0L104,0L52,72L0,0").attr({
  "fill": "#17A9C6",
  "stroke": "#2A6570",
  "stroke-width": 2
});

// transforming the triangle
tri.animate({
  "transform": "r90t100,0s1.5"
}, 1000);
// the transformation string should be read as: rotate the element by 90 degrees,
// translate it 100px on the X axis, and scale it up by 1.5 times
```

The following screenshot depicts the output of the preceding code: Scaling and rotating a triangle

The triangle is transformed using relative translation (t). Now you know the reason why the triangle moved down rather than to its right.

Animating a shape

What good is a magic wand if it can't animate inanimate objects! RaphaelJS can animate almost any property, from color and opacity to width and height, as smooth as butter, with little fuss. Animation is done through the animate() method. This method takes two required parameters, namely final values and milliseconds, and two optional parameters, easing and callback. The syntax for the animate() method is as follows:

```
Element.animate({
  // animation properties in key-value pairs
}, time, easing, callback_function);
```

Easing is the special effect with which the animation is performed; for example, if the easing is bounce, the animation will appear like a bouncing ball. The following easing options are available in RaphaelJS:

- linear
- < or easeIn or ease-in
- > or easeOut or ease-out
- <> or easeInOut or ease-in-out
- backIn or back-in
- backOut or back-out
- elastic
- bounce

Callbacks are functions that execute when the animation is complete, allowing us to perform some task after the animation.
Let's consider the example of animating the width and height of a rectangle:

```
// creating a raphael paper in 'paperDiv'
var paper = Raphael("paperDiv", 650, 400);
rect.animate({
  "width": 200,  // final width
  "height": 200  // final height
}, 300, "bounce", function() {
  // something to do when the animation is complete - this callback function is optional
  // Print 'Animation complete' when the animation is complete
  $("#animation_status").html("Animation complete");
});
```

The following screenshot shows the rectangle before animation: Rectangle before animation

A screenshot demonstrating the use of a callback function when the animation is complete follows. The text "Animation complete" will appear in the browser after the animation finishes: Use of a callback function

The following code animates the background color and opacity of a rectangle:

```
rect.animate({
  "fill": "#ddd",       // final color
  "fill-opacity": 0.7
}, 300, "easeIn", function() {
  // something to do when the animation is complete - this callback function is optional
  // Alerts "done" when the animation is complete
  alert("done");
});
```

Here the rectangle is animated from blue to gray, and its opacity from 1 to 0.7, over a duration of 300 milliseconds. Opacity in RaphaelJS is the same as in CSS, where 1 is opaque and 0 is transparent.
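To close the loop on the mouse-over scenario mentioned earlier in this article, here is a hedged sketch (an addition, not from the original text) that combines transform() and animate(): Raphael elements expose a hover() method taking mouse-in and mouse-out handlers, so a scale-on-hover effect can be wired up roughly like this, assuming a paper already exists:

```
// Sketch: scale a circle up on mouse-over and back down on mouse-out.
var circle = paper.circle(100, 100, 40).attr({ "fill": "#17A9C6" });

circle.hover(function() {
  // animate a relative scale to 1.3x over 200ms with the bounce easing
  this.animate({ "transform": "s1.3" }, 200, "bounce");
}, function() {
  // animate back to the original size when the pointer leaves
  this.animate({ "transform": "s1" }, 200);
});
```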


PHP 5 Social Networking: Integrating Media in Profile Posts

Packt
22 Oct 2010
13 min read
Since different status types will use different status tables, we should use a left join to connect the tables, so that we can keep just a single query to look up the statuses while still pulling in the extra information when it is required. Let's get started with extending our profiles and the status stream!

Changes to the view

Since all of the media types we are going to support require at least one additional database field in a table that extends the statuses table, we are going to need to display any additional fields on the post status form. The standard type of status doesn't require additional fields, and new media types that we haven't discussed, which we may wish to support in the future, may require more than one additional field. To support a varying number of additional fields depending on the type, we can use some JavaScript (in this case, the jQuery framework) to change the form depending on the context of the status. Beneath the main status box, we can add radio buttons for each of the status types, and depending on the one the user selects, the JavaScript can show or hide the additional fields, making the form more relevant.

Template

Our update status template needs a few changes:

- We need to set the enctype on the form, so that we can upload files (for posting images)
- We need radio buttons for the new types of statuses
- We need additional fields for those statuses

The changes are highlighted in the following code segment:

```
<p>Tell your network what you are up to</p>
<form action="profile/statuses/{profile_user_id}" method="post" enctype="multipart/form-data">
  <textarea id="status" name="status"></textarea>
  <br />
  <input type="radio" name="status_type" id="status_checker_update" class="status_checker" value="update" />Update
  <input type="radio" name="status_type" id="status_checker_video" class="status_checker" value="video" />Video
  <input type="radio" name="status_type" id="status_checker_image" class="status_checker" value="image" />Image
  <input type="radio" name="status_type" id="status_checker_link" class="status_checker" value="link" />Link
  <br />
  <div class="video_input extra_field">
    <label for="video_url" class="">YouTube URL</label>
    <input type="text" id="" name="video_url" class="" /><br />
  </div>
  <div class="image_input extra_field">
    <label for="image_file" class="">Upload image</label>
    <input type="file" id="" name="image_file" class="" /><br />
  </div>
  <div class="link_input extra_field">
    <label for="link_url" class="">Link</label>
    <input type="text" id="" name="link_url" class="" /><br />
    <label for="link_description" class="">Description</label>
    <input type="text" id="" name="link_description" class="" /><br />
  </div>
  <input type="submit" id="updatestatus" name="updatestatus" value="Update" />
</form>
```

These changes also need to be made to the post template, for posting on another user's profile.

jQuery to enhance the user experience

For accessibility purposes, this form must function regardless of whether the user has JavaScript enabled in their browser. To that end, we use JavaScript to hide the unused form elements, so even if the user has JavaScript disabled, they can still use all aspects of the form. We can then use JavaScript to enhance the user experience, toggling which aspects of the form are hidden or shown.

```
<script type="text/javascript">
$(function() {
```

First, we hide all of the extended status fields.
```
$('.extra_field').hide();
$("input[name='status_type']").change(function(){
```

When the user changes the type of status, we hide all of the extended fields.

```
$('.extra_field').hide();
```

We then show the fields directly related to the status type the user has chosen.

```
$('.' + $("input[name='status_type']:checked").val() + '_input').show();
});
});
</script>
```

View in action

If we now take a look at the status updates page for our profile, we have some radio buttons that we can use to toggle elements of the form.

Images

To process images as a new status type, we will need a new database table and a new model that extends the main status model. We will also need some new views, and to change the profile and status stream controllers (though we will make those changes after adding the three new status types).

Database table

The database table for images needs just two fields:

- ID (Integer, primary key): relates the row to the main statuses table
- Image (Varchar): the image filename

These two fields will be connected to the statuses table via a left join, to bring in the image filename for statuses that are images.

Model

The model needs to extend our statuses model, provide setters for any new fields, call the parent constructor, call the parent setTypeReference method to indicate that it is an image, call the parent save method to save the status, and then insert a new record into the image status table with the image information. (A hedged sketch of what that save step might look like follows the constructor listing below.)

Class, variables, and constructor

First, we define the class as an extension of the status class. We then define a variable for the image and construct the object. The constructor calls the parent setTypeReference method to ensure it generates the correct type ID for an image, and then calls the parent constructor so that it too has a reference to the registry object. This file is saved as /models/imagestatus.php.

```
<?php
/**
 * Image status object
 * extends the base status object
 */
class Imagestatus extends status {

    private $image;

    /**
     * Constructor
     * @param Registry $registry
     * @param int $id
     * @return void
     */
    public function __construct( Registry $registry, $id = 0 )
    {
        $this->registry = $registry;
        parent::setTypeReference('image');
        parent::__construct( $this->registry, $id );
    }
```

To call a method from an object's parent class, we use the parent keyword, followed by the scope resolution operator, followed by the method we wish to call.
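The excerpt shows only the constructor; the save step described above (save the base status via the parent, then insert the image-specific row) might look roughly like the following methods inside the same class. This is a hedged sketch: the table name (statuses_images), the database helper calls, and the setImage/getId helpers are assumptions based on the surrounding description, not code from the book.

```
    /**
     * Hypothetical setter for the image filename
     */
    public function setImage( $image )
    {
        $this->image = $image;
    }

    /**
     * Hypothetical save: persist the base status row first,
     * then the image-specific extension row.
     */
    public function save()
    {
        // parent::save() is assumed to insert into the statuses table
        // and make the new status ID available on the object
        parent::save();
        $extra = array(
            'id' => $this->getId(),  // assumed getter from the parent class
            'image' => $this->image
        );
        // 'statuses_images' and insertRecords() are illustrative names only
        $this->registry->getObject('db')->insertRecords( 'statuses_images', $extra );
    }
```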
Let's discuss what an image manager library file should do to make our lives easier:
- Process the upload of an image from $_POST data
- Verify the type of the file and the file extension
- Process images from the file system so that we can modify them
- Display an image to the browser
- Resize an image
- Rescale an image by resizing either the x or y co-ordinate, and scaling the other co-ordinate proportionally
- Get image information such as size and name
- Save the changes to the image

The following is the code required to perform the above-mentioned tasks:

<?php
/**
 * Image manager class
 * @author Michael Peacock
 */
class Imagemanager {

    /**
     * Type of the image
     */
    private $type = '';

    /**
     * Extensions that the user can upload
     */
    private $uploadExtentions = array( 'png', 'jpg', 'jpeg', 'gif' );

    /**
     * Mime types of files the user can upload
     */
    private $uploadTypes = array( 'image/gif', 'image/jpg', 'image/jpeg', 'image/pjpeg', 'image/png' );

    /**
     * The image itself
     */
    private $image;

    /**
     * The image name
     */
    private $name;

    public function __construct(){}

We need a method to load a local image, so that we can work with images saved on the server's file system.

    /**
     * Load image from local file system
     * @param String $filepath
     * @return void
     */
    public function loadFromFile( $filepath )
    {

Based on the path to the image, we can get information on the image, including the type of the image (getimagesize gives us an array of information on the image; the element at index 2 is the image type).

        $info = getimagesize( $filepath );
        $this->type = $info[2];

We can then compare the image type to various PHP constants and, depending on the image type (JPEG, GIF, or PNG), use the appropriate imagecreatefrom function.

        if( $this->type == IMAGETYPE_JPEG ) {
            $this->image = imagecreatefromjpeg($filepath);
        } elseif( $this->type == IMAGETYPE_GIF ) {
            $this->image = imagecreatefromgif($filepath);
        } elseif( $this->type == IMAGETYPE_PNG ) {
            $this->image = imagecreatefrompng($filepath);
        }
    }

We require a couple of getter methods to return the width or height of the image.

    /**
     * Get the image width
     * @return int
     */
    public function getWidth()
    {
        return imagesx($this->image);
    }

    /**
     * Get the height of the image
     * @return int
     */
    public function getHeight()
    {
        return imagesy($this->image);
    }

We use a simple resize method that resizes the image to the dimensions we request.

    /**
     * Resize the image
     * @param int $x width
     * @param int $y height
     * @return void
     */
    public function resize( $x, $y )
    {
        $new = imagecreatetruecolor($x, $y);
        imagecopyresampled($new, $this->image, 0, 0, 0, 0, $x, $y, $this->getWidth(), $this->getHeight());
        $this->image = $new;
    }

Here we have a scaling method that takes a new height, resizes the image to that height, and scales the width proportionally.

    /**
     * Resize the image, scaling the width, based on a new height
     * @param int $height
     * @return void
     */
    public function resizeScaleWidth( $height )
    {
        $width = $this->getWidth() * ( $height / $this->getHeight() );
        $this->resize( $width, $height );
    }

Similar to the above method, this method takes a width parameter, resizes the width, and rescales the height accordingly.
    /**
     * Resize the image, scaling the height, based on a new width
     * @param int $width
     * @return void
     */
    public function resizeScaleHeight( $width )
    {
        $height = $this->getHeight() * ( $width / $this->getWidth() );
        $this->resize( $width, $height );
    }

The following is another scaling method, this time rescaling the image to a percentage of its current size:

    /**
     * Scale an image
     * @param int $percentage
     * @return void
     */
    public function scale( $percentage )
    {
        $width = $this->getWidth() * $percentage / 100;
        $height = $this->getHeight() * $percentage / 100;
        $this->resize( $width, $height );
    }

To output the image to the browser from PHP, we need to check the type of the image, set the appropriate header based on the type, and then use the appropriate image function to render the image. After calling this method, we need to call exit() to ensure the image is displayed correctly.

    /**
     * Display the image to the browser - called before output is sent; exit() should be called straight after.
     * @return void
     */
    public function display()
    {
        if( $this->type == IMAGETYPE_JPEG ) {
            $type = 'image/jpeg';
        } elseif( $this->type == IMAGETYPE_GIF ) {
            $type = 'image/gif';
        } elseif( $this->type == IMAGETYPE_PNG ) {
            $type = 'image/png';
        }
        header('Content-Type: ' . $type );
        if( $this->type == IMAGETYPE_JPEG ) {
            imagejpeg( $this->image );
        } elseif( $this->type == IMAGETYPE_GIF ) {
            imagegif( $this->image );
        } elseif( $this->type == IMAGETYPE_PNG ) {
            imagepng( $this->image );
        }
    }

To load an image from $_POST data, we need to know the post field the image is being sent through, the directory in which we wish to place the image, and any additional prefix we may wish to add to the image's name (to prevent conflicts between images with the same name).

    /**
     * Load image from post data
     * @param String $postfield the field the image was uploaded via
     * @param String $moveto the location for the upload
     * @param String $name_prefix a prefix for the filename
     * @return boolean
     */
    public function loadFromPost( $postfield, $moveto, $name_prefix='' )
    {

Before doing anything, we should check that the file requested really is an uploaded file (and that this isn't a malicious user trying to access other files).

        if( is_uploaded_file( $_FILES[ $postfield ]['tmp_name'] ) ) {
            $i = strrpos( $_FILES[ $postfield ]['name'], '.');
            if ( ! $i ) {
                // no extension
                return false;
            } else {

We then check that the extension of the file is in our allowed extensions array.

                $l = strlen( $_FILES[ $postfield ]['name'] ) - $i;
                $ext = strtolower( substr( $_FILES[ $postfield ]['name'], $i+1, $l ) );
                if( in_array( $ext, $this->uploadExtentions ) ) {

Next, we check whether the file type is an allowed file type.

                    if( in_array( $_FILES[ $postfield ]['type'], $this->uploadTypes ) ) {

Then we move the file, which has already been uploaded to the server's temp folder, to our own uploads directory, and load it into our image manager class for any further processing we wish to do.

                        $name = str_replace( ' ', '', $_FILES[ $postfield ]['name'] );
                        $this->name = $name_prefix . $name;
                        $path = $moveto . $name_prefix . $name;
                        move_uploaded_file( $_FILES[ $postfield ]['tmp_name'], $path );
                        $this->loadFromFile( $path );
                        return true;
                    } else {
                        // invalid type
                        return false;
                    }
                } else {
                    // invalid extension
                    return false;
                }
            }
        } else {
            // not an uploaded file
            return false;
        }
    }

The following getter method returns the name of the image we are working with:

    /**
     * Get the image name
     * @return String
     */
    public function getName()
    {
        return $this->name;
    }

Finally, we have our save method, which again must detect the type of the image to work out which function to use.

    /**
     * Save changes to an image e.g. after resize
     * @param String $location location of image
     * @param String $type type of the image
     * @param int $quality image quality /100
     * @return void
     */
    public function save( $location, $type='', $quality=100 )
    {
        $type = ( $type == '' ) ? $this->type : $type;
        if( $type == IMAGETYPE_JPEG ) {
            imagejpeg( $this->image, $location, $quality);
        } elseif( $type == IMAGETYPE_GIF ) {
            imagegif( $this->image, $location );
        } elseif( $type == IMAGETYPE_PNG ) {
            imagepng( $this->image, $location );
        }
    }
}
?>

Using the image manager library to process the file upload
Now that we have a simple, centralized way of processing file uploads and resizing images, we can process the image the user is trying to upload as their extended status.

/**
 * Process an image upload and set the image
 * @param String $postfield the $_POST field the image was uploaded through
 * @return boolean
 */
public function processImage( $postfield )
{
    require_once( FRAMEWORK_PATH . 'lib/images/imagemanager.class.php' );
    $im = new Imagemanager();
    $prefix = time() . '_';
    if( $im->loadFromPost( $postfield, $this->registry->getSetting('upload_path') . 'statusimages/', $prefix ) )
    {
        $im->resizeScaleWidth( 150 );
        $im->save( $this->registry->getSetting('upload_path') . 'statusimages/' . $im->getName() );
        $this->image = $im->getName();
        return true;
    }
    else
    {
        return false;
    }
}

Saving the status
This leaves us with the final method, for saving the status. It calls the parent object's save method to create the record in the statuses table, then gets the new record's ID and inserts a row into the images table using the same ID.

/**
 * Save the image status
 * @return void
 */
public function save()
{
    // save the parent object and thus the statuses table
    parent::save();
    // grab the newly inserted status ID
    $id = $this->getID();
    // insert into the images status table, using the same ID
    $extended = array();
    $extended['id'] = $id;
    $extended['image'] = $this->image;
    $this->registry->getObject('db')->insertRecords( 'statuses_images', $extended );
}
}
?>
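To round the model off, here is a rough sketch of how a controller might drive Imagestatus once these changes are in place. The field name 'image_file' matches the upload field from our template; the commented line stands in for whatever setters the parent status model actually exposes, which are not shown here:

// Hypothetical controller snippet: handle a posted image status.
// Assumes $this->registry is available, as elsewhere in the framework.
$status = new Imagestatus( $this->registry );
// processImage() resizes and stores the upload, and remembers the filename
if( $status->processImage( 'image_file' ) )
{
    // ...populate the base status fields (poster, profile, text) here...
    $status->save();
}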

Using PhpStorm in a Team
In this article by Mukund Chaudhary and Ankur Kumar, authors of the book PhpStorm Cookbook, we will cover the following recipes:
- Getting a VCS server
- Creating a VCS repository
- Connecting PhpStorm to a VCS repository
- Storing a PhpStorm project in a VCS repository

(For more resources related to this topic, see here.)

Getting a VCS server
The first action that you have to undertake is to decide which VCS you are going to use. There are a number of systems available, such as Git and Subversion (commonly known as SVN). Subversion is free and open source software that you can download and install on your development server. There is another, older system named Concurrent Versions System (CVS). Both are meant to provide a code versioning service to you, but SVN is newer and supposedly faster than CVS. Since SVN is the newer system, and in order to give you the most current information, this text will concentrate on the features of Subversion only.

Getting ready
So, finally, that moment has arrived when you will start off working in a team by getting a VCS system for you and your team. The installation of SVN on the development system can be done in two ways: easy and difficult. The difficult way can be skipped without consideration, because it is meant for developers who want to contribute to the Subversion system itself. Since you are dealing with PhpStorm, the easier way is the one to remember, because you have a lot more to do.

How to do it...
The installation step is very easy. There is the aptitude utility available on Debian-based systems, and the Yum utility available on Red Hat-based systems. Perform the following steps:
1. You just need to issue the command apt-get install subversion. The operating system's package manager will do the remaining work for you. In a very short time, after flooding the command-line console with messages, you will have the Subversion system installed.
2. To check whether the installation was successful, issue the command whereis svn. If there is a message, it means that you have installed Subversion successfully.

If you do not want to bear the load of installing Subversion on your development system, you can use commercial third-party servers. But that is more of a layman's approach to solving problems, and no PhpStorm cookbook author will recommend that you do that. You are a software engineer; you should not let go so easily.

How it works...
When you install the version control system, you actually install a server that provides the version control service to version control clients. The Subversion service listens for incoming connections from remote clients on port number 3690 by default.

There's more...
If you want to install the older companion, CVS, you can do that in a similar way, as shown in the following steps:
1. You need to download the archive for the CVS server software.
2. You need to unpack it from the archive using your favorite unpacking software. You can move it to another convenient location, since you will not need to disturb this folder in the future.
3. You then need to move into the directory, and there your compilation process will start.
4. You need to run # ./configure to create the make targets.
5. Having made the targets, you need to enter # make install to complete the installation procedure.

CVS being older software, you might have to compile it from the source code as the only alternative.
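Assuming the Debian-based route, the whole install-and-verify sequence from this recipe condenses to something like the following; the port check at the end is an extra step, not part of the recipe, and its exact output will vary from system to system:

# install the Subversion server
apt-get install subversion

# confirm the binaries are in place
whereis svn

# later, once the server is running (see the next recipe),
# confirm it is listening on port 3690
netstat -ltn | grep 3690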
Creating a VCS repository
More often than not, a PHP programmer is expected to know some system concepts, because it is often required to change settings for the PHP interpreter. The changes could be in the form of, say, changing the execution time or adding/removing modules, and so on. In order to start working in a team, you are going to get your hands dirty with system actions.

Getting ready
You will have to create a new repository on the development server so that PhpStorm can act as a client and connect to it. Here, it is important to note the difference between an SVN client and an SVN server: an SVN client can be any of these, a standalone client or an embedded client such as an IDE. The SVN server, on the other hand, is a single item. It is a continuously running process on a server of your choice.

How to do it...
You need to be careful while performing this activity, as a single mistake can ruin your efforts. Perform the following steps:
1. There is a command, svnadmin, that you need to know. Using this command, you can create a new directory on the server that will contain the code base. You should be careful when selecting a directory on the server, as it will appear in your SVN URL for the rest of your life. The command should be executed as:
   svnadmin create /path/to/your/repo/
2. Having created a new repository on the server, you need to make certain settings for the server. This is just a normal phenomenon, because every server requires a configuration. The SVN server configuration is located under /path/to/your/repo/conf/ with the name svnserve.conf. Inside the file, you need to add these lines at the bottom:
   anon-access = none
   auth-access = write
   password-db = passwd
3. There has to be a password file to authorize the list of users who are allowed to use the repository. The password file in this case is named passwd (the default filename). The file consists of a number of lines, each containing a username and the corresponding password in the form username = password. Since these files are parsed by the server according to a particular algorithm, you don't have the freedom to leave deliberate spaces in the file—error messages will be displayed in those cases.
4. Having made the appropriate settings, you can now start the SVN service so that an SVN client can access it. You need to issue the command svnserve -d to do that.
5. It is always good practice to keep checking whether what you do is correct. To validate a proper installation, issue the command svn ls svn://user@host/path/to/subversion/repo/. The output will be as shown in the following screenshot:

How it works...
The svnadmin command is used to perform admin tasks on the Subversion server. The create option creates a new folder on the server that acts as the repository for access from Subversion clients. The configuration file is created by default at the time of server installation. The lines that are added to the file are configuration directives that control the behavior of the Subversion server. Thus, the settings mentioned prevent anonymous access and restrict write operations to certain users, whose access details are mentioned in a file. The command svnserve again needs to be run on the server side, and it starts an instance of the server. The -d switch specifies that the server should run as a daemon (system process). This also means that your server will continue running until you manually stop it or the entire system goes down.

Again, you can skip this section if you have opted for a third-party version control service provider.
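As a concrete illustration of steps 2 and 3, the two configuration files might end up looking like this; the usernames and passwords are, of course, made up, and note the [users] section header that svnserve expects in the password file:

# /path/to/your/repo/conf/svnserve.conf (appended lines)
anon-access = none
auth-access = write
password-db = passwd

# /path/to/your/repo/conf/passwd
[users]
mukund = secret123
ankur = secret456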
Connecting PhpStorm to a VCS repository
The real utility of software is when you use it. So, having installed the version control system, you need to be prepared to use it.

Getting ready
SVN being client-server software, having installed the server, you now need a client. You will not have to search hard for a good SVN client: one has been factory-provided to you inside PhpStorm. The PhpStorm SVN client provides features that accelerate your development tasks by giving you detailed information about the changes made to the code. So, go ahead and connect PhpStorm to the Subversion repository you created.

How to do it...
In order to connect PhpStorm to the Subversion repository, you first need to activate the Subversion view, which is available at View | Tool Windows | Svn Repositories. Having activated the view, perform the following steps:
1. Add the repository location to PhpStorm. To do that, use the + symbol in the top-left corner of the view you have opened, as shown in the following screenshot:
2. Upon selecting the Add option, PhpStorm asks you for the location of the repository. You need to provide the full location. Once you provide it, you will be able to see the repository in the same Subversion view in which you pressed the Add button.

Here, you should always keep in mind the correct protocol to use. This depends on the way you installed the Subversion system on the development machine:
- If you used the default installation by installing from the packaging utility (apt-get or aptitude), you need to specify svn://.
- If you have configured SVN to be accessible via SSH, you need to specify svn+ssh://.
- If you have explicitly configured SVN to be used with the Apache web server, you need to specify http://.
- If you configured SVN with Apache over the secure protocol, you need to specify https://.

Storing a PhpStorm project in a VCS repository
Here comes the actual start of the teamwork. Even if you and your other team members have connected to the repository, what advantage does it serve? What purpose is solved by merely connecting to the version control repository? Correct. The actual thing is the code that you work on. It is the code that earns you your bread.

Getting ready
You should now store a project in the Subversion repository so that the other team members can work on it and add more features to your code. It is time to add a project to version control. You do not need to start a new project from scratch to add to the repository: any project, any work that you have done and wish to have the team work on, can now be added to the repository. Since the most relevant project in the current context is the cooking project, you can try adding that. There you go.

How to do it...
In order to add a project to the repository, perform the following steps:
1. Use the menu item provided at VCS | Import into version control | Share project (subversion). PhpStorm will ask you a question, as shown in the following screenshot:
2. Select the correct hierarchy to define the share target—the correct location where your project will be saved.
3. If you wish to create the tags and branches in the code base, select the corresponding checkbox. It is good practice to provide comments on the commits that you make: the reason becomes apparent when you sit down to create a release document, and it also makes each change more understandable for the other team members.
4. PhpStorm then asks you which format you want the working copy to be in. This is related to the version of the version control software. You just need to smile, select the latest version number, and proceed, as shown in the following screenshot:
5. Having done that, PhpStorm will ask you to enter your credentials. You need to enter the same credentials that you saved in the configuration file (see the Creating a VCS repository recipe) or the credentials that your service provider gave you. You can ask PhpStorm to save the credentials for you, as shown in the following screenshot:

How it works...
It is worth understanding what is going on behind the curtains. When you perform any Subversion-related task in PhpStorm, an inbuilt SVN client executes the commands for you. Thus, when you add a project to version control, the code base is given a version number. This makes the version system remember the state of the code base. In other words, when you add the code base to version control, you add a checkpoint that you can revisit at any point in the future, for as long as the code base remains under the same version control system. Interesting phenomenon, isn't it?

There's more...
If you have installed the version control software yourself, and you did not make the setting to store the password in encrypted text, PhpStorm will warn you about it, as shown in the following screenshot:

Summary
We got to know about version control systems, the step-by-step process of creating a VCS repository, and connecting PhpStorm to a VCS repository.

Resources for Article:
Further resources on this subject:
- FuelPHP [article]
- A look into the high-level programming operations for the PHP language [article]
- PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1 [article]
Forms and Views
In this article by Aidas Bendoraitis, author of the book Web Development with Django Cookbook - Second Edition, we will cover the following topics:
- Passing HttpRequest to the form
- Utilizing the save method of the form

(For more resources related to this topic, see here.)

Introduction
When the database structure is defined in the models, we need some views to let the users enter data or to show the data to people. In this chapter, we will focus on the views managing forms, the list views, and views generating an output other than HTML. For the simplest examples, we will leave the creation of URL rules and templates up to you.

Passing HttpRequest to the form
The first argument of every Django view is the HttpRequest object, usually named request. It contains metadata about the request, for example, the current language code, the current user, current cookies, and the current session. By default, the forms that are used in the views accept the GET or POST parameters, files, initial data, and other parameters; however, not the HttpRequest object. In some cases, it is useful to additionally pass HttpRequest to the form, especially when you want to filter the choices of form fields using the request data, or handle saving something such as the current user or IP address in the form.

In this recipe, we will see an example of a form where a person can choose a user and write a message for them. We will pass the HttpRequest object to the form in order to exclude the current user from the recipient choices; we don't want anybody to write a message to themselves.

Getting ready
Let's create a new app called email_messages and put it in INSTALLED_APPS in the settings. This app will have no models, just forms and views.

How to do it...
To complete this recipe, execute the following steps:
1. Add a new forms.py file with the message form containing two fields: the recipient selection and the message text. This form will also have an initialization method, which accepts the request object and then modifies the QuerySet for the recipient's selection field:

# email_messages/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import User


class MessageForm(forms.Form):
    recipient = forms.ModelChoiceField(
        label=_("Recipient"),
        queryset=User.objects.all(),
        required=True,
    )
    message = forms.CharField(
        label=_("Message"),
        widget=forms.Textarea,
        required=True,
    )

    def __init__(self, request, *args, **kwargs):
        super(MessageForm, self).__init__(*args, **kwargs)
        self.request = request
        self.fields["recipient"].queryset = \
            self.fields["recipient"].queryset.exclude(pk=request.user.pk)

2. Then, create views.py with the message_to_user() view in order to handle the form. As you can see, the request object is passed as the first parameter to the form, as follows:

# email_messages/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, redirect

from .forms import MessageForm


@login_required
def message_to_user(request):
    if request.method == "POST":
        form = MessageForm(request, data=request.POST)
        if form.is_valid():
            # do something with the form
            return redirect("message_to_user_done")
    else:
        form = MessageForm(request)
    return render(request,
        "email_messages/message_to_user.html",
        {"form": form},
    )

How it works...
In the initialization method, we have the self variable that represents the instance of the form itself; we also have the newly added request variable; and then we have the rest of the positional arguments (*args) and named arguments (**kwargs). We call the super() initialization method, passing all the positional and named arguments to it, so that the form is properly initiated. We then assign the request variable to a new request attribute of the form for later access in other methods of the form. Then, we modify the queryset attribute of the recipient's selection field, excluding the current user from the request.

In the view, we pass the HttpRequest object as the first argument in both situations: when the form is posted, as well as when it is loaded for the first time.

See also
- The Utilizing the save method of the form recipe

Utilizing the save method of the form
To keep your views clean and simple, it is good practice to move the handling of the form data to the form itself, whenever possible and sensible. The common practice is to have a save() method that will save the data, perform a search, or do some other smart actions. We will extend the form that was defined in the previous recipe with a save() method, which will send an e-mail to the selected recipient.

Getting ready
We will build upon the example that was defined in the Passing HttpRequest to the form recipe.

How to do it...
To complete this recipe, execute the following two steps:
1. From Django, import the send_mail() function. Then, add the save() method to MessageForm. It will try to send an e-mail to the selected recipient and will fail quietly if any errors occur:

# email_messages/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from django.utils.translation import ugettext, ugettext_lazy as _
from django.core.mail import send_mail
from django.contrib.auth.models import User


class MessageForm(forms.Form):
    recipient = forms.ModelChoiceField(
        label=_("Recipient"),
        queryset=User.objects.all(),
        required=True,
    )
    message = forms.CharField(
        label=_("Message"),
        widget=forms.Textarea,
        required=True,
    )

    def __init__(self, request, *args, **kwargs):
        super(MessageForm, self).__init__(*args, **kwargs)
        self.request = request
        self.fields["recipient"].queryset = \
            self.fields["recipient"].queryset.exclude(pk=request.user.pk)

    def save(self):
        cleaned_data = self.cleaned_data
        send_mail(
            subject=ugettext("A message from %s") % self.request.user,
            message=cleaned_data["message"],
            from_email=self.request.user.email,
            recipient_list=[
                cleaned_data["recipient"].email
            ],
            fail_silently=True,
        )

2. Then, call the save() method from the form in the view if the posted data is valid:

# email_messages/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.contrib.auth.decorators import login_required
from django.shortcuts import render, redirect

from .forms import MessageForm


@login_required
def message_to_user(request):
    if request.method == "POST":
        form = MessageForm(request, data=request.POST)
        if form.is_valid():
            form.save()
            return redirect("message_to_user_done")
    else:
        form = MessageForm(request)
    return render(request,
        "email_messages/message_to_user.html",
        {"form": form},
    )

How it works...
Let's take a look at the form. The save() method uses the cleaned data from the form to read the recipient's e-mail address and the message. The sender of the e-mail is the current user from the request.
If the e-mail cannot be sent due to an incorrect mail server configuration or another reason, it will fail silently; that is, no error will be raised.

Now, let's look at the view. When the posted form is valid, the save() method of the form is called and the user is redirected to the success page.

See also
- The Passing HttpRequest to the form recipe

Uploading images
In this recipe, we will take a look at the easiest way to handle image uploads. You will see an example of an app where visitors can upload images with inspirational quotes.

Getting ready
Make sure to have Pillow or PIL installed in your virtual environment or globally. Then, let's create a quotes app and put it in INSTALLED_APPS in the settings. We will then add an InspirationalQuote model with three fields: the author, the quote text, and the picture, as follows:

# quotes/models.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import os
from django.db import models
from django.utils.timezone import now as timezone_now
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import python_2_unicode_compatible


def upload_to(instance, filename):
    now = timezone_now()
    filename_base, filename_ext = os.path.splitext(filename)
    return "quotes/%s%s" % (
        now.strftime("%Y/%m/%Y%m%d%H%M%S"),
        filename_ext.lower(),
    )


@python_2_unicode_compatible
class InspirationalQuote(models.Model):
    author = models.CharField(_("Author"), max_length=200)
    quote = models.TextField(_("Quote"))
    picture = models.ImageField(_("Picture"),
        upload_to=upload_to,
        blank=True,
        null=True,
    )

    class Meta:
        verbose_name = _("Inspirational Quote")
        verbose_name_plural = _("Inspirational Quotes")

    def __str__(self):
        return self.quote

In addition, we created an upload_to function, which sets the path of the uploaded picture to be something similar to quotes/2015/04/20150424140000.png. As you can see, we use the date timestamp as the filename to ensure its uniqueness. We pass this function to the picture image field.
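To make the naming scheme concrete, here is a small self-contained sketch of how the upload_to callable behaves. We fake the clock with a fixed datetime instead of using django.utils.timezone, so that the example runs on its own; the timestamp shown is, of course, just an example:

# A sketch of what upload_to produces for a given upload time.
import os
from datetime import datetime

def upload_to_example(filename, now):
    filename_base, filename_ext = os.path.splitext(filename)
    return "quotes/%s%s" % (now.strftime("%Y/%m/%Y%m%d%H%M%S"), filename_ext.lower())

print(upload_to_example("My Quote.PNG", datetime(2015, 4, 24, 14, 0, 0)))
# prints: quotes/2015/04/20150424140000.png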
How to do it...
Execute these steps to complete the recipe:
1. Create the forms.py file and put a simple model form there:

# quotes/forms.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django import forms
from .models import InspirationalQuote


class InspirationalQuoteForm(forms.ModelForm):
    class Meta:
        model = InspirationalQuote
        fields = ["author", "quote", "picture"]

2. In the views.py file, put a view that handles the form. Don't forget to pass the FILES dictionary-like object to the form. When the form is valid, trigger the save() method as follows:

# quotes/views.py
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
from django.shortcuts import redirect
from django.shortcuts import render

from .forms import InspirationalQuoteForm


def add_quote(request):
    if request.method == "POST":
        form = InspirationalQuoteForm(
            data=request.POST,
            files=request.FILES,
        )
        if form.is_valid():
            quote = form.save()
            return redirect("add_quote_done")
    else:
        form = InspirationalQuoteForm()
    return render(request,
        "quotes/change_quote.html",
        {"form": form},
    )

3. Lastly, create a template for the view in templates/quotes/change_quote.html. It is very important to set the enctype attribute of the HTML form to multipart/form-data, otherwise the file upload won't work:

{# templates/quotes/change_quote.html #}
{% extends "base.html" %}
{% load i18n %}

{% block content %}
    <form method="post" action="" enctype="multipart/form-data">
        {% csrf_token %}
        {{ form.as_p }}
        <button type="submit">{% trans "Save" %}</button>
    </form>
{% endblock %}

How it works...
Django model forms are forms created from models. They provide all the fields from the model, so you don't need to define them again. In the preceding example, we created a model form for the InspirationalQuote model. When we save the form, the form knows how to save each field in the database, as well as how to upload the files and save them in the media directory.

There's more...
As a bonus, we will see an example of how to generate a thumbnail out of the uploaded image. Using this technique, you could also generate several other specific versions of the image, such as a list version, a mobile version, and a desktop computer version.

We will add three methods to the InspirationalQuote model (quotes/models.py): save(), create_thumbnail(), and get_thumbnail_picture_url(). When the model is being saved, we trigger the creation of the thumbnail. When we need to show the thumbnail in a template, we can get its URL using {{ quote.get_thumbnail_picture_url }}. The method definitions are as follows:

# quotes/models.py
# ...
from PIL import Image
from django.conf import settings
from django.core.files.storage import default_storage as storage

THUMBNAIL_SIZE = getattr(
    settings,
    "QUOTES_THUMBNAIL_SIZE",
    (50, 50),
)


class InspirationalQuote(models.Model):
    # ...

    def save(self, *args, **kwargs):
        super(InspirationalQuote, self).save(*args, **kwargs)
        # generate thumbnail picture version
        self.create_thumbnail()

    def create_thumbnail(self):
        if not self.picture:
            return ""
        file_path = self.picture.name
        filename_base, filename_ext = os.path.splitext(file_path)
        thumbnail_file_path = "%s_thumbnail.jpg" % filename_base
        if storage.exists(thumbnail_file_path):
            # if thumbnail version exists, return its url path
            return "exists"
        try:
            # resize the original image and
            # return URL path of the thumbnail version
            f = storage.open(file_path, 'r')
            image = Image.open(f)
            width, height = image.size
            if width > height:
                delta = width - height
                left = int(delta / 2)
                upper = 0
                right = height + left
                lower = height
            else:
                delta = height - width
                left = 0
                upper = int(delta / 2)
                right = width
                lower = width + upper
            image = image.crop((left, upper, right, lower))
            image = image.resize(THUMBNAIL_SIZE, Image.ANTIALIAS)
            f_mob = storage.open(thumbnail_file_path, "w")
            image.save(f_mob, "JPEG")
            f_mob.close()
            return "success"
        except:
            return "error"

    def get_thumbnail_picture_url(self):
        if not self.picture:
            return ""
        file_path = self.picture.name
        filename_base, filename_ext = os.path.splitext(file_path)
        thumbnail_file_path = "%s_thumbnail.jpg" % filename_base
        if storage.exists(thumbnail_file_path):
            # if thumbnail version exists, return its URL path
            return storage.url(thumbnail_file_path)
        # return original as a fallback
        return self.picture.url

In the preceding methods, we are using the file storage API instead of directly juggling the filesystem; we could then exchange the default storage for Amazon S3 buckets or other storage services, and the methods would still work.

How does creating the thumbnail work?
If we had the original file saved as quotes/2014/04/20140424140000.png, we check whether the quotes/2014/04/20140424140000_thumbnail.jpg file exists; if it doesn't, we open the original image, crop it from the center, resize it to 50 x 50 pixels, and save it to the storage. The get_thumbnail_picture_url() method checks whether the thumbnail version exists in the storage and returns its URL. If the thumbnail version does not exist, the URL of the original image is returned as a fallback.

Summary
In this article, we learned about passing an HttpRequest to the form and utilizing the save method of the form.

You can find various books on Django on our website:
- Learning Website Development with Django (https://www.packtpub.com/web-development/learning-website-development-django)
- Instant Django 1.5 Application Development Starter (https://www.packtpub.com/web-development/instant-django-15-application-development-starter)
- Django Essentials (https://www.packtpub.com/web-development/django-essentials)

Resources for Article:
Further resources on this subject:
- So, what is Django? [article]
- Code Style in Django [article]
- Django JavaScript Integration: jQuery In-place Editing Using Ajax [article]

Vaadin and its Context
(For more resources related to this topic, see here.)

Developing Java applications and, more specifically, developing Java web applications should be fun. Instead, most projects are a mess of sweat and toil, pressure and delays, costs and cost cutting. Web development has lost its appeal. Yet, among the many frameworks available, there is one in particular that draws our attention because of its ease of use and its original stance. It has been around for the past decade and has begun to grow in importance. The name of this framework is Vaadin. The goal of this article is to see, step by step, how to develop web applications with Vaadin.

Vaadin is the Finnish word for a female reindeer (as well as a Finnish goddess). This piece of information will do marvels for your social life, as you are now one of the few people on Earth who know this (outside Finland).

Before diving right into Vaadin, it is important to understand what led to its creation. Readers who already have this information (or who don't care) should go directly to Environment Setup.

Rich applications
Vaadin is often referred to as a Rich Internet Application (RIA) framework. Before explaining why, we need to first define some terms which will help us describe the framework. In particular, we will have a look at application tiers, the different kinds of clients, and their history.

Application tiers
Some software runs locally, that is, on the client machine, and some runs remotely, such as on a server machine. Some applications also run on both the client and the server. For example, when requesting an article from a website, we interact with a browser on the client side, but the order itself is passed to a server in the form of a request.

Traditionally, all applications can be logically separated into tiers, each having different responsibilities, as follows:
- Presentation: The presentation tier is responsible for displaying information to the end user and handling interaction. It is the realm of the user interface.
- Business logic: The logic tier is responsible for controlling the application logic and functionality. It is also known as the application tier, or the middle tier, as it is the glue between the other two surrounding tiers, thus leading to the term middleware.
- Data: The data tier is responsible for storing and retrieving data. This backend may be a file system. In most cases, it is a database, whether relational, flat, or even object-oriented.

This categorization not only naturally corresponds to specialized features, but also allows you to physically separate your system into different parts, so that you can change a tier with reduced impact on adjacent tiers and no impact on non-adjacent tiers.

Tier migration
In the history of computers and computer software, these three tiers have moved back and forth between the server and the client.

Mainframes
When computers were mainframes, all tiers were handled by the server. Mainframes stored data, processed it, and were also responsible for the layout of the presentation. Clients were dumb terminals, suited only for displaying characters on the screen and accepting user input.

Client server
Not many companies could afford the acquisition of a mainframe (and many still cannot). Yet, those same companies could not do without computers at all, because the growing complexity of business processes needed automation. The development of the personal computer led to a decrease in cost, and with the need to share data between machines, network traffic rose.
This period in history saw the rise of the personal computer, as well as of the client-server term, as there was now a true client. The presentation and logic tiers moved locally, while shared databases remained remotely accessible, as shown in the following diagram:

Thin clients
Big companies migrating from mainframes to client-server architectures thought that deploying software on ten client machines on the same site was relatively easy and could be done in a few hours. However, they quickly became aware of the fact that, with the number of machines growing in a multi-site business, deployment could quickly become a nightmare. Enterprises also found that not only the development phase had to be managed like a project, but the installation phase as well. When upgrading either the client or the server, the installation time was most likely high, which in turn led to downtime, and that led to additional business costs.

Around 1991, Sir Tim Berners-Lee invented the Hyper Text Markup Language, better known as HTML. Some time after that, people changed its original use, which was to navigate between documents, to make HTML-based web applications. This solved the deployment problem, as the logic tier was run on a single server node (or a cluster), and each client connected to this server. A deployment could be done in a matter of minutes, at worst overnight, which was a huge improvement. The presentation layer was still hosted on the client, with the browser responsible for displaying the user interface and handling user interaction.

This new approach brought new terms, which are as follows:
- The old client-server architecture was now referred to as fat client.
- The new architecture was coined thin client, as shown in the following diagram:

Limitations of the thin-client applications approach
Unfortunately, this evolution was made for financial reasons and did not take into account some very important drawbacks of the thin client.

Poor choice of controls
HTML does not support many controls, and what is available is not on par with fat-client technologies. Consider, for example, the list box: in any fat client, the choices displayed to the user can be filtered according to what is typed in the control. In legacy HTML, there is no such feature, and all lines are displayed in all cases. Even with HTML5, which is supposed to add this feature, it is sadly not implemented in all browsers. This is a usability disaster if you need to display a list of countries (more than 200 entries!). As such, the ergonomics of true thin clients have nothing to do with those of their fat-client ancestors.

Many unrelated technologies
Developers of fat-client applications have to learn only two languages: SQL and the technology's language, such as Visual Basic, Java, and so on. Web developers, on the contrary, have to learn an entire stack of technologies, both on the client side and on the server side.

On the client side, the following are the requirements:
- First, of course, is HTML. It is the basis of all web applications, and although some do not consider it a programming language per se, every web developer must learn it so that they can create content to be displayed by browsers.
- In order to apply common styling to your application, you will probably have to learn the Cascading Style Sheets (CSS) technology. CSS is available in three main versions, each version being more or less supported by browser version combinations (see Browser compatibility).
- Most of the time, it is nice to have some interactivity on the client side, like pop-up windows and the like. In this case, we need a scripting technology such as ECMAScript. ECMAScript is the specification of which JavaScript is an implementation (along with ActionScript). It is standardized by the ECMA organization. See http://www.ecma-international.org/publications/standards/Ecma-262.htm for more information on the subject.
- Finally, since you will probably need to update the structure of the HTML page, a healthy dose of knowledge of the Document Object Model (DOM) is necessary. As a side note, consider that HTML, CSS, and DOM are W3C specifications, while ECMAScript is an ECMA standard.

From a Java point of view, on the server side, the following are the requirements:
- As servlets are the most common form of request-response user interaction in Java EE, every web developer worth their salt has to know both the Servlet specification and the Servlet API.
- Moreover, most web applications tend to enforce the Model-View-Controller paradigm. As such, the Java EE specification enforces the use of servlets for controllers and JavaServer Pages (JSP) for views. As JSP are intended to be templates, developers who create JSP have an additional syntax to learn, even though they offer the same features as servlets.
- JSP accept scriptlets, that is, Java code snippets. Good coding practices tend to frown upon this, however, as Java code can contain any feature, including some that should not be part of views—for example, database access code. Therefore, a completely new technology stack is proposed in order to limit the code included in JSP: the tag libraries. These tag libraries also have a specification and an API, and that is another stack to learn.

However, these are only the standard requirements you should know in order to develop web applications in Java. Most of the time, in order to boost developer productivity, one has to use frameworks. Such frameworks are available for most of the previously cited technologies. Some of them are supported by Oracle, such as JavaServer Faces; others are open source, such as Struts. Java EE 6 seems to favor the replacement of JSP and servlets by JavaServer Faces (JSF). Although JSF aims to provide a component-based MVC framework, it is plagued by a relative complexity regarding its component lifecycle.

Having to know so much has negative effects, a few of which are as follows:
- On the technical side, as web developers have to manage so many different technologies, web development is more complex than fat-client development, potentially leading to more bugs.
- On the human resources side, different technologies mean either that different profiles are required or that more resources are needed; either way, it adds to the complexity of human resource management.
- On the project management side, increased complexity causes lengthier projects: developing a web application potentially takes longer than developing a fat-client application.

All of these factors tend to make thin-client development cost much more than fat-client development, albeit with a deployment cost close to zero.

Browser compatibility
The Web has standards, most of them upheld by the World Wide Web Consortium. Browsers more or less implement these standards, depending on the vendor and the version. The ACID test, in version 3, is a test for browser compatibility with web standards. Fortunately, most browsers pass the test with 100 percent success, which was not the case two years ago.
Some browsers even make the standards evolve; for example, Microsoft implemented the XmlHttpRequest object in Internet Explorer and thus formed the basis for Ajax.

One should be aware of the combination of platform, browser, and version. As some browsers cannot be installed in different versions on the same platform, testing can quickly become a mess (which can fortunately be mitigated with virtual machines and custom tools like http://browsershots.org). Applications should be developed with browser combinations in mind, and then tested on them, in order to ensure application compatibility. For intranet applications, the number of supported browsers is normally limited. For Internet applications, however, the most common combinations must be supported in order to increase availability. If this wasn't enough, the same browser in the same version may run differently on different operating systems. In all cases, each combination has an exponential impact on the application's complexity, and therefore, on cost.

Page flow paradigm
Fat-client applications manage windows. Most of the time, there is a main window. Actions are mainly performed in this main window, even if managed windows or pop-up windows are sometimes used.

As web applications are browser-based and use HTML over HTTP, things are managed differently. In this case, the presentation unit is not the window but the page. This is a big difference that entails a performance problem: each time the user clicks on a submit button, the request is sent to the server, processed by it, and the HTML response is sent back to the client. For example, when a client submits a complex registration form, the entire page is recreated on the server side and sent back to the browser, even if there is only a minor validation error and the required changes to the registration form would have been minimal.

Beyond the limits
Over the last few years, users have been applying pressure in order to have user interfaces that offer the same richness as the good old fat-client applications. IT managers, however, are unwilling to go back to the old deploy-as-a-project routine and its associated costs and complexity. They push towards the same deployment process as thin-client applications. It is no surprise that there are different solutions to this dilemma.

What are rich clients?
All the following solutions are globally called rich clients, even if their approaches differ. They have something in common though: all of them want to retain the ease of deployment of the thin client and solve some or all of the problems mentioned previously. Rich clients fulfill the fourth quadrant of the following schema, which is like a dream come true, as shown in the following diagram:

Some rich client approaches
The following solutions are strategies that deserve the rich client label.

Ajax
Ajax was one of the first successful rich-client solutions. The term means Asynchronous JavaScript and XML. In effect, this browser technology enables sending asynchronous requests, meaning there is no need to reload the full page. Developers can provide client scripts implementing custom callbacks: those are executed when a response is sent from the server. Most of the time, such scripts use data provided in the response payload to dynamically update the relevant parts of the page DOM.

Ajax addresses the richness of controls and the page flow paradigm. Unfortunately, it aggravates browser-compatibility problems, as Ajax is not handled in the same way by all browsers.
It also has problems not directly related to the technologies, which are as follows:
- Either one learns all the necessary technologies to do Ajax on one's own, that is, JavaScript, the Document Object Model, and JSON/XML to communicate with the server, and writes all the common features, such as error handling, from scratch.
- Alternatively, one uses an Ajax framework, and thus has yet another technology stack to learn.

Richness through a plugin
The oldest way to bring richness to the user's experience is to execute the code on the client side, more specifically as a plugin in the browser. Sun (now Oracle) proposed the applet technology, whereas Microsoft proposed ActiveX. The latest technology using this strategy is Flash. All three were failures, due to technical problems including performance lags and security holes, plain client incompatibility, or just plain rejection by the market. There is an interesting attempt to revive the applet in the Apache Pivot project, as shown in the following screenshot (http://pivot.apache.org/), but it hasn't made a huge impact yet.

A more recent and successful attempt at executing code on the client side through a plugin is Adobe's Flex. A similar path was taken by Microsoft's Silverlight technology. Flex is a technology where static views are described in XML and dynamic behavior in ActionScript. Both are transformed at compile time into the Flash format. Unfortunately, Apple refused to have anything to do with the Flash plugin on iOS platforms. This move, coupled with the growing rise of HTML5, resulted in Adobe donating Flex to the Apache foundation. Microsoft, too, officially renounced plugin technology and shifted Silverlight development to HTML5.

Deploying and updating fat-client from the web
The most direct way toward rich-client applications is to deploy (and update) a fat-client application from the web.

Java Web Start
Java Web Start (JWS), available at http://download.oracle.com/javase/1.5.0/docs/guide/javaws/, is a proprietary technology invented by Sun. It uses a deployment descriptor in the Java Network Launching Protocol (JNLP) format, which takes the place of the manifest inside a JAR file and supplements it. For example, it describes the main class to launch and the classpath, as well as additional information such as the minimum Java version, icons to display on the user's desktop, and so on. This descriptor file is used by the javaws executable, which is bundled with the Java Runtime Environment. It is the javaws executable's responsibility to read the JNLP file and do the right thing according to it. In particular, when launched, javaws will download any updated JARs.

The detailed process goes something like the following:
1. The user clicks on a JNLP file.
2. The JNLP file is downloaded to the user's machine and interpreted by the local javaws application.
3. The file references JARs that javaws can download.
4. Once they are downloaded, JWS reassembles the different parts, creates the classpath, and launches the main class described in the JNLP.

JWS correctly tackles all the problems posed by the thin-client approach. Yet it never reached critical mass, for a number of reasons:
- First-time installations are time-consuming, because typically lots of megabytes need to be transferred over the wire before the user can even start using the app. This is a mere annoyance for intranet applications, but a complete no-go for Internet apps.
- Some persistent bugs weren't fixed across major versions.
- Finally, the lack of commercial commitment by Sun was the last straw.
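For reference, a minimal JNLP descriptor looks roughly like the following; the code base, JAR name, and main class are invented for the sake of the example:

<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://example.com/app" href="launch.jnlp">
    <information>
        <title>Example Application</title>
        <vendor>Example Vendor</vendor>
    </information>
    <resources>
        <!-- minimum Java version, as mentioned above -->
        <j2se version="1.5+" />
        <jar href="example-app.jar" />
    </resources>
    <application-desc main-class="com.example.Main" />
</jnlp>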
A good example of a successful JWS application is JDiskReport (http://www.jgoodies.com/download/jdiskreport/jdiskreport.jnlp), a disk space analysis tool by Karsten Lentzsch, which is available on the Web for free.

Update sites
Updating software through update sites is the path taken by both Integrated Development Environment (IDE) leaders, NetBeans and Eclipse. In short, once the software is initially installed, updates and new features can be downloaded from the application itself. Both IDEs also propose an API with which to build applications. This approach also handles all the problems posed by the thin-client approach. However, as with JWS, there is no strong trend toward building applications based on these IDEs. This can probably be attributed to both IDEs using the OSGi standard, whose goal is to address some of Java's shortcomings, but at the price of complexity.

Google Web Toolkit
Google Web Toolkit (GWT) is the framework used by Google to create some of its own applications. Its point of view is unique among the technologies presented here: it lets you develop in Java, and the GWT compiler then transforms your code into JavaScript, which in turn manipulates the DOM tree to update the HTML. It is GWT's responsibility to handle browser compatibility. This approach also solves the other problems of the pure thin-client approach. Yet GWT does not shield developers from all the dirty details. In particular, the developer still has to write part of the code handling server-client communication, and he has to take care of the segregation between Java server code, which will be compiled into byte code, and Java client code, which will be compiled into JavaScript. Also, note that the compilation process may be slow, even though a number of optimization features are available during development. Finally, developers need a good understanding of the DOM, as well as of the JavaScript/DOM event model.

Why Vaadin?
Vaadin is a solution that evolved from a decade of problem solving, provided by a Finnish company named Vaadin Ltd, formerly IT Mill. With so many solutions available, why would one use Vaadin instead of Flex or GWT? Let's first have a look at the state of the market for web application frameworks in Java, and then detail what makes Vaadin so unique in this market.

State of the market
Despite all the cons of the thin-client approach, an important share of the applications developed today use this paradigm, most of the time with a touch of Ajax augmentation. Unfortunately, there is no clear leader for web applications. Some reasons include the following:
- Most developers know how to develop plain old web applications, with enough Ajax added to make them usable.
- GWT, although new and original, is still complex and needs seasoned developers in order to be effective.
- From a technical lead's or an IT manager's point of view, this is a very fragmented market, where it is hard to choose a solution that will meet users' requirements as well as offer guarantees of being maintained in the years to come.

Importance of Vaadin
Vaadin is a unique framework in the current ecosystem; its differentiating features include the following:
- There is no need to learn different technology stacks, as the coding is done solely in Java. The only thing to know besides Java is Vaadin's own API, which is easy to learn.
This means: The UI code is fully object-oriented There is no spaghetti JavaScript to maintain It is executed on the server side Furthermore, the IDE's full power is in our hands, with refactoring and code completion. There is no plugin to install in the client's browser, ensuring that all users who browse our application will be able to use it as-is. As Vaadin uses GWT under the hood, it supports all the browsers that the underlying version of GWT supports. Therefore, we can develop a Vaadin application without paying attention to browser differences and let GWT handle them. Our users will interact with our application in the same way, whether they use an outdated version (such as Firefox 3.5) or a niche browser (such as Opera). Moreover, Vaadin provides an abstraction over GWT, so that the API is easier for developers to use. Also, note that Vaadin Ltd (the company) is part of the GWT steering committee, which is a good sign for the future. Finally, Vaadin conforms to standards such as HTML and CSS, making the technology future-proof. For example, many applications created with Vaadin run seamlessly on mobile devices, although they were not initially designed to do so. Vaadin integration In today's environment, the integration features of a framework are very important, as nearly every enterprise has rules about which frameworks may be used in a given context. Vaadin is about the presentation layer and runs in any servlet-container-capable environment. Integrated frameworks There are three possible integration levels, which are as follows: Level 1: out of the box or available through an add-on; no effort required save reading the documentation Level 2: more or less documented Level 3: possible with effort The following are examples of such frameworks and tools with their respective estimated integration effort: Level 1: Java Persistence API (JPA): JPA is the Java EE 5 standard for all things related to persistence. An add-on exists that lets us wire existing components to a JPA backend. Other persistence add-ons are available in the Vaadin directory, such as a container for Hibernate, one of the leading persistence frameworks in the Java ecosystem. A bunch of widget add-ons, such as tree tables, popup buttons, contextual menus, and many more. Level 2: Spring, a framework based on Inversion of Control (IoC), which is the de facto standard for dependency injection. Spring can easily be integrated with Vaadin, and different strategies are available for this. Contexts and Dependency Injection (CDI): CDI is an attempt at making IoC a standard on the Java EE platform. Whatever can be done with Spring can be done with CDI. Any GWT extension, such as Ext GWT or Smart GWT, can easily be integrated into Vaadin, as Vaadin is built upon GWT's own widgets. Level 3: We can use entirely different frameworks and languages and integrate them with Vaadin, as long as they run on the JVM: Apache iBatis, MongoDB, OSGi, Groovy, Scala, anything you can dream of! Integration platforms Vaadin provides out-of-the-box integration with an important third-party platform: Liferay, an open source enterprise portal backed by Liferay Inc. Vaadin provides a specialized portlet that enables us to develop Vaadin applications as portlets that can be run on Liferay. Also, there is a widgetset management portlet provided by Vaadin, which deploys nicely into Liferay's Control Panel.
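Before moving on to real-world concerns, here is a minimal sketch of what this single-language model looks like in practice, using the Vaadin 6-style API (Application, Window, Label, and Button are framework types; the class name and messages are made up for illustration):

import com.vaadin.Application;
import com.vaadin.ui.Button;
import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

public class HelloVaadin extends Application {
    @Override
    public void init() {
        // The main window maps to the browser view; no HTML, JavaScript, or CSS is written by hand
        final Window main = new Window("Hello Vaadin");
        main.addComponent(new Label("Everything here is plain Java."));
        Button button = new Button("Click me");
        // The click listener runs on the server side; Vaadin handles the round trip
        button.addListener(new Button.ClickListener() {
            public void buttonClick(Button.ClickEvent event) {
                main.showNotification("Handled on the server");
            }
        });
        main.addComponent(button);
        setMainWindow(main);
    }
}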
Using Vaadin in the real world If you embrace Vaadin, then chances are that you will want to go beyond toying with the framework and develop real-world applications. Concerns about using a new technology Although it is okay to use the latest technology for a personal or academic project, projects that have business objectives should just run and not be riddled with problems from third-party products. In particular, most managers may be wary when confronted with a new product (or even a new version), and developers should be too. The following are some of the reasons to choose Vaadin: The product is of the highest quality: The Vaadin team does rigorous testing throughout its automated build process. Currently, it consists of more than 8,000 unit tests. Moreover, in order to guarantee full compatibility between versions, many (many!) tests execute pixel-level regression testing. Support: Commercial: Although completely committed to open source, Vaadin Ltd offers commercial support for its product. Check their Pro Account offering. User forums: A Vaadin user forum is available. Anyone registered can post questions and see them answered by a member of the team or of the community. Note that Vaadin registration is free, as well as hassle-free: you will just be sent the newsletter once a month (and you can opt out, of course). Backward compatibility: API: The server-side API is very stable, version after version, and has survived major client-engine rewrites. Some parts of the API changed from v6 to v7, but it is still very easy to migrate. Architecture: Vaadin's architecture favors abstraction and is at the root of it all. Full-blown documentation available: Product documentation: Vaadin's site provides three levels of documentation regarding Vaadin: a five-minute tutorial, a one-hour tutorial, and the famed Book of Vaadin. API documentation: The Javadocs are available online; there is no need to build the project locally. Course/webinar offerings: Vaadin Ltd currently provides four different courses, which cover all the skills a developer needs to be proficient in the framework. Huge community around the product: There is an ever-growing community gathering around, and actively using, the product. There are plenty of blogs and articles online on Vaadin. Furthermore, there are already many enterprises using Vaadin for their applications. Available competent resources: There are more and more people learning Vaadin. Moreover, if no developer is available, the framework can be learned in a few days. Integration with existing products/platforms: Vaadin is built to be easily integrated with other products and platforms. The Book of Vaadin describes how to integrate with Liferay and Google App Engine. Others already use Vaadin Upon reading this, managers and developers alike should realize that Vaadin is mature and is used in real-world applications around the world. If you still have any doubts, then you should check http://vaadin.com/who-is-using-vaadin and be assured that big businesses trusted Vaadin before you, and benefited from its advantages as well. Summary In this article, we saw the migration of application tiers in the software architecture between the client and the server. We saw that each step resolved the problems of the previous architecture: Client-server used the power of personal computers in order to decrease mainframe costs Thin clients resolved the deployment costs and delays Thin clients, however, have numerous drawbacks.
For the user, these include a lack of usability due to a poor choice of controls, browser compatibility issues, and navigation based on page flow; for the developer, there are many technologies to know. As we stand at this crossroads, there is no clear winner among the available solutions: some address only a few of the problems, and some aggravate them. Vaadin is an original solution that tries to resolve many problems at once: It provides rich controls It uses GWT under the cover, which addresses most browser compatibility issues It offers abstractions over the request-response model, so that the model used is application-based and not page-based The developer only needs to know one programming language, Java, and Vaadin generates all the HTML, JavaScript, and CSS code for you Now we can go on and create our first Vaadin application! Resources for Article: Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Creating a Basic Vaadin Project [Article] Vaadin – Using Input Components and Forms [Article]

Adding health checks

Packt
14 Feb 2014
3 min read
(For more resources related to this topic, see here.) A health check is a runtime test for our application. We are going to create a health check that tests the creation of new contacts using the Jersey client. The health check results are accessible through the admin port of our application, which by default is 8081. How to do it… To add a health check perform the following steps: Create a new package called com.dwbook.phonebook.health and a class named NewContactHealthCheck in it: import javax.ws.rs.core.MediaType; import com.codahale.metrics.health.HealthCheck; import com.dwbook.phonebook.representations.Contact; import com.sun.jersey.api.client.*; public class NewContactHealthCheck extends HealthCheck { private final Client client; public NewContactHealthCheck(Client client) { super(); this.client = client; } @Override protected Result check() throws Exception { WebResource contactResource = client .resource("http://localhost:8080/contact"); ClientResponse response = contactResource.type( MediaType.APPLICATION_JSON).post( ClientResponse.class, new Contact(0, "Health Check First Name", "Health Check Last Name", "00000000")); if (response.getStatus() == 201) { return Result.healthy(); } else { return Result.unhealthy("New Contact cannot be created!"); } } } Register the health check with the Dropwizard environment by using the HealthCheckRegistry#register() method within the #run() method of the App class. You will first need to import com.dwbook.phonebook.health.NewContactHealthCheck. The HealthCheckRegistry can be accessed using the Environment#healthChecks() method: // Add health checks e.healthChecks().register ("New Contact health check", new NewContactHealthCheck(client)); After building and starting your application, navigate with your browser to http://localhost:8081/healthcheck: The results of the defined health checks are presented in the JSON format. In case the custom health check we just created or any other health check fails, it will be flagged as "healthy": false, letting you know that your application faces runtime problems. How it works… We used exactly the same code used by our client class in order to create a health check; that is, a runtime test that confirms that the new contacts can be created by performing HTTP POST requests to the appropriate endpoint of the ContactResource class. This health check gives us the required confidence that our web service is functional. All we need for the creation of a health check is a class that extends HealthCheck and implements the #check() method. In the class's constructor, we call the parent class's constructor specifying the name of our check—the one that will be used to identify our health check. In the #check() method, we literally implement a check. We check that everything is as it should be. If so, we return Result.healthy(), else we return Result.unhealthy(), indicating that something is going wrong. Summary This article showed what a health check is and demonstrated how to add a health check. The health check we created tested the creation of new contacts using the Jersey client. Resources for Article: Further resources on this subject: RESTful Web Services – Server-Sent Events (SSE) [Article] Connecting to a web service (Should know) [Article] Web Services and Forms [Article]
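To give an idea of what to expect, a successful run of the admin endpoint might return JSON along the lines of the following sketch; Dropwizard also registers a deadlock detection check by default, and the exact set of entries depends on your application: { "New Contact health check": { "healthy": true }, "deadlocks": { "healthy": true } } A failing check would instead report "healthy": false, together with the message passed to Result.unhealthy().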

Making specs more concise (Intermediate)

Packt
13 Sep 2013
6 min read
(For more resources related to this topic, see here.) Making specs more concise (Intermediate) So far, we've written specifications that work in the spirit of unit testing, but we're not yet taking advantage of any of the important features of RSpec to make writing tests more fluid. The specs illustrated so far closely resemble unit testing patterns and have multiple assertions in each spec. How to do it... Refactor our specs in spec/lib/location_spec.rb to make them more concise: require "spec_helper" describe Location do describe "#initialize" do subject { Location.new(:latitude => 38.911268, :longitude => -77.444243) } its (:latitude) { should == 38.911268 } its (:longitude) { should == -77.444243 } end end While running the spec, you see a clean output because we've separated multiple assertions into their own specifications: Location #initialize latitude should == 38.911268 longitude should == -77.444243 Finished in 0.00058 seconds 2 examples, 0 failures The preceding output requires either the .rspec file to contain the --format doc line, or when executing rspec in the command line, the --format doc argument must be passed. The default output format will print dots (.) for passing tests, asterisks (*) for pending tests, E for errors, and F for failures. It is time to add something meatier. As part of our project, we'll want to determine if Location is within a certain mile radius of another point. In spec/lib/location_spec.rb, we'll write some tests, starting with a new block called context. The first spec we want to write is the happy path test. Then, we'll write tests to drive out other states. I am going to re-use our Location instance for multiple examples, so I'll refactor that into another new construct, a let block: require "spec_helper" describe Location do let(:latitude) { 38.911268 } let(:longitude) { -77.444243 } let(:air_space) { Location.new(:latitude => 38.911268,: longitude => -77.444243) } describe "#initialize" do subject { air_space } its (:latitude) { should == latitude } its (:longitude) { should == longitude } end end Because we've just refactored, we'll execute rspec and see the specs pass. Now, let's spec out a Location#near? method by writing the code we wish we had: describe "#near?" do context "when within the specified radius" do subject { air_space.near?(latitude, longitude, 1) } it { should be_true } end end end Running rspec now results in failure because there's no Location#near? method defined. The following is the naive implementation that passes the test (in lib/location.rb): def near?(latitude, longitude, mile_radius) true end Now, we can drive a failure case, which will force a real implementation in spec/lib/location_spec.rb within the describe "#near?" block: context "when outside the specified radius" do subject { air_space.near?(latitude * 10, longitude * 10, 1) } it { should be_false } end Running the specs now results in the expected failure. 
The following is a passing implementation of the haversine formula in lib/location.rb that satisfies both cases: R = 3_959 # Earth's radius in miles, approx def near?(lat, long, mile_radius) to_radians = Proc.new { |d| d * Math::PI / 180 } dist_lat = to_radians.call(lat - self.latitude) dist_long = to_radians.call(long - self.longitude) lat1 = to_radians.call(self.latitude) lat2 = to_radians.call(lat) a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) + Math.sin(dist_long/2) * Math.sin(dist_long/2) * Math.cos(lat1) * Math.cos(lat2) c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)) (R * c) <= mile_radius end Refactor both of the previous tests to be more expressive by utilizing predicate matchers: describe "#near?" do context "when within the specified radius" do subject { air_space } it { should be_near(latitude, longitude, 1) } end context "when outside the specified radius" do subject { air_space } it { should_not be_near(latitude * 10, longitude * 10, 1) } end end Now that we have a passing spec for #near?, we can alleviate a problem with our implementation. The #near? method is too complicated. It could be a pain to try and maintain this code in the future. Refactor for ease of maintenance while ensuring that the specs still pass: R = 3_959 # Earth's radius in miles, approx def near?(lat, long, mile_radius) loc = Location.new(:latitude => lat, :longitude => long) R * haversine_distance(loc) <= mile_radius end private def to_radians(degrees) degrees * Math::PI / 180 end def haversine_distance(loc) dist_lat = to_radians(loc.latitude - self.latitude) dist_long = to_radians(loc.longitude - self.longitude) lat1 = to_radians(self.latitude) lat2 = to_radians(loc.latitude) a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) + Math.sin(dist_long/2) * Math.sin(dist_long/2) * Math.cos(lat1) * Math.cos(lat2) 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)) end Finally, run rspec again and see that the tests continue to pass. A successful refactor! How it works... The subject block takes the return value of the block (a new instance of Location in the previous example) and binds it to a locally scoped variable named subject. Subsequent it and its blocks can refer to that subject variable. Furthermore, the its blocks implicitly operate on the subject variable to produce more concise tests. Here is an example illustrating how subject is used to produce easier-to-read tests: describe "Example" do subject { { :key1 => "value1", :key2 => "value2" } } it "should have a size of 2" do subject.size.should == 2 end end We can use subject from within the it block, and this will refer to the anonymous hash returned by the subject block. In the preceding test, we could have been more concise with an its block: its (:size) { should == 2 } We're not limited to just sending symbols to an its block; we can use strings too: its ('size') { should == 2 } When there is an attribute of subject you want to assert but the value cannot easily be turned into a valid Ruby symbol, you'll need to use a string. This string is not evaluated as Ruby code; it's only evaluated against the subject under test as a method of that class. Hashes, in particular, allow you to pass an array containing a key in order to assert the value for that key: its ([:key1]) { should == "value1" } There's more... In the previous code examples, another block known as the context block was presented. The context block is a grouping mechanism for associating tests. For example, you may have a conditional branch in your code that changes the output of a method.
Here, you may use two context blocks, one for each value. In our example, we're separating the happy path (when a given point is within the specified mile radius) from the alternative (when a given point is outside the specified mile radius). context is a useful construct that allows you to declare let and other blocks within it, and those blocks apply only to the scope of the containing context. Summary This article demonstrated idiomatic RSpec code that makes good use of the RSpec Domain Specific Language (DSL). Resources for Article: Further resources on this subject: Quick start - your first Sinatra application [Article] Behavior-driven Development with Selenium WebDriver [Article] External Tools and the Puppet Ecosystem [Article]

Moodle 2.0 Multimedia: Creating and Integrating Screencasts and Videos

Packt
23 May 2011
8 min read
(For more resources on Moodle 2.0, see here.) Introduction Moodle 2.0 offers new features, which make it easier to insert videos, especially from the http://www.youtube.com website. You can find them easily from the file picker, provided you have administrative access to the course. Bear in mind that you need to be an administrator in order to enable this option. This article covers different ways to create and interact using either screencasts or videos. We will work with several multimedia assets concerning the baseline topic of Wildlife. This topic has many resources, which can be integrated with screencasts and videos available on the Web. Creating screencasts using the free and open source software available on the Web is one of the main goals of this chapter. There is plenty of commercial software that can be used to create screencasts, but we will not focus on it. We will add some special features to the screencasts in order to enhance them. Videos can be recorded in several ways. You may use your cell phone, camera, or the webcam of your computer. We will focus on how to create them and upload them to our Moodle course. We can also use a recorded video from YouTube and upload it directly from the file picker in Moodle 2.0. You can also design a playlist in order to combine several videos and let your students watch them in a row. We do this by creating an account on YouTube. The YouTube channel can be either public or private, depending on how we want to carry it out. You can create some screencasts in order to present information to your students instead of showing presentations made using OpenOffice, PowerPoint, or Microsoft Word. Changing any of these into a screencast is more appealing to the students and not such a difficult task to carry out either. We can create an explanation by recording our voice, for which we will create a virtual board that we can choose to make visible to the audience; otherwise, our explanations can only be heard, with no visualization. This is quite an important aspect to take into account, especially in teaching, because students need a dynamic explanation from their teacher. There are several software packages that can be used to create screencasts. One of them is CamStudio. This software captures AVI files and it is open source. It captures onscreen video and audio. Its disadvantage is that only Windows users can use it. You can download it from http://camstudio.com/. Now for Mac users: there is a free program for Mac that focuses on making quick films by saving the recorded video for quick access. It does not record audio. This is Copernicus, and you can download it from http://danicsoft.com/software/copernicus/. We also need a tool that is free and open source for both Mac and Windows; Jing (JingProject.com) is that software. It not only records video, but also allows you to take a picture, draw or add a message on it, and upload the media to a free hosting account. A URL is provided in order to watch the video or the image. You can download it from the following website: http://www.techsmith.com/download/jing/. Screencast-o-matic is another tool; it is based on Java and does not need to be downloaded at all. It allows you to upload in an automatic way. It works well with both Mac and Windows machines.
You can use this tool at http://www.screencast-o-matic.com/, and it is the one that we will work with in the creation of a screencast. We may also modify the videos to make them suitable for learning. We can add annotations in different ways so as to interact with our students through the video. That is to say, we add our comments instead of our voice, so that students read what we need to tell them. Creating a screencast In this recipe, we create a screencast and upload it to our Moodle course. The baseline topic is Wildlife. Therefore, in this recipe, we will explain to our students where wild animals are located. We can paste in a world map of the different animals, while we add extra data through the audio files. Thus, we can also add more information using different types of images that are inserted in the map. Getting ready Before creating the screencast, plan the whole sequence of the explanation that you want to show to your students; for this, we will use a very useful Java applet available at http://www.screencast-o-matic.com/. Screencast-o-matic requires the free Java Runtime Environment (also known as JRE) on both the teacher's and the students' computers. You can download and install its latest version from http://java.sun.com. How to do it... First of all, design the background scene of the screencast to work with. Afterwards, enter the website http://www.screencast-o-matic.com/. Follow these steps to create the screencast: Click on Start recording. Another pop-up window appears that looks as shown in the following screenshot: Resize the frame to surround the recording area that you want to record. Click on the recording button (red button). If you want to make a pause, click on the pause button or press Alt + P, as shown in the following screenshot: If you want to integrate the webcam or a Bluetooth video, click on the upwards arrow in this icon, as shown in the following screenshot: When the screencast is finished, click on Done. You can preview the screencast after you finish designing it. If you need to edit it, click on Go back to add more. If you are satisfied with the preview, click on Done with this screencast, as shown in the following screenshot: When the screencast is finished, our next task is to export it, because we need to upload it to our Moodle course. Click on Export Movie. Click on the downwards arrow next to Type and choose Flash (FLV), as shown in the following screenshot: Customize the Size and Options blocks as shown in the previous screenshot, or as you wish. When you finish, click on Export, as shown in the previous screenshot. Write a name for this file and click on Save. When the file is exported, click on Go back and do more with this screencast if you want to edit it. Click on Done with this screencast if you are satisfied with the result. A pop-up window appears; click on OK. How it works... We have just created a screencast about wild animals, which students have to watch to learn about the places where wild animals live around the world. We now need to upload it to our Moodle course. It is a passive resource; therefore, we can either add it as a resource or design an activity out of it. In this case, we design an activity. Choose the weekly outline section where you want to insert it, and follow these steps: Click on Add an activity | Online text within Assignments. Complete the Assignment name and Description blocks. Click on the Moodle Media icon | Find or upload a sound, video or applet ...
| Upload a file | Browse | look for the file that you want to upload and click on it. Click on Open | Upload this file | Insert. Click on Save and return to course. Click on the activity. It looks as shown in the following screenshot: There's more... If we create a screencast that lasts around 30 minutes or longer, it will take a long time to upload it to our Moodle course. Therefore, it is advisable to watch the screencast using a free and open source media player instead, namely VLC Media Player. VLC Media Player You can download the VLC Media Player from the following website: http://www.videolan.org/vlc/. It works with most popular video file formats, such as AVI, MP4, and Flash, among others. Follow these steps in order to watch the screencast: Click on Media | Open File | browse for the file that you want to open and click on it. Click on Open. The screencast is displayed, as shown in the following screenshot: See also Enhancing a screencast with annotations

Enhancing Your Blog with Advanced Features

Packt
22 Sep 2015
8 min read
In this article by Antonio Melé, the author of the Django by Example book, we see how to use Django forms and ModelForms. You will let your users share posts by e-mail, and you will be able to extend your blog application with a comment system. You will also learn how to integrate third-party applications into your project, and build complex QuerySets to get useful information from your models. In this article, you will learn how to add tagging functionality using a third-party application. (For more resources related to this topic, see here.) Adding tagging functionality After implementing our comment system, we are going to create a system for adding tags to our posts. We are going to do this by integrating a third-party Django tagging application into our project. django-taggit is a reusable application that primarily offers you a Tag model and a manager for easily adding tags to any model. You can take a look at its source code at https://github.com/alex/django-taggit. First, you need to install django-taggit via pip by running the pip install django-taggit command. Then, open the settings.py file of the project, and add taggit to your INSTALLED_APPS setting as follows: INSTALLED_APPS = ( # ... 'mysite.blog', 'taggit', ) Then, open the models.py file of your blog application, and add the TaggableManager manager provided by django-taggit to the Post model, as follows: from taggit.managers import TaggableManager # ... class Post(models.Model): # ... tags = TaggableManager() You just added tagging support to this model. The tags manager will allow you to add, retrieve, and remove tags from Post objects. Run the python manage.py makemigrations blog command to create a migration for your model changes. You will get the following output: Migrations for 'blog': 0003_post_tags.py: Add field tags to post Now, run the python manage.py migrate command to create the required database tables for the django-taggit models and synchronize your model changes. You will see an output indicating that the migrations have been applied: Operations to perform: Apply all migrations: taggit, admin, blog, contenttypes, sessions, auth Running migrations: Applying taggit.0001_initial... OK Applying blog.0003_post_tags... OK Your database is now ready to use the django-taggit models. Open the terminal with the python manage.py shell command, and learn how to use the tags manager. First, we retrieve one of our posts (the one with the ID 1): >>> from mysite.blog.models import Post >>> post = Post.objects.get(id=1) Then, add some tags to it and retrieve its tags back to check that they were successfully added: >>> post.tags.add('music', 'jazz', 'django') >>> post.tags.all() [<Tag: jazz>, <Tag: django>, <Tag: music>] Finally, remove a tag and check the list of tags again: >>> post.tags.remove('django') >>> post.tags.all() [<Tag: jazz>, <Tag: music>] This was easy, right? Run the python manage.py runserver command to start the development server again, and open http://127.0.0.1:8000/admin/taggit/tag/ in your browser. You will see the admin page with the list of Tag objects of the taggit application: Navigate to http://127.0.0.1:8000/admin/blog/post/ and click on a post to edit it. You will see that posts now include a new Tags field, where you can easily edit tags:
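As a side note, the same manager also plugs into ordinary QuerySet lookups, so you can filter posts by tag directly from the shell. The following is a small sketch using the documented tags__name__in lookup; the post title shown in the output is hypothetical: >>> from mysite.blog.models import Post >>> # Posts carrying at least one of the given tags >>> Post.objects.filter(tags__name__in=['jazz', 'music']) [<Post: My first post>] >>> # distinct() avoids duplicates when a post matches several tags >>> Post.objects.filter(tags__name__in=['jazz', 'music']).distinct() [<Post: My first post>] Now, we are going to edit our blog posts to display the tags.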
Open the blog/post/list.html template and add the following HTML code below the post title: <p class="tags">Tags: {{ post.tags.all|join:", " }}</p> The join template filter works like the Python string join method, concatenating elements with the given string. Open http://127.0.0.1:8000/blog/ in your browser. You will see the list of tags under each post title: Now, we are going to edit our post_list view to let users see all posts tagged with a given tag. Open the views.py file of your blog application, import the Tag model from django-taggit, and change the post_list view to optionally filter posts by tag, as follows: from taggit.models import Tag def post_list(request, tag_slug=None): post_list = Post.published.all() if tag_slug: tag = get_object_or_404(Tag, slug=tag_slug) post_list = post_list.filter(tags__in=[tag]) # ... The view now takes an optional tag_slug parameter that defaults to None. This parameter will come in the URL. Inside the view, we build the initial QuerySet, retrieving all the published posts. If there is a given tag slug, we get the Tag object with the given slug using the get_object_or_404 shortcut. Then, we filter the list of posts down to the ones whose tags are contained in a given list composed only of the tag we are interested in. Remember that QuerySets are lazy. The QuerySet for retrieving posts will only be evaluated when we loop over the post list to render the template. Now, change the render function at the bottom of the view to pass all the local variables to the template using locals(). The view will finally look as follows: def post_list(request, tag_slug=None): post_list = Post.published.all() if tag_slug: tag = get_object_or_404(Tag, slug=tag_slug) post_list = post_list.filter(tags__in=[tag]) paginator = Paginator(post_list, 3) # 3 posts in each page page = request.GET.get('page') try: posts = paginator.page(page) except PageNotAnInteger: # If page is not an integer deliver the first page posts = paginator.page(1) except EmptyPage: # If page is out of range deliver last page of results posts = paginator.page(paginator.num_pages) return render(request, 'blog/post/list.html', locals()) Now, open the urls.py file of your blog application, and make sure you are using the following URL pattern for the post_list view: url(r'^$', post_list, name='post_list'), Now, add another URL pattern, as follows, for listing posts by tag: url(r'^tag/(?P<tag_slug>[-\w]+)/$', post_list, name='post_list_by_tag'), As you can see, both patterns point to the same view, but we name them differently. The first pattern will call the post_list view without any optional parameters, whereas the second pattern will call the view with the tag_slug parameter. Let's change our post list template to display posts tagged with a specific tag, and also link the tags to the list of posts filtered by this tag. Open blog/post/list.html and add the following lines before the for loop of posts: {% if tag %} <h2>Posts tagged with "{{ tag.name }}"</h2> {% endif %} If the user is accessing the blog, he will see the list of all posts. If he is filtering by posts tagged with a specific tag, he will see this information.
Now, change the way the tags are displayed to the following: <p class="tags"> Tags: {% for tag in post.tags.all %} <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a> {% if not forloop.last %}, {% endif %} {% endfor %} </p> Notice that now we are looping through all the tags of a post, and displaying a custom link to the URL for listing posts tagged with this tag. We build the link with {% url "blog:post_list_by_tag" tag.slug %}, using the name that we gave to the URL and the tag slug as a parameter. We separate the tags with commas. The complete code of your template will look like the following: {% extends "blog/base.html" %} {% block title %}My Blog{% endblock %} {% block content %} <h1>My Blog</h1> {% if tag %} <h2>Posts tagged with "{{ tag.name }}"</h2> {% endif %} {% for post in posts %} <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2> <p class="tags"> Tags: {% for tag in post.tags.all %} <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a> {% if not forloop.last %}, {% endif %} {% endfor %} </p> <p class="date">Published {{ post.publish }} by {{ post.author }}</p> {{ post.body|truncatewords:30|linebreaks }} {% endfor %} {% include "pagination.html" with page=posts %} {% endblock %} Open http://127.0.0.1:8000/blog/ in your browser, and click on any tag link. You will see the list of posts filtered by this tag, as follows: Summary In this article, you added tagging to your blog posts by integrating a reusable application. Django By Example is a hands-on guide that will also show you how to integrate other popular technologies with Django in a fun and practical way. Resources for Article: Further resources on this subject: Code Style in Django [article] So, what is Django? [article] Share and Share Alike [article]

Configuring MySQL

Packt
01 Apr 2010
14 min read
Let's get started. Setting up a fixed InnoDB tablespace When using the InnoDB storage engine of MySQL, the data is typically not stored in a per-database or per-table directory structure, but in several dedicated files, which collectively contain the so-called tablespace. By default (when installing MySQL using the configuration wizard), InnoDB is configured to have one small file to store data in, and this file grows as needed. While this is a very flexible and economical configuration to start with, this approach also has some drawbacks: there is no reserved space for your data, so you have to rely on free disk space every time your data grows. Also, if your database grows bigger, the file will grow to a size that makes it hard to handle; a dozen files of 1 GB each are typically easier to manage than one clumsy 12 GB file. Large data files might, for example, cause problems if you try to put those files into an archive for backup or data transmission purposes. Even though the 2 GB limit is no longer present in current file systems, many compression programs still have problems dealing with large files. And finally, the constant adaptation of the file size in InnoDB's default configuration will cause a (small, but existent) performance hit if your database grows. The following recipe will show you how to define a fixed tablespace for your InnoDB installation, by which you can avoid these drawbacks of the InnoDB default configuration. Getting ready To install a fixed tablespace, you will have to consider some aspects: how much tablespace should be reserved for your database, and how to size the individual data files that together constitute the tablespace. Note that once your database completely allocates your tablespace, you will run into table full errors (error code 1114) when trying to add new data to your database. Additionally, you have to make sure that your current InnoDB tablespace is completely empty. Ideally, you should set up the tablespace of a freshly installed MySQL instance, in which case this prerequisite is given. To check whether any InnoDB tables exist in your database, execute the following statement and delete the given tables until the result is empty: SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.tables WHERE engine="InnoDB"; If your database already contains data stored in InnoDB tables that you do not want to lose, you will have to create a backup of your database and recover the data from it when you are done with the recipe. Please refer to the chapter Backing Up and Restoring MySQL Data for further information on this. And finally, you have to make sure that the InnoDB data directory (as defined by the innodb_data_home_dir variable) features sufficient free disk space to store the InnoDB data files. For the following example, we will use a fixed tablespace with a size of 500 MB and a maximum file size of 200 MB. How to do it... Open the MySQL configuration file (my.ini or my.cnf) in a text editor. Identify the line starting with innodb_data_file_path in the [mysqld] section. If no such line exists, add the line to the file. Change the line innodb_data_file_path to read as follows: innodb_data_file_path=ibdata1:200M;ibdata2:200M;ibdata3:100M Save the changed configuration file. Shut down your database instance (if running). Delete previous InnoDB data files (typically called ibdata1, ibdata2, and so on) from the directory defined by the innodb_data_home_dir variable.
Delete previous InnoDB log files (named ib_logfile0, ib_logfile1, and so on) from the directory defined by the innodb_log_group_home_dir variable. If innodb_log_group_home_dir is not configured explicitly, it defaults to the datadir directory. Start your database. Wait for all data and log files to be created. Depending on the size of your tablespace and the speed of your disk system, the creation of InnoDB data files can take a significant amount of time (several minutes is not an uncommon time for larger installations). During this initialization sequence, MySQL is started but it will not accept any requests. How it works... Steps 1 through 4, and particularly step 3, cover the actual change to be made to the MySQL configuration, which is necessary to adapt the InnoDB tablespace settings. The value of the innodb_data_file_path variable consists of a list of data file definitions that are separated by semicolons. Each data file definition is constructed from a file name and a file size, with a colon as a separator. The size can be expressed as a plain numeric value, which defines the size of the data file in bytes. If the numeric value has a K, M, or G postfix, the number is interpreted as Kilobytes, Megabytes, or Gigabytes respectively. The list length is not limited to the three entries of our example; if you want to split a large tablespace into relatively small files, the list can easily contain dozens of data file definitions. If your tablespace consists of more than 10 files, we propose naming the first nine files ibdata01 through ibdata09 (instead of ibdata1 and so forth; note the zero), so that the files are listed in a consistent order when they are displayed in your file browser or command-line interface. Step 5 is a prerequisite for the steps that follow it, as deletion of vital InnoDB files while the system is still running is obviously not a good idea. In step 6, old data files are deleted to prevent collision with the new files. If InnoDB detects an existing file whose size differs from the size defined in the innodb_data_file_path variable, it will not initialize successfully. Hence, this step ensures that new, properly sized files can be created during the next MySQL start. Note that deletion of the InnoDB data files is only sufficient if all InnoDB tables were deleted previously (as discussed in the Getting ready section). Alternatively, you could delete all *.frm files for InnoDB tables from the MySQL data directory, but we do not encourage this approach (clean deletion using DROP TABLE statements should be preferred over manual intervention in MySQL data directories whenever possible). Step 7 is necessary to prevent InnoDB errors after the data files are created, as the InnoDB engine refuses to start if the log files are older than the tablespace files. With steps 8 and 9, the new settings take effect. When starting the database for the first time after changes are made to the InnoDB tablespace configuration, take a look at the MySQL error log to make sure the settings were accepted and no errors have occurred.
The MySQL error log after the first start with the new settings will look similar to this:
InnoDB: The first specified data file E:\MySQL\InnoDBTest\ibdata1 did not exist:
InnoDB: a new database to be created!
091115 21:35:56 InnoDB: Setting file E:\MySQL\InnoDBTest\ibdata1 size to 200 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
...
InnoDB: Progress in MB: 100
091115 21:36:19 InnoDB: Log file .\ib_logfile0 did not exist: new to be created
InnoDB: Setting log file .\ib_logfile0 size to 24 MB
InnoDB: Database physically writes the file full: wait...
...
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
091115 21:36:22 InnoDB: Started; log sequence number 0 0
091115 21:36:22 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: ready for connections.
Version: '5.1.31-community-log' socket: '' port: 3306 MySQL Community Server (GPL)
There's more... If you already use a fixed tablespace, and you want to increase the available space, you can simply append additional files to your fixed tablespace by adding additional data file definitions to the current innodb_data_file_path variable setting. If you simply append additional files, you do not have to empty your tablespace first; you can change the configuration and simply restart your database. Nevertheless, as with all changes to the configuration, we strongly encourage creating a backup of your database first. Setting up an auto-extending InnoDB tablespace The previous recipe demonstrates how to define a tablespace with a certain fixed size. While this provides maximum control and predictability, you have to block disk space based on an estimate of the maximum size required in the foreseeable future. As long as you store less data in your database than the reserved tablespace allows for, this basically means some disk space is wasted. This especially holds true if your setting does not allow for a separate file system exclusively for your MySQL instance, because then other applications compete for disk space as well. In these cases, a dynamic tablespace that starts with little space and grows as needed could be an alternative. The following recipe will show you how to achieve this. Getting ready When defining an auto-extending tablespace, you should first have an idea about the minimum tablespace requirements of your database, which will set the initial size of the tablespace. Furthermore, you have to decide whether you want to split your initial tablespace into files of a certain maximum size (for better file handling). If the above settings are identical to the current settings and you only want to make your tablespace grow automatically if necessary, you will be able to keep your data. Otherwise, you have to empty your current InnoDB tablespace completely (please refer to the previous recipe, Setting up a fixed InnoDB tablespace, for details). As with all major configuration changes to your database, we strongly advise you to create a backup of your data first. If you have to empty your tablespace, you can use this backup to recover your data after the changes are completed. Again, please refer to the chapter Backing Up and Restoring MySQL Data for further information on this.
And as before, you have to make sure that there is enough disk space available in the innodb_data_home_dir directory, not only for the initial database size, but also for the anticipated growth of your database. The recipe also requires you to shut down your database temporarily, so you have to make sure all clients are disconnected while performing the required steps, to prevent conflicting access. As the recipe demands changes to your MySQL configuration file (my.cnf or my.ini), you need write access to this file. For the following example, we will use an auto-extending tablespace with an initial size of 100 MB and a file size of 50 MB. How to do it... Open the MySQL configuration file (my.ini or my.cnf) in a text editor. Identify the line starting with innodb_data_file_path in the [mysqld] section. If no such line exists, add the line to the file. Change the line innodb_data_file_path to read as follows: innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend Note that no file definition except the last one may have the :autoextend option; you will run into errors otherwise. Save the changed configuration file. Shut down your database instance (if running). Delete previous InnoDB data files (typically called ibdata1, ibdata2, and so on) from the directory defined by the innodb_data_home_dir variable. Delete previous InnoDB log files (named ib_logfile0, ib_logfile1, and so on) from the directory defined by the innodb_log_group_home_dir variable. If innodb_log_group_home_dir is not configured explicitly, it defaults to the datadir directory. Start your database. Wait for all data and log files to be created. Depending on the size of your tablespace and the speed of your disk system, the creation of InnoDB data files can take a significant amount of time (several minutes is not an uncommon time for larger installations). During this initialization sequence, MySQL is started but will not accept any requests. When starting the database for the first time after changes are made to the InnoDB tablespace configuration, take a look at the MySQL error log to make sure the settings were accepted and no errors have occurred. How it works... The above steps are basically identical to the steps of the previous recipe, Setting up a fixed InnoDB tablespace, the only difference being the definition of the innodb_data_file_path variable. In this recipe, we create two files of 50 MB each, the last one having an additional :autoextend property. If the innodb_data_file_path variable is not set explicitly, it defaults to the value ibdata1:10M:autoextend. As data gets inserted into the database, parts of the tablespace will be allocated. As soon as the 100 MB of initial tablespace is no longer sufficient, the file ibdata2 will become larger to match the additional tablespace requirements. Note that the :autoextend option causes the tablespace files to be extended automatically, but they are not automatically reduced in size again if the space requirements decrease. Please refer to the Decreasing InnoDB tablespace recipe for instructions on how to free unused tablespace. There's more... The recipe only covers the basic aspects of auto-extending tablespaces; the following sections provide insight into some more advanced topics. Making an existing tablespace auto-extensible If you already have a database with live data in place and you want to change your current fixed configuration to use the auto-extension feature, you can simply add the :autoextend option to the last file definition.
Let us assume a current configuration like the following: innodb_data_file_path=ibdata1:50M;ibdata2:50M The respective configuration with auto-extension will look like this: innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend In this case, you do not have to empty the InnoDB tablespace first; you can simply change the configuration file and restart your database, and you should be fine. As with all configuration changes, however, we strongly recommend backing up your database before editing these settings, even in this case. Controlling the steps of tablespace extension The amount by which the size of the auto-extending tablespace file is increased is controlled by the innodb_autoextend_increment variable. The value of this variable defines the number of Megabytes by which the tablespace is enlarged. By default, 8 MB are added to the file if the current tablespace is no longer sufficient. Limiting the size of an auto-extending tablespace If you want to use an auto-extending tablespace, but also want to limit the maximum size your tablespace will grow to, you can add a maximum size for the auto-extended tablespace file by using the :autoextend:max:[size] option. The [size] portion is a placeholder for a size definition using the same notation as the size description for the tablespace file itself, which means a numeric value and an optional K, M, or G modifier (for sizes in Kilo-, Mega-, and Gigabytes). As an example, if you want to have a tiny initial tablespace of 10 MB, which is extended as needed, but with an upper limit of 2 GB, you would add the following line to your MySQL configuration file: innodb_data_file_path=ibdata1:10M:autoextend:max:2G Note that if the maximum size is reached, you will run into errors when trying to add new data to your database. Adding a new auto-extending data file Imagine an auto-extending tablespace whose auto-extended file grew so large over time that you want to prevent the file from growing further and want to append a new auto-extending data file to the tablespace. You can do so using the following steps: Shut down your database instance. Look up the exact size of the auto-extended InnoDB data file (the last file in your current configuration). Put the exact size into the innodb_data_file_path configuration as the tablespace file size definition (the number of bytes, without any K, M, or G modifier), and add a new auto-extending data file. Restart your database. As an example, if your current configuration reads ibdata1:10M:autoextend and the ibdata1 file has an actual size of 44,040,192 bytes, change the configuration to innodb_data_file_path=ibdata1:44040192;ibdata2:10M:autoextend:max:2G.
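To double-check which definition a running server actually uses, and to read the exact on-disk size needed for such a byte-exact file definition, something like the following works; this is only a sketch, and the Windows path shown is just an example matching the earlier log output:

SHOW VARIABLES LIKE 'innodb_data_file_path';
-- sample result: ibdata1:10M:autoextend

SHOW VARIABLES LIKE 'innodb_data_home_dir';
-- then inspect the actual file sizes at the operating system level, for example:
-- Windows: dir E:\MySQL\InnoDBTest
-- Linux:   ls -l /var/lib/mysql/ibdata*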

Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms. Making business-level decisions off of real-time data is a revolutionary concept. Humans have only been able to fathom metrics based off of large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in different formats that allow for any level of user to digest and understand its meaning is currently the main portion of what the leading-edge technology is trying to accomplish. There are so many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made. We live in a fast paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue off a completely new market and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data from inward to outward facing products. Working with live data sets in client-side applications is a common practice and is the real world standard. Most of the applications today use some type of live data to accomplish some given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application. AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these different methods feed directives' data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques of how to efficiently get live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully. Techniques that drive directives Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take. 
Each configuration will have an associated controller, template, and other options. These configurations work in unison to get data into the application in the most efficient ways. AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more eloquent way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each. Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view layer functionality. The controller feeds the directives their data, which forces the directives to rely on the controllers to be in charge of that data. This data can either be fed immediately after the route configurations are executed, or the application can wait for the data to be resolved. AngularJS offers you the ability to make sure that data requests have been successfully completed before any controller logic is executed. This method is called resolving data, and it is utilized by adding resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive. The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at the core, there are many points of failure with respect to timing issues of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability. The $q library The $q factory is a lightweight implementation of the widely adopted Q library (https://github.com/kriskowal/q). This lightweight package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread-blocking by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow. ES6 (ECMAScript 6) incorporates promises at its core. This will eventually alleviate the need for many functions inside the $q library, or even the entire library itself, in AngularJS 2.0. The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the powers of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code: this.getPhones = function() { var request = $http.get('phones.json'), promise; promise = request.then(function(response) { return response.data; }, function(errorResponse) { return errorResponse; }); return promise; } Here, we can see that the phoneService function uses the $http service to request all of the phones.
The getPhones service is best showcased when used in conjunction with a resolve function that feeds data into a controller. The resolve function accepts the returned promise object and only allows the controller to execute once all of the phones have been resolved or rejected. The rest of the code needed for this example is the application's configuration code. The config process is executed on the initialization of the application, and this is where the resolve function is implemented. Refer to the following code:

var app = angular.module('angularjs-promise-example', ['ngRoute']);

app.config(function($routeProvider) {
  $routeProvider.when('/', {
    controller: 'PhoneListCtrl',
    templateUrl: 'phoneList.tpl.html',
    resolve: {
      phones: function(phoneService) {
        return phoneService.getPhones();
      }
    }
  }).otherwise({
    redirectTo: '/'
  });
});

app.controller('PhoneListCtrl', function($scope, phones) {
  $scope.phones = phones;
});

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview.

Directives take over once the controller establishes its initial context. This is where the $compile function goes through its stages and links directives to the controller's template. The controller is still in charge of driving the data that sits inside the template view, which is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?

Most directives are on a need-to-know basis about how they receive the data that drives their view. This separation of logic reduces cyclomatic complexity in an application. Controllers should be in charge of requesting data and passing it to directives through their associated $scope object; directives should be in charge of creating DOM based on the data they receive and on when that data changes. There is an infinite number of things a directive might do once it receives its data. Our goal is to showcase how to watch live data for changes, and how to make sure that this works at scale, so that our directives have the opportunity to fulfill their specific tasks.

There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

Watching an object's identity for changes
Recursively watching all of the object's properties for changes
Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first can be used if the variable being watched is a primitive type. The second is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type, or on a normal object. Let's look at an example that shows the last two watcher types, using jsPerf to benchmark our logic. We leave the first watcher out of the benchmark because it only watches primitive types, and we will be watching many objects for different levels of equality.
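For completeness, here is a minimal sketch of that first, identity-based watcher on a primitive value; the controller and property names are invented for illustration:

app.controller('CounterCtrl', function($scope) {
  $scope.count = 0; // a primitive, so an identity watch is enough

  // The listener fires whenever count's value changes between digests.
  $scope.$watch('count', function(newVal, oldVal) {
    console.log('count changed from ' + oldVal + ' to ' + newVal);
  });
});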
The jsPerf example sets the data on $rootScope in the app's run function because we want to make sure that the test resets each data set upon initialization. Refer to the following code:

app.run(function($rootScope) {
  $rootScope.data = [
    {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
    {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
    {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
  ];
});

This run function sets up the data object that we will watch for changes. It remains constant throughout every test we run and resets back to this form at the beginning of each test.

Doing a deep watch on $rootScope.data

This watch function does a deep watch on the data object; the true flag is what turns the deep watch on. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code:

app.service('Watch', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watch('data', function(newVal, oldVal) {
      }, true);
      // The digest is here because of the jsPerf test. We are using this
      // run function to mimic a real environment.
      $rootScope.$digest();
    }
  };
});

Doing a shallow watch on $rootScope.data

The shallow watch is called whenever a top-level member of the data object changes. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

app.service('WatchCollection', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watchCollection('data', function(n, o) {
      });
      $rootScope.$digest();
    }
  };
});

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization; we then push another test object onto the data array, which fires the watcher's trigger function again, and that is the end of the test. We use jsperf.com to show the results (the original article includes a screenshot of them): the watchCollection function is much faster and should be used wherever a shallow watch of an object is acceptable. The example can be found at http://jsperf.com/watchcollection-vs-watch/5.

This test implies that the watchCollection function is the better choice for watching an array of objects that can be shallow watched for changes. The same holds for an array of strings, integers, or floats. It also raises more interesting questions, such as the following:

Does our directive depend on a deep watch of the data?
Do we want to use the $watch function, even though it is slow and memory taxing?
Is it possible to use the $watch function if we are using large data objects?

The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.
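One such method (a sketch of a common AngularJS pattern, not code from the original example) is to broadcast an event when the data changes and let the directive listen for it, avoiding a deep watch entirely:

// Controller: mutate the data, then announce the change explicitly.
app.controller('PhoneListCtrl', function($scope, $rootScope, phones) {
  $scope.phones = phones;
  $scope.addPhone = function(phone) {
    $scope.phones.push(phone);
    $rootScope.$broadcast('phones:changed', $scope.phones);
  };
});

// Directive: re-render only when told to, instead of deep watching.
app.directive('phoneGrid', function() {
  return {
    link: function(scope, element) {
      scope.$on('phones:changed', function(event, phones) {
        element.text(phones.length + ' phones'); // placeholder rendering
      });
    }
  };
});

The trade-off is that every piece of code that mutates the data must remember to announce the change, so this pattern fits best when updates happen in a small number of well-known places.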
Directives can be in charge

Some libraries take the position that elements can be in charge of when they request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes covered so far in this article, in terms of what directives are meant for and how they should receive data. Let's come up with an actual use case that could allow this type of behavior.

Consider a page that has many widgets on it. A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application?

One option is not to use the controller to resolve the Big Data and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive which data objects to request itself. Some would say this goes against normal conventions, but I say it is necessary when many widgets in the same view individually deal with large amounts of data. This method of letting directives determine when data requests should be made is only suggested when many widgets on a page depend on large data sets.

To create this in a real-life example, let's take the phoneService function created earlier and add a new method to it called getPhone. Refer to the following code:

this.getPhone = function(config) {
  return $http.get(config.url);
};

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and an id value, which allows the application to request the details on demand. We do not need to alter the getPhones method created earlier; we only need to alter the data that is supplied when the request is made. It should be noted that any directive that requests data should be tested to prove that it requests the correct data at the right time.

Testing directives that control data

Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with the logic of how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases it is necessary for directives and XHR logic to be used together, and when these use cases reveal themselves in production, it is important to test them properly.

The tests in the book use two very generic steps to prove business logic. These steps are as follows:

Create, compile, and link DOM to the AngularJS digest cycle
Test scope variables and DOM interactions for correct outputs

Now we will add one more step, sitting between those two:

Make sure all data communication is fired correctly

AngularJS makes it very simple to test resource-related logic, because it ships with a built-in backend service mock that offers many different ways to create fake endpoints returning structured data. The service is called $httpBackend.
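As an illustration, here is a minimal Jasmine test sketch using $httpBackend; the phoneCatalog directive, its markup, and the URL are assumptions for this example, not code from the article:

describe('phoneCatalog directive', function() {
  var $httpBackend, $compile, $rootScope;

  beforeEach(module('angularjs-promise-example'));
  beforeEach(inject(function(_$httpBackend_, _$compile_, _$rootScope_) {
    $httpBackend = _$httpBackend_;
    $compile = _$compile_;
    $rootScope = _$rootScope_;
  }));

  it('requests the phone details when compiled', function() {
    // The new middle step: assert the request is fired correctly.
    $httpBackend.expectGET('phones/1.json').respond({ id: 1, name: 'Nexus' });

    var element = $compile('<phone-catalog phone-id="1"></phone-catalog>')($rootScope);
    $rootScope.$digest();
    $httpBackend.flush(); // deliver the mocked response

    expect(element.text()).toContain('Nexus');

    // Fail the test if any expected request was missed or any extra one was made.
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });
});

The two verifyNoOutstanding checks at the end are what prove that the directive requested exactly the data we expected, and nothing more.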
Digitally Signing and Verifying Messages in Web Services (part 1)

Packt
22 Oct 2009
8 min read
Confidentiality and integrity are two critical components of web services. While confidentiality can be ensured by means of encryption, the encrypted data can still be overwritten, and the integrity of the message can be compromised. It is therefore equally important to protect the integrity of the message; digital signatures help us do just that.

Overview of Digital Signatures

In the web services scenario, XML messages are exchanged between the client application and the web services. Certain messages contain critical business information, and therefore the integrity of the message should be ensured. Ensuring the integrity of a message is not a new concept; it has been around for a long time. The idea is to make sure that the data was not tampered with in transit between the sender and the receiver.

Consider, for example, that Alice and Bob are exchanging emails that are critical to business. Alice wants to make sure that Bob receives exactly the email she sent and that no one tampered with or modified it in between. In order to ensure the integrity of the message, Alice digitally signs the message using her private key, and when Bob receives the message, he will check that the signature is still valid before he trusts or reads the email.

What is this digital signature? And how does it prove that no one else tampered with the data? When a message is digitally signed, the sender basically follows these steps:

Create a digest value of the message (a unique string value for the message, using a SHA1 or MD5 algorithm).
Encrypt the digest value using the private key, which is known only to the sender.
Exchange the message along with the encrypted digest value.

MD5 and SHA1 are message digest algorithms used to calculate the digest value. The digest, or hash, value is a non-reversible, unique string for any given data: the digest value changes even if a single space is added or removed. SHA1 produces a 160-bit digest value, while MD5 produces a 128-bit value.

When Bob receives the message, his first task is to validate the signature. Validation of the signature goes through a sequence of steps:

Create a digest value of the message again, using the same algorithm.
Decrypt the encrypted digest value (the signature) using the public key of Alice (obtained out of band, or as part of the message).
Validate that the decrypted digest value matches the digest value computed from the received message.

Since the public key is known or exchanged along with the message, Bob can also check the validity of the certificate itself. Digital certificates are issued by a trusted party such as Verisign; when a certificate is compromised, it can be revoked, which invalidates the public key. Once the signature is verified, Bob can trust that the message was not tampered with by anyone else. He can also validate the certificate to make sure that it is not expired or revoked, which guards against the case where Alice's private key has been compromised.

Digital Signatures in Web Services

In the last section, we learnt about digital signatures. Since web services are all about interoperability, digital-signature-related information is represented in an industry-standard format called XML Signature (standardized by the W3C). The following are the key data elements represented in an interoperable manner by XML Signature:

What data (what part of the SOAP message) is digitally signed?
What hash algorithm (MD5 or SHA1) is used to create the digest value?
What signature algorithm is used?
Information about the certificate or key.
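To make the flow concrete, here is a minimal sketch of the sign-and-verify sequence using Node.js's built-in crypto module. This is purely illustrative: it is not part of Oracle WSM, the key pair is generated on the fly rather than loaded from a keystore, and SHA1 is used only to match the article's examples (modern code would prefer SHA-256).

var crypto = require('crypto');

// Alice's key pair (in practice this would be loaded from a keystore).
var keys = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

var message = 'critical business email';

// Alice: digest the message and sign the digest with her private key.
// createSign('RSA-SHA1') performs both steps: SHA1 digest, then RSA signing.
var signer = crypto.createSign('RSA-SHA1');
signer.update(message);
var signature = signer.sign(keys.privateKey, 'base64');

// Bob: recompute the digest and check it against the signature using
// Alice's public key. Any change to the message breaks the match.
var verifier = crypto.createVerify('RSA-SHA1');
verifier.update(message);
console.log(verifier.verify(keys.publicKey, signature, 'base64')); // true

// A tampered message fails verification.
var tamperedCheck = crypto.createVerify('RSA-SHA1');
tamperedCheck.update(message + ' (modified in transit)');
console.log(tamperedCheck.verify(keys.publicKey, signature, 'base64')); // false

Note how createSign and createVerify bundle the digest and key operations together, mirroring the two step lists above.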
In the next section, we will describe how Oracle Web Services Manager can help generate and verify signatures in web services.

Signature Generation Using Oracle WSM

Oracle Web Services Manager can centrally manage the security policy, including digital signature generation. One of the greatest advantages of using Oracle WSM to digitally sign messages is that the policy information and the digital certificate information are centrally stored and managed. An organization can have many web services, some of which exchange business-critical information and require that messages be digitally signed. Oracle WSM plays a key role when different web services have different signing requirements, or when certain actions must be taken before or after signing the message. Oracle WSM can be used to configure the signature at each web service level, which reduces the burden of deploying certificates across multiple systems. In this section, we will discuss how to digitally sign the response message of a web service using Oracle WSM.

Sign Message Policy Step

As a quick refresher: in Oracle WSM, each web service is registered within a gateway or an agent, and a policy is attached to each web service. The policy steps are divided mainly into a request pipeline template and a response pipeline template, where different policies can be applied for request or response message processing. In this section, I will describe how to configure the policy for a response pipeline template to digitally sign the response message. It is assumed that the web service is registered within a gateway; a detailed example will be described later in this article. In the response pipeline, we can add a policy step called Sign Message to digitally sign the message. In order to digitally sign a message, the key components required are:

Private key store
Private key password
The part of the SOAP message that is being signed
The signature algorithm being used

The following screenshot describes the Sign Message policy step with certain values populated.

In the previous screenshot, the values that are populated are:

Keystore location: The location where the private key file is located.
Keystore type: Whether it is PKCS12 or JKS.
Keystore password: The password to the keystore.
Signer's private-key alias: The alias used to access the private key in the keystore.
Signer's private-key password: The password to access the private key.
Signed Content: Whether the BODY or the envelope of the SOAP message should be signed.

The above information is part of a policy attached to the time service, which will sign the response message. As per the information shown in the screenshot, the BODY of the SOAP response will be digitally signed using SHA1 as the digest algorithm and a PKCS12 keystore. Once the message is signed, the SOAP message will look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <soap:Header>
    <wsse:Security soap:mustUnderstand="1">
      <wsse:BinarySecurityToken
          ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
          EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
          wsu:Id="_VLL9yEsi09I9f5ihwae2lQ22">SecurityTOkenoKE2ZA==</wsse:BinarySecurityToken>
      <dsig:Signature>
        <dsig:SignedInfo>
          <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
          <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
          <dsig:Reference URI="#ishUwYWW2AAthrxhlpv1CA22">
            <dsig:Transforms>
              <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            </dsig:Transforms>
            <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <dsig:DigestValue>ynuqANuYM3qzdTnGOLT7SMxWHY=</dsig:DigestValue>
          </dsig:Reference>
          <dsig:Reference URI="#UljvWiL8yjedImz6zy0pHQ22">
            <dsig:Transforms>
              <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            </dsig:Transforms>
            <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <dsig:DigestValue>9ZebvrbVYLiPv1BaVLDaLJVhwo=</dsig:DigestValue>
          </dsig:Reference>
        </dsig:SignedInfo>
        <dsig:SignatureValue>QqmUUZDLNeLpAEFXndiBLk</dsig:SignatureValue>
        <dsig:KeyInfo>
          <wsse:SecurityTokenReference wsu:Id="_7vjdWs1ABULkiLeE7Y4lAg22">
            <wsse:Reference URI="#_VLL9yEsi09I9f5ihwae2lQ22"/>
          </wsse:SecurityTokenReference>
        </dsig:KeyInfo>
      </dsig:Signature>
      <wsu:Timestamp wsu:Id="UljvWiL8yjedImz6zy0pHQ22">
        <wsu:Created>2007-11-16T15:13:48Z</wsu:Created>
      </wsu:Timestamp>
    </wsse:Security>
  </soap:Header>
  <soap:Body wsu:Id="ishUwYWW2AAthrxhlpv1CA22">
    <n:getTimeResponse>
      <Result xsi:type="xsd:string">10:13 AM</Result>
    </n:getTimeResponse>
  </soap:Body>
</soap:Envelope>