
How-To Tutorials - Web Development

1802 Articles

DotNetNukeSkinning: Creating Containers

Packt
15 Oct 2009
9 min read
Creating our first container

In VWD (Visual Web Developer), from the Solution Explorer window, find the following location and create a new folder called FirstSkin: ~/Portals/_default/Containers/. Add a new item by right-clicking on this new folder: add a new HTML file and call it Container.htm. Similarly, add a CSS and an XML file, Container.css and Container.xml respectively. Clear out the code that VWD initializes the newly created files with.

DNN tokens for containers

The tokens we have used so far apply mostly to creating skins, not containers. Containers have their own set of tokens, and we'll be working with them throughout the rest of this article. The following is a listing of them:

- Actions: This is the menu that appears when you hover over the triangle. It is traditionally placed to the left of the icon and title of the module.
- Title: As you can imagine, this is the title of the module displayed in the container. It is usually placed at the top.
- Icon: Most modules don't have an icon, but many of the administrative pages in DotNetNuke have icons assigned to them. You can always assign icons to your modules, but none are set by default.
- Visibility: This skin object is traditionally displayed as a plus or a minus sign inside a small square. It acts as a toggle to show or hide (collapse or expand) the content of the module.
- Content Pane: Similar to when we created our skin, this is where the content goes. The difference here is that we have only one content pane. It is required in order to make the container work.
- LINKACTIONS: This isn't used very often in containers. It is a listing of links that gives you access to the same things contained in the ACTIONS skin object.
- PRINTMODULE: This is the small printer icon you see in the preceding screenshot. Clicking it allows you to print the contents of the module.
- ACTIONBUTTON: This skin object allows you to display items as links or image links to commonly used items found in the actions menu.

The last item on that list, the ActionButton, is different from the others in that we can use several of them in different ways. When used as tokens, you place them in your container HTM file as [ACTIONBUTTON:1], [ACTIONBUTTON:2], [ACTIONBUTTON:3], and so on. As you can imagine, we have to define what each of these action buttons refers to. We do this in the XML file with a setting called CommandName. For example, the following is a snippet of what you could add for an action button:

    <Objects>
      <Object>
        <Token>[ACTIONBUTTON:1]</Token>
        <Settings>
          <Setting>
            <Name>CommandName</Name>
            <Value>AddContent.Action</Value>
          </Setting>
          <Setting>
            <Name>DisplayIcon</Name>
            <Value>True</Value>
          </Setting>
          <Setting>
            <Name>DisplayLink</Name>
            <Value>True</Value>
          </Setting>
        </Settings>
      </Object>
    </Objects>

Looking at the CommandName setting, there are several values we can use, and they determine which of the action buttons is inserted into our container. The following is a listing:

- AddContent.Action: This typically allows you to add content or, in the case of the Text/HTML module, edit the content of the module.
- SyndicateModule.Action: This is an XML feed button, if syndication is supported by the module.
- PrintModule.Action: This is the printer icon allowing users to print the content of the module. Yes, this is a second way of adding it, as we already have the PRINTMODULE token.
- ModuleSettings.Action: This is a button/link which takes you to the settings of the module.
- ModuleHelp.Action: This is a help question mark icon/link that we've seen in the preceding screenshots.

Adding to the container

Now that we know what can be added, let's add these tokens to our new container. We'll start off with the HTM file and then move on to the CSS file. In our HTM file, add the following code. This will utilize all of the container tokens with the exception of the action buttons, which we'll add soon.

    <div style="background-color:White;">
      ACTIONS[ACTIONS]
      <hr />
      TITLE[TITLE]
      <hr />
      ICON[ICON]
      <hr />
      VISIBILITY[VISIBILITY]
      <hr />
      CONTENTPANE[CONTENTPANE]
      <hr />
      LINKACTIONS[LINKACTIONS]
      <hr />
      PRINTMODULE[PRINTMODULE]
    </div>

Go to the skin admin page (Admin | Skins on the menu). Now that we have a container in the folder called FirstSkin, we get a little added benefit: when you select FirstSkin as the skin to deal with, the container will be selected as well, so you can work with them as a unit, or as a Skin Package. Go ahead, parse the skin package and apply our FirstSkin container. Then go to the Home page. It may not be pretty, but pretty is not what we were looking for. This container, as it sits, gives us a better illustration of how each token is displayed, with a convenient label beside each. There are a few things to point out here before we take out our handy labels and put in some structure:

- Our module has no icon, so we won't see one here. If you go to the administrative pages, you will see the icon.
- LINKACTIONS is a skin object that you'll use if you don't want to use the action menu ([ACTIONS]).

Table Structure

The structure of our container will be quite similar to how we laid out our skin. Let's start off with a basic table layout. We'll have a table with three rows. The first row will be for the header area, which will contain things like the action menu, the title, the icon, and so forth. The second row will be for the content. The third row will be for the footer items, like the printer and edit icons/links. Both the header and footer rows will have nested tables inside to allow independent positioning within the cells. The markup which defines these three rows is shown here:

    <table border="0" cellpadding="0" cellspacing="0" class="ContainerMainTable">
      <tr>
        <td style="padding:5px;">
          <table border="0" cellpadding="0" cellspacing="0" class="ContainerHeader">
            <tr>
              <td style="width:5px;">[ACTIONS]</td>
              <td style="width:5px;">[ICON]</td>
              <td align="left">[TITLE]</td>
              <td style="width:5px;padding-right:5px;" valign="top">[VISIBILITY]</td>
              <td style="width:5px;">[ACTIONBUTTON:4]</td>
            </tr>
          </table>
        </td>
      </tr>
      <tr>
        <td class="ContainerContent">
          [CONTENTPANE]
        </td>
      </tr>
      <tr>
        <td style="padding:5px;">
          <table border="0" cellpadding="0" cellspacing="0" class="ContainerFooter">
            <tr>
              <td>[ACTIONBUTTON:1]</td>
              <td>[ACTIONBUTTON:2]</td>
              <td></td>
              <td>[ACTIONBUTTON:3]</td>
              <td style="width:10px;">[PRINTMODULE]</td>
            </tr>
          </table>
        </td>
      </tr>
    </table>

Making necessary XML additions

The action buttons we used will not work unless we add items to our XML file. For each of our action buttons, we'll add a token element and then a few setting elements. There are three specific settings we'll set up for each: CommandName, DisplayIcon, and DisplayLink.
See the following code:

    <Objects>
      <Object>
        <Token>[ACTIONBUTTON:1]</Token>
        <Settings>
          <Setting>
            <Name>CommandName</Name>
            <Value>AddContent.Action</Value>
          </Setting>
          <Setting>
            <Name>DisplayIcon</Name>
            <Value>True</Value>
          </Setting>
          <Setting>
            <Name>DisplayLink</Name>
            <Value>True</Value>
          </Setting>
        </Settings>
      </Object>
      <Object>
        <Token>[ACTIONBUTTON:2]</Token>
        <Settings>
          <Setting>
            <Name>CommandName</Name>
            <Value>SyndicateModule.Action</Value>
          </Setting>
          <Setting>
            <Name>DisplayIcon</Name>
            <Value>True</Value>
          </Setting>
          <Setting>
            <Name>DisplayLink</Name>
            <Value>False</Value>
          </Setting>
        </Settings>
      </Object>
      <Object>
        <Token>[ACTIONBUTTON:3]</Token>
        <Settings>
          <Setting>
            <Name>CommandName</Name>
            <Value>ModuleSettings.Action</Value>
          </Setting>
          <Setting>
            <Name>DisplayIcon</Name>
            <Value>True</Value>
          </Setting>
          <Setting>
            <Name>DisplayLink</Name>
            <Value>False</Value>
          </Setting>
        </Settings>
      </Object>
      <Object>
        <Token>[ACTIONBUTTON:4]</Token>
        <Settings>
          <Setting>
            <Name>CommandName</Name>
            <Value>ModuleHelp.Action</Value>
          </Setting>
          <Setting>
            <Name>DisplayIcon</Name>
            <Value>True</Value>
          </Setting>
          <Setting>
            <Name>DisplayLink</Name>
            <Value>False</Value>
          </Setting>
        </Settings>
      </Object>
    </Objects>

The CommandName setting determines which action each numbered button performs; notice that the four action buttons use different CommandName values, as you might expect. The DisplayIcon setting is a Boolean (true/false) value indicating whether or not the icon is displayed; DisplayLink is similar and controls whether the text is rendered as a link. A good example of both is the EditText button ([ACTIONBUTTON:1]) in our Text/HTML module on the Home page: it has both the icon and the text as links to add or edit the content of the module.
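The article creates Container.css along with the HTM and XML files, but this excerpt never shows its contents. As a hedged starting point, a minimal stylesheet might look like the sketch below; the class names come from the container markup above, while the rules themselves are only illustrative and are not the book's actual Container.css:

```css
/* Illustrative rules only; the selectors match the container markup,
   but the book's real Container.css will differ. */
.ContainerMainTable {
    width: 100%;
    border: solid 1px #cccccc;
    background-color: White;
}

.ContainerHeader {
    width: 100%;
    font-weight: bold;
}

.ContainerContent {
    padding: 5px;
}

.ContainerFooter {
    width: 100%;
    font-size: smaller;
}
```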


Asynchronous Communication between Components

Packt
09 Oct 2015
12 min read
In this article by Andreas Niedermair, the author of the book Mastering ServiceStack, we will look at asynchronous communication between components. Recent releases of .NET have added several new ways to further embrace asynchronous and parallel processing by introducing the Task Parallel Library (TPL) and async and await.

The need for asynchronous processing has been there since the early days of programming. Its main concept is to offload processing to another thread or process so that the calling thread is released from waiting, and it has become a standard model since the rise of GUIs. In such interfaces only one thread is responsible for drawing the GUI; it must not be blocked, in order to remain available and to avoid putting the application into a non-responding state.

This paradigm is also a core point in distributed systems: at some point, long-running operations are offloaded to a separate component, either to overcome blocking or to avoid resource bottlenecks by using dedicated machines, which also makes the processing more robust against unexpected application pool recycling and similar issues. A synonym for "fire-and-forget" is "one-way", which is also reflected in the design of the static routes of ServiceStack endpoints, where the default is /{format}/oneway/{service}.

Asynchronism adds a whole new level of complexity to our processing chain, as some callers might depend on a return value. This problem can be overcome by adding a callback or another event to your design.

Messaging, or more generally a producer-consumer chain, is a fundamental design pattern that can be applied within the same process or across processes, on the same machine or across machines, to decouple components. Consider the following architecture: the client issues a request to the service, which processes the message and returns a response. The server is known and is directly bound to the client, which makes an on-the-fly addition of servers practically impossible; you'd need to reconfigure the clients to reflect the collection of servers on every change and implement a distribution logic for requests. Therefore, a new component is introduced, which acts as a broker (without any processing of the message, except delivery) between the client and the service, to decouple the service from the client. This gives us the opportunity to introduce more services for heavy-load scenarios by simply registering a new instance with the broker, as shown in the following figure. I have left out the clustering (scaling) of brokers and also the routing of messages on purpose at this stage of the introduction.

In many cross-process scenarios a database is introduced as a broker, which is constantly polled by services (and clients, if there's a response involved) to check whether there's a message to be processed. Adding a database as a broker and implementing your own logic can be absolutely fine for basic systems, but for more advanced scenarios it lacks some essential features that Message Queues come shipped with:

- Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility to add more processing nodes to your data flow.
- Resilience: Messages are guaranteed to be delivered and processed, as automatic retrying is available for non-acknowledged (processed) messages. If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later and are requeued after fixing the issue that caused the failure.
  In case of a partial failure of your infrastructure, clients can still produce messages that get delivered and processed as soon as there is even a single consumer back online.
- Pushing instead of polling: This is where asynchronism comes into play. Clients do not constantly poll for messages; instead, messages are pushed by the broker whenever there is a new message in their subscribed queue. This minimizes the spinning and wait time that polling incurs, for example when a polling timer ticks only every 10 seconds.
- Guaranteed order: Most Message Queues offer a guaranteed order of processing under defined conditions (mostly FIFO).
- Load balancing: With multiple services registered for messages, there is an inherent load balancing, so heavy-load scenarios can be handled better. In addition to this round-robin routing there are other routing logics, such as smallest-mailbox, tail-chopping, or random routing.
- Message persistence: Message Queues can be configured to persist their data to disk and even survive restarts of the host on which they are running. To overcome downtime of the Message Queue you can even set up a cluster to offload the demand to other brokers while restarting a single node.
- Built-in priority: Message Queues usually have separate queues for different messages and even provide a separate in-queue for prioritized messages.

There are many more features, such as time-to-live, security, and batching modes, which we will not cover as they are outside the scope of ServiceStack.

In the following examples we will refer to two basic DTOs:

    public class Hello : ServiceStack.IReturn<HelloResponse>
    {
        public string Name { get; set; }
    }

    public class HelloResponse
    {
        public string Result { get; set; }
    }

The Hello class is used to send a Name to a consumer, which generates a response message that is enqueued in the Message Queue as well.

RabbitMQ

RabbitMQ is a mature broker built on top of the Advanced Message Queuing Protocol (AMQP), which makes it possible to solve even more complex scenarios. Messages will survive restarts of the RabbitMQ service, and the additional guarantee of delivery is accomplished by depending upon an acknowledgement of the receipt (and processing) of the message, which ServiceStack does by default for typical scenarios.

The client for this Message Queue is located in the ServiceStack.RabbitMq NuGet package (it uses the official client from the RabbitMQ.Client package under the hood). You can add additional protocols to RabbitMQ, such as Message Queue Telemetry Transport (MQTT) and Streaming Text Oriented Messaging Protocol (STOMP), with plugins to ease interop scenarios. Due to its complexity, we will focus on an abstracted interaction with the broker; there are many books and articles available for a deeper understanding of RabbitMQ. A quick overview of the covered scenarios is available at https://www.rabbitmq.com/getstarted.html.

The method of publishing a message with RabbitMQ does not differ much from RedisMQ:

    using ServiceStack;
    using ServiceStack.RabbitMq;

    using (var rabbitMqServer = new RabbitMqServer())
    {
        using (var messageProducer = rabbitMqServer.CreateMessageProducer())
        {
            var hello = new Hello { Name = "Demo" };
            messageProducer.Publish(hello);
        }
    }

This will create a Hello object and publish it to the corresponding queue in RabbitMQ.
To retrieve this message, we need to register a handler, as shown here:

    using System;
    using ServiceStack;
    using ServiceStack.RabbitMq;
    using ServiceStack.Text;

    var rabbitMqServer = new RabbitMqServer();
    rabbitMqServer.RegisterHandler<Hello>(message =>
    {
        var hello = message.GetBody();
        var name = hello.Name;
        var result = "Hello {0}".Fmt(name);
        result.Print();
        return null;
    });
    rabbitMqServer.Start();

    "Listening for hello messages".Print();
    Console.ReadLine();

    rabbitMqServer.Dispose();

This registers a handler for Hello objects and prints a message to the console. In favor of a straightforward example, we are omitting all the parameters of the RabbitMqServer constructor that have default values, which will connect us to the local instance on port 5672. To change this, you can either provide a connectionString parameter (and optional credentials) or use a RabbitMqMessageFactory object to customize the connection.

Setup

Setting up RabbitMQ involves a bit of effort. First you need to install Erlang from http://www.erlang.org/download.html, which is the runtime for RabbitMQ due to its functional and concurrent nature. Then you can grab the installer from https://www.rabbitmq.com/download.html, which will set RabbitMQ up and running as a service with a default configuration.

Processing chain

Due to its complexity, the processing chain of any mature Message Queue is different from what you know from RedisMQ. Exchanges are introduced in front of queues to route messages to their respective queues according to their routing keys. The default exchange name is mx.servicestack (defined in ServiceStack.Messaging.QueueNames.Exchange) and is used in any Publish call of an IMessageProducer or IMessageQueueClient object. With IMessageQueueClient.Publish you can inject a routing key (the queueName parameter) to customize the routing of a queue.

Failed messages are published to ServiceStack.Messaging.QueueNames.ExchangeDlq (mx.servicestack.dlq) and routed to queues with the name mq:{type}.dlq. Successful messages are published to ServiceStack.Messaging.QueueNames.ExchangeTopic (mx.servicestack.topic) and routed to the queue mq:{type}.outq. Additionally, there is also a priority queue alongside the in-queue, with the name mq:{type}.priority.

If you interact with RabbitMQ on a lower level, you can publish directly to queues and leave the routing via an exchange out of the picture. Each queue has features to define whether the queue is durable, whether it deletes itself after the last consumer disconnects, and which exchange (and routing key) is to be used to publish dead messages. More information on the concepts, different exchange types, queues, and acknowledging messages can be found at https://www.rabbitmq.com/tutorials/amqp-concepts.html.

Replying directly back to the producer

Messages published to a queue are dequeued in FIFO mode, hence there is no guarantee that responses are delivered to the issuer of the initial message.
To force a response to the originator, you can make use of the ReplyTo property of a message:

    using System;
    using ServiceStack;
    using ServiceStack.Messaging;
    using ServiceStack.RabbitMq;
    using ServiceStack.Text;

    var rabbitMqServer = new RabbitMqServer();
    var messageQueueClient = rabbitMqServer.CreateMessageQueueClient();
    var queueName = messageQueueClient.GetTempQueueName();

    var hello = new Hello { Name = "reply to originator" };
    messageQueueClient.Publish(new Message<Hello>(hello)
    {
        ReplyTo = queueName
    });

    var message = messageQueueClient.Get<HelloResponse>(queueName);
    var helloResponse = message.GetBody();

This code is more or less identical to the RedisMQ approach, but it does something different under the hood. The messageQueueClient.GetTempQueueName call creates a temporary queue, whose name is generated by ServiceStack.Messaging.QueueNames.GetTempQueueName. This temporary queue does not survive a restart of RabbitMQ and gets deleted as soon as the consumer disconnects. As each queue is a separate Erlang process, you may encounter the process limit of Erlang and the maximum number of file descriptors of your OS.

Broadcasting a message

In many scenarios a broadcast to multiple consumers is required, for example if you need to attach multiple loggers to a system; this needs a lower level of implementation. The solution to this requirement is to create a fan-out exchange that forwards the message to all bound queues instead of one connected queue, where each queue is consumed exclusively by one consumer, as shown:

    using ServiceStack;
    using ServiceStack.Messaging;
    using ServiceStack.RabbitMq;

    var fanoutExchangeName = string.Concat(QueueNames.Exchange, ".", ExchangeType.Fanout);
    var rabbitMqServer = new RabbitMqServer();
    var messageProducer = (RabbitMqProducer)rabbitMqServer.CreateMessageProducer();
    var channel = messageProducer.Channel;
    channel.ExchangeDeclare(exchange: fanoutExchangeName,
                            type: ExchangeType.Fanout,
                            durable: true,
                            autoDelete: false,
                            arguments: null);

With the cast to RabbitMqProducer we gain access to lower-level actions, which we need to declare an exchange with the name mx.servicestack.fanout that is durable and does not get deleted. Now, we need to bind a temporary and exclusive queue to the exchange:

    var messageQueueClient = (RabbitMqQueueClient)rabbitMqServer.CreateMessageQueueClient();
    var queueName = messageQueueClient.GetTempQueueName();
    channel.QueueBind(queue: queueName,
                      exchange: fanoutExchangeName,
                      routingKey: QueueNames<Hello>.In);

The call to messageQueueClient.GetTempQueueName() creates a temporary queue, which lives as long as just one consumer is connected. This queue is bound to the fan-out exchange with the routing key mq:Hello.inq. To publish the messages we need to use the RabbitMqProducer object (messageProducer):

    var hello = new Hello { Name = "Broadcast" };
    var message = new Message<Hello>(hello);
    messageProducer.Publish(queueName: QueueNames<Hello>.In,
                            message: message,
                            exchange: fanoutExchangeName);

Even though the first parameter of Publish is named queueName, it is propagated as the routingKey to the underlying PublishMessage method call.
This will publish the message on the newly generated exchange with mq:Hello.inq as the routing key. Now, we need to encapsulate the handling of the message:

    var messageHandler = new MessageHandler<Hello>(rabbitMqServer, message =>
    {
        var hello = message.GetBody();
        var name = hello.Name;
        name.Print();
        return null;
    });

The MessageHandler<T> class is used internally in all the messaging solutions and takes care of retries and replies. Now, we need to connect the message handler to the queue:

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Exceptions;
    using ServiceStack.Messaging;
    using ServiceStack.RabbitMq;

    var consumer = new RabbitMqBasicConsumer(channel);
    channel.BasicConsume(queue: queueName, noAck: false, consumer: consumer);

    Task.Run(() =>
    {
        while (true)
        {
            BasicGetResult basicGetResult;
            try
            {
                basicGetResult = consumer.Queue.Dequeue();
            }
            catch (EndOfStreamException)
            {
                // this is ok
                return;
            }
            catch (OperationInterruptedException)
            {
                // this is ok
                return;
            }
            var message = basicGetResult.ToMessage<Hello>();
            messageHandler.ProcessMessage(messageQueueClient, message);
        }
    });

This creates a RabbitMqBasicConsumer object, which is used to consume the temporary queue. To process messages, we try to dequeue from the Queue property in a separate task. This example does not handle disconnects from and reconnects to the server, and does not integrate with the services (however, both can be achieved).

Integrate RabbitMQ in your service

The integration of RabbitMQ in a ServiceStack service does not differ much from RedisMQ. All you have to do is adapt the Configure method of your host:

    using Funq;
    using ServiceStack;
    using ServiceStack.Messaging;
    using ServiceStack.RabbitMq;

    public override void Configure(Container container)
    {
        container.Register<IMessageService>(arg => new RabbitMqServer());
        container.Register<IMessageFactory>(arg => new RabbitMqMessageFactory());

        var messageService = container.Resolve<IMessageService>();
        messageService.RegisterHandler<Hello>(this.ServiceController.ExecuteMessage);
        messageService.Start();
    }

The registration of an IMessageService is needed for the rerouting of the handlers to your service; the registration of an IMessageFactory is relevant if you want to publish a message in your service with PublishMessage.

Summary

In this article the messaging pattern was introduced, along with the available clients for existing Message Queues.
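The examples above always connect to a local RabbitMQ instance on the default port. As noted earlier, the connection can be customized by passing a connection string (and optional credentials) to RabbitMqServer. The following is a minimal sketch of that idea; the host name and credentials are illustrative placeholders rather than values from the article, and the exact constructor parameters should be checked against the ServiceStack.RabbitMq version in use:

```csharp
using ServiceStack.RabbitMq;

// Sketch only: connect to a non-default RabbitMQ host.
// "rabbitmq.example.local" and the guest credentials are placeholders.
var rabbitMqServer = new RabbitMqServer(
    connectionString: "rabbitmq.example.local:5672",
    username: "guest",
    password: "guest");

rabbitMqServer.RegisterHandler<Hello>(message =>
    // Reply with a HelloResponse built from the incoming message body
    new HelloResponse { Result = "Hello " + message.GetBody().Name });

rabbitMqServer.Start();
```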

Introduction to nginx

Packt
31 Jul 2013
8 min read
So, what is nginx?

The best way to describe nginx (pronounced engine-x) is as an event-based multi-protocol reverse proxy. This sounds fancy, but it's not just buzzwords; it actually affects how we approach configuring nginx. It also highlights some of the flexibility that nginx offers. While it is often used as a web server and an HTTP reverse proxy, it can also be used as an IMAP reverse proxy or even a raw TCP reverse proxy. Thanks to the plugin-ready code structure, we can utilize a large number of first- and third-party modules to implement a diverse set of features, making nginx an ideal fit for many typical use cases.

A more accurate description would be to say that nginx is a reverse proxy first, and a web server second. I say this because it can help us visualize the request flow through the configuration file and rationalize how to achieve the desired configuration of nginx. The core difference this creates is that nginx works with URIs instead of files and directories, and based on that determines how to process the request. This means that when we configure nginx, we tell it what should happen for a certain URI rather than what should happen for a certain file on the disk.

A beneficial part of nginx being a reverse proxy is that it fits into a large number of server setups, and can handle many things that other web servers simply aren't designed for. A popular question is "Why even bother with nginx when Apache httpd is available?" The answer lies in the way the two programs are designed. The majority of Apache setups use prefork mode, where we spawn a certain number of processes and then embed our dynamic language in each process. This setup is synchronous, meaning that each process can handle one request at a time, whether that connection is for a PHP script or an image file.

In contrast, nginx uses an asynchronous event-based design where each spawned process can handle thousands of concurrent connections. The downside here is that nginx will, for security and technical reasons, not embed programming languages into its own process; this means that to handle those we will need to reverse proxy to a backend, such as Apache, PHP-FPM, and so on. Thankfully, as nginx is a reverse proxy first and foremost, this is extremely easy to do and still allows us major benefits, even when keeping Apache in use.

Let's take a look at a use case where Apache is used as an application server, as described earlier, rather than just a web server. We have embedded PHP, Perl, or Python into Apache, which has the primary disadvantage of making each request costly. This is because the Apache process is kept busy until the request has been fully served, even if it's a request for a static file. Our online service has gotten popular and we now find that our server cannot keep up with the increased demand. In this scenario, introducing nginx as a spoon-feeding layer would be ideal. With an nginx spoon-feeding layer sitting between our end user and Apache, when a request comes in, nginx will reverse proxy it to Apache if it is for a dynamic file, while it will handle any static file requests itself. This means that we offload a lot of the request handling from the expensive Apache processes to the more lightweight nginx processes, and increase the number of end users we can serve before having to spend money on more powerful hardware. (A configuration sketch illustrating this kind of setup appears at the end of this article.)

Another example scenario is where we have an application being used from all over the world.
We don't have any static files, so we can't easily offload a number of requests from Apache. In this use case, our PHP process is busy from the time the request comes in until the user has finished downloading the response. Sadly, not everyone in the world has fast internet and, as a result, the sending process could be busy for a relatively significant period of time. Let's assume our visitor is on an old 56k modem and has a maximum download speed of 5 KB per second; it will take them five seconds to download a 25 KB gzipped HTML file generated by PHP. That's five seconds during which our process cannot handle any other request. When we introduce nginx into this setup, we have PHP spending only microseconds generating the response, but have nginx spend five seconds transferring it to the end user. Because nginx is asynchronous, it will happily handle other connections in the meantime, and thus we significantly increase the number of concurrent requests we can handle.

In the previous two examples I used scenarios where nginx was used in front of Apache, but naturally this is not a requirement. nginx is capable of reverse proxying via, for instance, FastCGI, UWSGI, SCGI, HTTP, or even TCP (through a plugin), enabling backends such as PHP-FPM, Gunicorn, Thin, and Passenger.

Quick start – Creating your first virtual host

It's finally time to get nginx up and running. To start out, let's quickly review the configuration file. If you installed via a system package, the default configuration file location is most likely /etc/nginx/nginx.conf. If you installed via source and didn't change the path prefix, nginx installs itself into /usr/local/nginx and places nginx.conf in a /conf subdirectory. Keep this file open as a reference to help visualize many of the things described in this article.

Step 1 – Directives and contexts

To understand what we'll be covering in this section, let me first introduce a bit of terminology that the nginx community at large uses. Two central concepts in the nginx configuration file are directives and contexts. A directive is basically just an identifier for the various configuration options. Contexts refer to the different sections of the nginx configuration file. This term is important because the documentation often states which context a directive is allowed to appear within.

A glance at the standard configuration file should reveal that nginx uses a layered configuration format where blocks are denoted by curly brackets {}. These blocks are what are referred to as contexts. The topmost context is called main, and is not denoted as a block but is rather the configuration file itself. The main context has only a few directives we're really interested in, the two major ones being worker_processes and user. These directives control how many worker processes nginx should run and which user/group nginx should run them under.

Within the main context there are two possible subcontexts, the first one being called events. This block handles directives that deal with the event-polling nature of nginx. Mostly we can ignore every directive in here, as nginx can automatically configure this to be the most optimal; however, there's one directive which is interesting, namely worker_connections. This directive controls the number of connections each worker can handle. It's important to note here that nginx is a terminating proxy, so if you HTTP proxy to a backend, such as Apache httpd, that will use up two connections. The second subcontext is the interesting one, called http.
This context deals with everything related to HTTP, and this is what we will be working with almost all of the time. While there are directives that are configured directly in the http context, for now we'll focus on a subcontext within http called server. The server context is the nginx equivalent of a virtual host. This context is used to handle configuration directives based on the host name your sites are under. Within the server context, we have another subcontext called location. The location context is what we use to match the URI. Basically, a request to nginx will flow through our contexts, matching first the server block with the hostname provided by the client, and secondly the location context with the URI provided by the client.

Depending on the installation method, there might not be any server blocks in the nginx.conf file. Typically, system package managers take advantage of the include directive, which allows us to do an in-place inclusion into our configuration file. This allows us to separate out each virtual host and keep our configuration file more organized. If there aren't any server blocks, check the bottom of the file for an include directive and check the directory from which it includes; it should have a file which contains a server block.

Step 2 – Define your first virtual host

Finally, let us define our first server block!

    server {
        listen      80;
        server_name example.com;
        root        /var/www/website;
    }

That is basically all we need, and strictly speaking, we don't even need to define which port to listen on, as port 80 is the default. However, it's generally good practice to keep it in there should we want to search for all virtual hosts on port 80 later on.

Summary

This article provided details about the important aspects of nginx. It also covered the configuration of a virtual host in two simple steps, along with a configuration example.
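To tie the directives and contexts together with the spoon-feeding scenario described earlier, here is a minimal sketch of a virtual host that serves static files itself and proxies everything else over HTTP to an Apache or PHP-FPM style backend. The paths and the backend address are illustrative assumptions, not values from the article:

```nginx
server {
    listen      80;
    server_name example.com;
    root        /var/www/website;

    # Static assets are served directly by nginx.
    location /static/ {
        expires 7d;
    }

    # Everything else is reverse proxied to the backend application server.
    location / {
        proxy_pass       http://127.0.0.1:8080;
        proxy_set_header Host      $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Remember that, as a terminating proxy, each proxied request here uses two connections against the worker_connections limit.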

Interacting with GNU Octave: Variables

Packt
17 Jun 2011
8 min read
GNU Octave Beginner's Guide

In the following, we shall see how to instantiate simple variables. By simple variables, we mean scalars, vectors, and matrices. First, a scalar variable with name a is assigned the value 1 by the command:

    octave:1> a = 1
    a = 1

That is, you write the variable name, in this case a, and then you assign a value to the variable using the equal sign. Note that in Octave, variables are not instantiated with a type specifier as is known from C and other lower-level languages. Octave interprets a number as a real number unless you explicitly tell it otherwise. In Octave, a real number is a double-precision, floating-point number, which means that the number is accurate within the first 15 digits. (Single precision is accurate within the first 6 digits.) You can display the value of a variable simply by typing the variable name:

    octave:2> a
    a = 1

Let us move on and instantiate an array of numbers:

    octave:3> b = [1 2 3]
    b =

       1   2   3

Octave interprets this as the row vector (1 2 3) rather than a simple one-dimensional array. The elements (or the entries) in a row vector can also be separated by commas, so the command above could have been:

    octave:3> b = [1, 2, 3]
    b =

       1   2   3

To instantiate a column vector, you can use:

    octave:4> c = [1; 2; 3]
    c =

       1
       2
       3

Notice how each row is separated by a semicolon. We now move on and instantiate a matrix with two rows and three columns (a 2 x 3 matrix) using the following command:

    octave:5> A = [1 2 3; 4 5 6]
    A =

       1   2   3
       4   5   6

Notice that I use uppercase letters for matrix variables and lowercase letters for scalars and vectors, but this is, of course, a matter of preference, and Octave has no guidelines in this respect. It is important to note, however, that in Octave there is a difference between upper and lowercase letters. If we had used a lowercase a in Command 5 above, Octave would have overwritten the already existing variable instantiated in Command 1. Whenever you assign a new value to an existing variable, the old value is no longer accessible, so be very careful whenever reassigning new values to variables.

Variable names can be composed of characters, underscores, and numbers. A variable name cannot begin with a number. For example, a_1 is accepted as a valid variable name, but 1_a is not. In this article, we shall use the more general term array when referring to a vector or a matrix variable.

Accessing and changing array elements

To access the second element in the row vector b, we use parentheses:

    octave:6> b(2)
    ans = 2

That is, array indices start from 1. The output variable ans is an abbreviation for "answer" and is a variable in itself with a value, which is 2 in the above example. For the matrix variable A, we use, for example:

    octave:7> A(2,3)
    ans = 6

to access the element in the second row and the third column. You can access entire rows and columns by using a colon:

    octave:8> A(:,2)
    ans =

       2
       5

    octave:9> A(1,:)
    ans =

       1   2   3

Now that we know how to access the elements in vectors and matrices, we can change the values of these elements as well. Let us try to set the element A(2,3) to -10.1:

    octave:10> A(2,3) = -10.1
    A =

        1.0000    2.0000    3.0000
        4.0000    5.0000  -10.1000

Since one of the elements in A is now a non-integer number, all elements are shown in floating-point format.
The number of displayed digits can change depending on the default settings, but to Octave's interpreter there is no difference; it always uses double precision for all calculations unless you explicitly tell it not to. You can change the displayed format using format short or format long. The default is format short.

It is also possible to change the values of all the elements in an entire row by using the colon operator. For example, to substitute the second row in the matrix A with the vector b (from Command 3 above), we use:

    octave:11> A(2,:) = b
    A =

       1   2   3
       1   2   3

This substitution is valid because the vector b has the same number of elements as the rows in A. Let us try to mess things up on purpose and replace the second column in A with b:

    octave:12> A(:,2) = b
    error: A(I,J,...) = X: dimension mismatch

Here Octave prints an error message telling us that the dimensions do not match, because we wanted to substitute three numbers into an array with just two elements. Furthermore, b is a row vector, and we cannot replace a column with a row. Always read the error messages that Octave prints out; usually they are very helpful. There is an exception to the dimension mismatch shown above: you can always replace elements, entire rows, and columns with a scalar, like this:

    octave:13> A(:,2) = 42
    A =

        1   42    3
        1   42    3

More examples

It is possible to delete elements, entire rows, and columns, extend existing arrays, and much more.

Time for action – manipulating arrays

To delete the second column in A, we use:

    octave:14> A(:,2) = []
    A =

       1   3
       1   3

We can extend an existing array, for example:

    octave:15> b = [b 4 5]
    b =

       1   2   3   4   5

Finally, try the following commands:

    octave:16> d = [2 4 6 8 10 12 14 16 18 20]
    d =

        2    4    6    8   10   12   14   16   18   20

    octave:17> d(1:2:9)
    ans =

        2    6   10   14   18

    octave:18> d(3:3:12) = -1
    d =

        2    4   -1    8   10   -1   14   16   -1   20    0   -1

What just happened?

In Command 14, Octave interprets [] as an empty column vector, and column 2 in A is then deleted by the command. Instead of deleting a column, we could have deleted a row, for example:

    octave:14> A(2,:) = []

On the right-hand side of the equal sign in Command 15, we have constructed a new vector given by [b 4 5]; that is, if we write out b, we get [1 2 3 4 5], since b = [1 2 3]. Because of the equal sign, we assign the variable b to this vector and delete the existing value of b. Of course, we cannot extend b using b = [b; 4; 5], since this attempts to augment a column vector onto a row vector. Octave first evaluates the right-hand side of the equal sign and then assigns that result to the variable on the left-hand side. The right-hand side is named an expression.

In Command 16, we instantiated a row vector d, and in Command 17, we accessed the elements with indices 1, 3, 5, 7, and 9, that is, every second element starting from 1. Command 18 could have made you a bit concerned! d is a row vector with 10 elements, but the command instructs Octave to enter the value -1 into elements 3, 6, 9, and 12, that is, into an element that does not exist. In such cases, Octave automatically extends the vector (or array in general) and sets the value of the added elements to zero unless you instruct it to set a specific value. In Command 18, we only instructed Octave to set element 12 to -1, and the value of element 11 is therefore given the default value 0, as seen from the output.
In low-level programming languages, accessing non-existing or non-allocated array elements may result in a program crash the first time it is run. That is the best-case scenario. In a worse scenario, the program will work for years but then crash all of a sudden, which is rather unfortunate if it controls a nuclear power plant or a space shuttle. As you can see, Octave is designed to work in a vectorized manner. It is therefore often referred to as a vectorized programming language.

Complex variables

Octave also supports calculations with complex numbers. As you may recall, a complex number can be written as z = a + bi, where a is the real part, b is the imaginary part, and i is the imaginary unit defined by i^2 = -1. To instantiate a complex variable, say z = 1 + 2i, you can type:

    octave:19> z = 1 + 2I
    z = 1 + 2i

When Octave starts, the variables i, j, I, and J are all imaginary units, so you can use any one of them. I prefer using I for the imaginary unit, since i and j are often used as indices and J is not usually used to symbolize i. To retrieve the real and imaginary parts of a complex number, you use:

    octave:20> real(z)
    ans = 1

    octave:21> imag(z)
    ans = 2

You can also instantiate complex vectors and matrices, for example:

    octave:22> Z = [1 -2.3I; 4I 5+6.7I]
    Z =

       1.0000 + 0.0000i   0.0000 - 2.3000i
       0.0000 + 4.0000i   5.0000 + 6.7000i

Be careful! If an array element has non-zero real and imaginary parts, do not leave any blanks (space characters) between the two parts. For example, had we used Z = [1 -2.3I; 4I 5 + 6.7I] in Command 22, the last element would be interpreted as two separate elements (5 and 6.7i). This would lead to a dimension mismatch. The elements in complex arrays can be accessed in the same way as we have done for arrays composed of real numbers. You can use real(Z) and imag(Z) to print the real and imaginary parts of the complex array Z. (Try it out!)
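Following that suggestion, here is roughly what real(Z) and imag(Z) return for the matrix Z defined in Command 22; the exact display formatting may differ slightly between Octave versions:

```
octave:23> real(Z)
ans =

   1   0
   0   5

octave:24> imag(Z)
ans =

   0.0000  -2.3000
   4.0000   6.7000
```

Both functions operate element-wise, so the results are real 2 x 2 matrices.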

RESTful Web Services – Server-Sent Events (SSE)

Packt
15 Nov 2013
5 min read
Getting started

Generally, the flow of web services is initiated by the client, which sends a request for a resource to the server. This is the traditional way of consuming web services.

Traditional Flow

Here, the browser or Jersey client initiates the request for data from the server, and the server provides a response along with the data. Every time a client initiates a request for the resource, the server may not have new data to return. This becomes difficult in an application where real-time data needs to be shown: even though there is no new data on the server, the client needs to check for it every time.

Nowadays, there is a requirement that the server be able to send data without a client request. For this to happen, the client and server need to stay connected, so that the server can push data to the client. This is why the mechanism is termed Server-Sent Events. With these events, the connection created initially between the client and server is not released after the request; the server maintains the connection and pushes data to the respective client when required.

Server-Sent Event Flow

In the Server-Sent Event Flow diagram, a browser or a Jersey client initiates a request to establish a connection with the server using EventSource, while the server is always in listening mode for new connections. When a new connection from any EventSource is received, the server opens a new connection and maintains it in a queue. How a connection is maintained depends on the implementation of the business logic. SSE creates a single unidirectional connection, so only a single connection is established between the client and server.

After the connection is successfully established, the client is in listening mode for new events from the server. Whenever any new event occurs on the server side, the server broadcasts the event, along with the data, to the specific open HTTP connection. In modern browsers that support HTML5, the onmessage method of EventSource is responsible for handling new events received from the server; in the case of Jersey clients, we have the onEvent method of EventSource, which handles new events from the server.

Implementing Server-Sent Events (SSE)

To use SSE, we need to register SseFeature on both the client and server sides. By doing so, the client and server are wired up with SseFeature, which is used while data traverses the network.

SSE: Internal Working

In the SSE: Internal Working diagram, we assume that the client and server are connected. When any new event is generated, the server creates an OutboundEvent instance that is responsible for the chunked output, which in turn holds the data in a serialized format. OutboundEventWriter is responsible for serializing the data on the server side. We need to specify the media type of the data in OutboundEvent; there is no restriction to specific media types. On the client side, InboundEvent is responsible for handling the incoming data from the server. Here, InboundEvent receives the chunked input that contains the serialized data, and the data is deserialized using InboundEventReader. Using SseBroadcaster, we are able to broadcast events to multiple clients that are connected to the server.
Let's look at the example, which shows how to create SSE web services and broadcast events:

    @ApplicationPath("services")
    public class SSEApplication extends ResourceConfig {
        public SSEApplication() {
            super(SSEResource.class, SseFeature.class);
        }
    }

Here, we registered the SseFeature module and the SSEResource root-resource class with the server.

    private static final SseBroadcaster BROADCASTER = new SseBroadcaster();
    // ...
    @GET
    @Path("sseEvents")
    @Produces(SseFeature.SERVER_SENT_EVENTS)
    public EventOutput getConnection() {
        final EventOutput eventOutput = new EventOutput();
        BROADCASTER.add(eventOutput);
        return eventOutput;
    }
    // ...

In the SSEResource root class, we need to create a resource method that allows clients to establish the connection and persist it accordingly. Here, we keep the connection in the BROADCASTER instance of the SseBroadcaster class. EventOutput manages a specific client connection; SseBroadcaster is simply responsible for accommodating a group of EventOutput instances, that is, the client connections.

    // ...
    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public void post(@FormParam("name") String name) {
        BROADCASTER.broadcast(new OutboundEvent.Builder()
                .data(String.class, name)
                .build());
    }
    // ...

When the post method is consumed, we create a new event and broadcast it to the clients available in the BROADCASTER instance. The OutboundEvent is built using the Builder's data(...) method, which is initialized with a specific type and the actual data; we can provide any media type to send data. By using the build() method, the data is serialized internally with the OutboundEventWriter class. When broadcast(OutboundEvent) is called, SseBroadcaster internally pushes the data to all registered EventOutputs, that is, to all clients connected to SseBroadcaster.

At times, a client that has been connected gets disconnected after some time. In this case, SseBroadcaster automatically handles the client connection, that is, it determines whether the connection needs to be maintained. When any client connection is closed, the broadcaster detects the stale EventOutput and frees the connection and resources held by that EventOutput connection.

Summary

Thus we learned the difference between the traditional web service flow and the SSE web service flow. We also covered how to create SSE web services and implement the Jersey client in order to consume SSE using different programmatic models.
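The article mentions that Jersey clients consume SSE through the onEvent method of EventSource. As a rough sketch of what such a client could look like (assuming Jersey 2.x's org.glassfish.jersey.media.sse package and an illustrative endpoint URL; method names should be checked against the Jersey version in use):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import org.glassfish.jersey.media.sse.EventSource;
import org.glassfish.jersey.media.sse.InboundEvent;
import org.glassfish.jersey.media.sse.SseFeature;

public class SseClientSketch {
    public static void main(String[] args) throws InterruptedException {
        // Register SseFeature on the client side as well.
        Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();

        // The base URI and path are illustrative placeholders.
        WebTarget target = client.target("http://localhost:8080/services/sseEvents");

        // onEvent is called for every event pushed by the server.
        EventSource eventSource = new EventSource(target) {
            @Override
            public void onEvent(InboundEvent inboundEvent) {
                System.out.println("Received: " + inboundEvent.readData(String.class));
            }
        };

        Thread.sleep(60_000); // keep listening for a while
        eventSource.close();
        client.close();
    }
}
```

A browser client achieves the same thing with the HTML5 EventSource object and its onmessage handler, as noted above.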

Including Google Maps in your Posts Using Apache Roller 4.0

Packt
31 Dec 2009
5 min read
Google Maps, YouTube, and SlideShare

There are a lot of Internet and web services which you can use along with your blog to make your content more interesting for your viewers. With a good digital camera and video production software, you can make your own videos and presentations quickly and easily, and embed them in your posts with just a few clicks! For example, with Google Maps, you can add photos of your business to a custom map and post it in your blog to attract customers. There are a lot of possibilities, and it all depends on your creativity!

Including Google Maps in your posts

Using Google Maps in your blog is a good way of promoting your business, because you can show your visitors your exact location. Or you can blog about your favorite places and show them as if you were there, using your own photos.

Time for action – using Google Maps

There are a lot of things you can do with Google Maps; one of them is including maps of your favorite places in your blog, as we'll see in a moment:

1. Open your web browser and go to Google Maps (http://maps.google.com).
2. Type Eiffel Tower in the search textbox, and click on the Search Maps button or press Enter. Your web browser window will split in two areas, as shown in the following screenshot.
3. Click on the Eiffel Tower link at the bottom-left part of the screen to see the Eiffel Tower's exact position on the map in the right panel.
4. Now click on the Satellite button to see a satellite image of the Eiffel Tower.
5. Drag the Eiffel Tower upwards using your mouse, to center it in the map area.
6. Click on the Zoom here link inside the Eiffel Tower caption to see a closer image. If you look closely at the previous screenshot, you'll notice three links above the map: Print, Send, and Link.
7. Click on Link to open a small window.
8. Right-click on the Paste HTML to embed in website field to select the HTML code, and then click on the Copy option from the pop-up menu.
9. Open a new web browser window and log into your Roller weblog.
10. In the New Entry page, type The Eiffel Tower inside the Title field, and Eiffel Tower Google Maps inside the Tags field. Then click on the Toggle HTML Source button in the Rich Text editor toolbar, type This is an Eiffel Tower satellite image, courtesy of Google Maps:<br> inside the Content field, press Enter, and paste the code you copied from the Google Maps web page.
11. Scroll down the New Entry page and click on the Post to Weblog button. Then click on the Permalink field's URL to see your Google Maps image posted in your blog.
12. Click on the Zoom here link once to see a close-up of the Eiffel Tower, as shown in the following screenshot.

What just happened?

Now you can add Google Maps functionality inside your blog! Isn't that great? You just need to copy and paste the HTML code that Google Maps produces automatically for you. If you want a bigger or smaller map, you can click on the Customize and preview embedded map link to customize the HTML code that you're going to paste into your blog. Then you just copy the HTML code produced by Google Maps and paste it into your blog post.

If you have a Google Maps account, you can create customized maps and show them to your visitors, add photos and videos of places you've visited, and even write reviews about your favorite restaurants and hotels.
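For reference, after step 10 the post body entered through Toggle HTML Source ends up looking roughly like the snippet below. The iframe is only an illustrative placeholder; the actual markup comes straight from Google Maps' "Paste HTML to embed in website" field and will differ:

```html
This is an Eiffel Tower satellite image, courtesy of Google Maps:<br>
<!-- Paste the embed code copied from Google Maps here; it is typically an iframe such as: -->
<iframe width="425" height="350"
        src="https://maps.google.com/maps?q=Eiffel+Tower&output=embed"></iframe>
```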
Have a go hero – explore Google Maps

Now that you've seen how to embed Google Maps in your weblog, it would be a great idea to create your own Google Maps account and start exploring all the things that you can do: inserting photos and videos of places you've visited in your own custom maps, adding reviews of restaurants and other businesses in your locality, and so on. You can explore other users' maps, too. Once you get the hang of it, you'll be traveling around the world and meeting new people from your own PC.

Including YouTube videos in your posts

YouTube is one of the most popular video sharing websites in the world. You can include your favorite videos in your blog, or make your own videos and show them to your visitors. However, before you start complaining that we have already seen how to insert videos in your weblog, let me tell you that the big difference between uploading a video to your own blog server and uploading a video to a YouTube server is bandwidth.

When someone plays back a video from your weblog, the blog server transfers the video data to your visitor's web browser, so that he/she can begin to see it even before the video downloads completely. This is known as video streaming. Now, imagine you have 1,000 visitors, and each one of them is viewing the same video from your weblog! There would be a large amount of data flowing from your weblog to each visitor's web browser. That amount of data flowing from one PC to another is called bandwidth, and you would need a very broad connection to be capable of transmitting your video to all those visitors.

That's where YouTube comes into play: they have lots of bandwidth available for you and the other millions of users who share videos daily! So, if you plan to include a lot of videos in your weblog, it would be a great idea to get a YouTube account and start uploading them. In the following exercise, I'll show you how to include a YouTube video in your post, without having to upload it to your weblog.
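The exercise itself follows in the book, but the general idea of embedding rather than self-hosting can be illustrated with the kind of snippet YouTube provides under its share/embed options. The exact markup YouTube generates changes over time, so treat this only as an illustration, with VIDEO_ID standing in for a real video identifier:

```html
<p>Here is a video from my YouTube channel:</p>
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        frameborder="0" allowfullscreen></iframe>
```

Because the iframe points at YouTube's servers, the streaming bandwidth is carried by YouTube rather than by your blog's host.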

Server-side Swift: Building a Slack Bot, Part 2

Peter Zignego
13 Oct 2016
5 min read
In Part 1 of this series, I introduced you to SlackKit and Zewo, which allow us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Show Me the Swift Code!

Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic:

    import String
    import SlackKit

    class Leaderboard: MessageEventsDelegate {
        // A dictionary to hold our leaderboard
        var leaderboard: [String: Int] = [String: Int]()
        let atSet = CharacterSet(characters: ["@"])
        // A SlackKit client instance
        let client: SlackClient

        // Initialize the leaderboard with a valid Slack API token
        init(token: String) {
            client = SlackClient(apiToken: token)
            client.messageEventsDelegate = self
        }

        // Enum to hold commands the bot knows
        enum Command: String {
            case Leaderboard = "leaderboard"
        }

        // Enum to hold logic that triggers certain bot behaviors
        enum Trigger: String {
            case PlusPlus = "++"
            case MinusMinus = "--"
        }

        // MARK: MessageEventsDelegate
        // Listen to the messages that are coming in over the Slack RTM connection
        func messageReceived(message: Message) {
            listen(message: message)
        }

        func messageSent(message: Message) {}
        func messageChanged(message: Message) {}
        func messageDeleted(message: Message?) {}

        // MARK: Leaderboard Internal Logic
        private func listen(message: Message) {
            // If a message contains our bot's user ID and a recognized command, handle that command
            if let id = client.authenticatedUser?.id, text = message.text {
                if text.lowercased().contains(query: Command.Leaderboard.rawValue) && text.contains(query: id) {
                    handleCommand(command: .Leaderboard, channel: message.channel)
                }
            }
            // If a message contains a trigger value, handle that trigger
            if message.text?.contains(query: Trigger.PlusPlus.rawValue) == true {
                handleMessageWithTrigger(message: message, trigger: .PlusPlus)
            }
            if message.text?.contains(query: Trigger.MinusMinus.rawValue) == true {
                handleMessageWithTrigger(message: message, trigger: .MinusMinus)
            }
        }

        // Text parsing can be messy when you don't have Foundation...
        private func handleMessageWithTrigger(message: Message, trigger: Trigger) {
            if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) {
                let string = String(text.characters[start...end].dropLast().dropFirst())
                let users = client.users.values.filter { $0.id == self.userID(string: string) }
                // If the receiver of the trigger is a user, use their user ID
                if users.count > 0 {
                    let idString = userID(string: string)
                    initalizationForValue(dictionary: &leaderboard, value: idString)
                    scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger)
                // Otherwise just store the receiver value as is
                } else {
                    initalizationForValue(dictionary: &leaderboard, value: string)
                    scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger)
                }
            }
        }

        // Handle recognized commands
        private func handleCommand(command: Command, channel: String?)
{ switch command { case .Leaderboard: // Send message to the channel with the leaderboard attached if let id = channel { client.webAPI.sendMessage(channel:id, text: "Leaderboard", linkNames: true, attachments: [constructLeaderboardAttachment()], success: {(response) in }, failure: { (error) in print("Leaderboard failed to post due to error:(error)") }) } } } privatefuncinitalizationForValue(dictionary: inout [String: Int], value: String) { if dictionary[value] == nil { dictionary[value] = 0 } } privatefuncscoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) { switch trigger { case .PlusPlus: dictionary[value]?+=1 case .MinusMinus: dictionary[value]?-=1 } } // MARK: Leaderboard Interface privatefuncconstructLeaderboardAttachment() -> Attachment? { let Great! But we’ll need to replace the dummy API token with the real deal before anything will work. Getting an API Token We need to create a bot integration in Slack. You’ll need a Slack instance that you have administrator access to. If you don’t already have one of those to play with, go sign up. Slack is free for small teams: Create a new bot here. Enter a name for your bot. I’m going to use “leaderbot”. Click on “Add Bot Integration”. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it. Testing 1,2,3… Now that we have our API token, we’re ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot’s activity indicator should show that it’s online. It’s alive! To ensure that it’s working, we should give our helpful little bot some karma points: @leaderbot++ And ask it to see the leaderboard: @leaderbot leaderboard Head in the Clouds Now that we’ve verified that our leaderboard bot works locally, it’s time to deploy it. We are deploying on Heroku, so if you don’t have an account, go and sign up for a free one. First, we need to add a Procfile for Heroku. Back in the terminal, run: echo slackbot: .build/debug/leaderbot > Procfile Next, let’s check in our code: git init git add . git commit -am’leaderbot powering up’ Finally, we’ll setup Heroku: Install the Heroku toolbelt. Log in to Heroku in your terminal: heroku login Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot Set up our Heroku remote: heroku git:remote -a leaderbot Push to master: git push heroku master Once you push to master, you’ll see Heroku going through the process of building your application. Launch! When the build is complete, all that’s left to do is to run our bot: heroku run:detached slackbot Like when we tested locally, our bot should become active and respond to our commands! You’re Done! Congratulations, you’ve successfully built and deployed a Slack bot written in Swift onto a Linux server! Built With: Jay: Pure-Swift JSON parser and formatterkylef’s Heroku buildpack for Swift Open Swift: Open source cross project standards for Swift SlackKit: A Slack client library Zewo: Open source libraries for modern server software Disclaimer The linux version of SlackKit should be considered an alpha release. It’s a fun tech demo to show what’s possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across. About the author Peter Zignego is an iOS developer in Durham, North Carolina, USA. 
He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.

Managing Image Content in Joomla!

Packt
23 Feb 2010
3 min read
Creating image galleries and slideshows

Joomla! is a Content Management System designed primarily for organizing and managing website content. It has numerous multimedia features built into it, but its main focus is on providing two things: powerful CMS features, and a well-designed framework that allows additional features to be added to the system, creating powerful functionality. These additional features are called Extensions and are designed to plug into the core Joomla! Framework and extend its functionality.

With regard to the core Joomla! CMS, we have already looked at how images can be included in content items and modules. However, image galleries and slideshows ask a bit more of the system than simple image management, and so require the power of extensions to provide these features.

The number of multimedia extensions now available for Joomla! is staggering. We have extensions that can create complex galleries, stream in videos, and compile jukebox-type audio features. Having considered the best approach at great length, we have settled on one option: to highlight some of the most popular and useful image gallery and slideshow extensions, in the hope that these will give you an understanding of the complex image management capability that can be achieved by using Joomla! Extensions.

Image management extensions, and how to install them

Before proceeding to cover third-party extension functionality, let's quickly cover how image extensions are added to your Joomla! site. As with most things in Joomla!, the process of installing extensions has had careful consideration from the development team, and is very easy to perform. Most Joomla! Extensions come with their own installation instructions, and these general guidelines will help you get them installed on your site. Before installing anything, make sure you copy your file set and back up your site database.

For the purpose of this example, I am going to install one of my favorite Joomla! multimedia extensions—the RokBox extension by RocketTheme. RokBox is a MooTools-powered slideshow plugin that enables you to display numerous media formats with a professional pop-up lightbox effect.

The installation steps are as follows:

Download the required extension and save the download on your computer. The file will be in a .zip format, or possibly a .gzip or .tgz format, which may require unzipping before installation.
Log into your Joomla! Administrator's Control Panel, and navigate to the Extensions menu, then to Extension Manager. The page will default to the Install screen.
Browse your computer for the extension .zip file using the Choose File button on this page and, once the file is located, use the Upload File & Install button. The Joomla! Installation Manager will perform all of the work and, if your extension has been built correctly, the installer will place all files and documents into their correct directories.
An installation success message will show to inform you that the extension has been successfully installed. Some extensions' success text may contain additional links to configuration pages for the extension.

Schema Validation using SAX and DOM Parser with Oracle JDeveloper - XDK 11g

Packt
08 Oct 2009
6 min read
The choice of validation method depends on the additional functionality required in the validation application. SAXParser is recommended if SAX parsing event notification is required in addition to validation with a schema. DOMParser is recommended if the DOM tree structure of an XML document is required for random access and modification of the XML document.

Schema validation with a SAX parser

In this section we shall validate the example XML document catalog.xml with the XML schema document catalog.xsd, using the SAXParser class. Import the oracle.xml.parser.schema and oracle.xml.parser.v2 packages.

Creating a SAX parser

Create a SAXParser object and set the validation mode of the SAXParser object to SCHEMA_VALIDATION, as shown in the following listing:

SAXParser saxParser = new SAXParser();
saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);

The different validation modes that may be set on a SAXParser are listed below; we only need the schema-based validation modes:

NONVALIDATING: The parser does not validate the XML document.
PARTIAL_VALIDATION: The parser validates the complete or a partial XML document with a DTD or an XML schema, if specified.
DTD_VALIDATION: The parser validates the XML document with a DTD, if any.
SCHEMA_VALIDATION: The parser validates the XML document with an XML schema, if one is specified.
SCHEMA_LAX_VALIDATION: Validates the complete or partial XML document with an XML schema if the parser is able to locate a schema. The parser does not raise an error if a schema is not found.
SCHEMA_STRICT_VALIDATION: Validates the complete XML document with an XML schema if the parser is able to find a schema. If the parser is not able to find a schema, or if the XML document does not conform to the schema, an error is raised.

Next, create an XMLSchema object from the schema document with which an XML document is to be validated. An XMLSchema object represents the DOM structure of an XML schema document and is created with an XSDBuilder class object. Create an XSDBuilder object and invoke the build(InputSource) method of the XSDBuilder object to obtain an XMLSchema object. The InputSource object is created from an InputStream object created from the example XML schema document, catalog.xsd. As discussed before, we have used an InputSource object because most SAX implementations are InputSource based. The procedure to obtain an XMLSchema object is shown in the following listing:

XSDBuilder builder = new XSDBuilder();
InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
InputSource inputSource = new InputSource(inputStream);
XMLSchema schema = builder.build(inputSource);

Set the XMLSchema object on the SAXParser object with the setXMLSchema(XMLSchema) method:

saxParser.setXMLSchema(schema);

Setting the error handler

As in the previous section, define an error handling class, CustomErrorHandler, that extends the DefaultHandler class. Create an object of type CustomErrorHandler, and register the ErrorHandler object with the SAXParser as shown here:

CustomErrorHandler errorHandler = new CustomErrorHandler();
saxParser.setErrorHandler(errorHandler);

Validating the XML document

The SAXParser class extends the XMLParser class, which provides the following overloaded parse methods to parse an XML document:

parse(InputSource in): Parses an XML document from an org.xml.sax.InputSource object. The InputSource-based parse method is the preferred method, because SAX parsers convert the input to InputSource no matter what the input type is.
parse(java.io.InputStream in): Parses an XML document from an InputStream.
parse(java.io.Reader r): Parses an XML document from a Reader.
parse(java.lang.String in): Parses an XML document from a String URL for the XML document.
parse(java.net.URL url): Parses an XML document from the specified URL object for the XML document.

Create an InputSource object from the XML document to be validated, and parse the XML document with the parse(InputSource) method:

InputStream inputStream = new FileInputStream(new File("catalog.xml"));
InputSource inputSource = new InputSource(inputStream);
saxParser.parse(inputSource);

Running the Java application

The validation application SAXValidator.java is listed below with additional explanations.

First we declare the import statements for the classes that we need.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import oracle.xml.parser.schema.*;
import oracle.xml.parser.v2.*;
import java.io.IOException;
import java.io.InputStream;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.InputSource;

We define the Java class SAXValidator for SAX validation.

public class SAXValidator {

In the Java class we define a method validateXMLDocument.

public void validateXMLDocument(InputSource input) {
  try {

In the method we create a SAXParser and set the XML schema on the SAXParser.

    SAXParser saxParser = new SAXParser();
    saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
    XMLSchema schema = getXMLSchema();
    saxParser.setXMLSchema(schema);

To handle errors we create a custom error handler. We set the error handler on the SAXParser object, parse the XML document to be validated, and output validation errors, if any.

    CustomErrorHandler errorHandler = new CustomErrorHandler();
    saxParser.setErrorHandler(errorHandler);
    saxParser.parse(input);
    if (errorHandler.hasValidationError == true) {
      System.err.println("XML Document has Validation Error:" + errorHandler.saxParseException.getMessage());
    } else {
      System.out.println("XML Document validates with XML schema");
    }
  } catch (IOException e) {
    System.err.println("IOException " + e.getMessage());
  } catch (SAXException e) {
    System.err.println("SAXException " + e.getMessage());
  }
}

We add the Java method getXMLSchema to create an XMLSchema object.

XMLSchema getXMLSchema() {
  try {
    XSDBuilder builder = new XSDBuilder();
    InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
    InputSource inputSource = new InputSource(inputStream);
    XMLSchema schema = builder.build(inputSource);
    return schema;
  } catch (XSDException e) {
    System.err.println("XSDException " + e.getMessage());
  } catch (FileNotFoundException e) {
    System.err.println("FileNotFoundException " + e.getMessage());
  }
  return null;
}

We define the main method, in which we create an instance of the SAXValidator class and invoke the validateXMLDocument method.

public static void main(String[] argv) {
  try {
    InputStream inputStream = new FileInputStream(new File("catalog.xml"));
    InputSource inputSource = new InputSource(inputStream);
    SAXValidator validator = new SAXValidator();
    validator.validateXMLDocument(inputSource);
  } catch (FileNotFoundException e) {
    System.err.println("FileNotFoundException " + e.getMessage());
  }
}

Finally, we define the custom error handler class as an inner class, CustomErrorHandler, to handle validation errors.

private class CustomErrorHandler extends DefaultHandler {
  protected boolean hasValidationError = false;
  protected SAXParseException saxParseException = null;

  public void error(SAXParseException exception) {
    hasValidationError = true;
    saxParseException = exception;
  }

  public void fatalError(SAXParseException exception) {
    hasValidationError = true;
    saxParseException = exception;
  }

  public void warning(SAXParseException exception) {
  }
}
}

Copy the SAXValidator.java application to SAXValidator.java in the SchemaValidation project. To demonstrate error handling, add a title element to the journal element. To run the SAXValidator.java application, right-click on SAXValidator.java in the Application Navigator, and select Run. A validation error gets output. The validation error indicates that the title element is not expected.
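The article's intro also mentions the DOM-based alternative. As a rough illustration only (not the article's own listing), the following sketch shows how the same catalog.xml and catalog.xsd might be validated with DOMParser; it assumes that DOMParser inherits setValidationMode, setXMLSchema, and parse(InputSource) from XMLParser in the same way SAXParser does, so check the XDK Javadoc before relying on the exact signatures.

import java.io.File;
import java.io.FileInputStream;
import oracle.xml.parser.schema.XMLSchema;
import oracle.xml.parser.schema.XSDBuilder;
import oracle.xml.parser.v2.DOMParser;
import oracle.xml.parser.v2.XMLDocument;
import oracle.xml.parser.v2.XMLParser;
import org.xml.sax.InputSource;

public class DOMValidator {
  public static void main(String[] argv) throws Exception {
    // Build the schema object exactly as in the SAX example
    XSDBuilder builder = new XSDBuilder();
    XMLSchema schema = builder.build(new InputSource(new FileInputStream(new File("catalog.xsd"))));

    // Configure a DOM parser for schema validation
    DOMParser domParser = new DOMParser();
    domParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
    domParser.setXMLSchema(schema);

    // Parsing validates the document; validation failures are raised as SAX exceptions
    domParser.parse(new InputSource(new FileInputStream(new File("catalog.xml"))));

    // Unlike the SAX parser, the DOM parser keeps the parsed tree for further processing
    XMLDocument document = domParser.getDocument();
    System.out.println("Validated document with root element: " + document.getDocumentElement().getNodeName());
  }
}

The trade-off is the one stated at the start of this section: the DOM approach holds the whole tree in memory for random access, while the SAX approach streams events and is lighter on memory.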

Getting Started with Moodle 2.0 for Business

Packt
25 Apr 2011
12 min read
(For more resources on Moodle, see here.)

So let's get on with it...

Why Moodle?

Moodle is an open source Learning Management System (LMS) used by universities, K-12 schools, and both small and large businesses to deliver training over the Web. The Moodle project was created by Martin Dougiamas, a computer scientist and educator, who started as an LMS administrator at a university in Perth, Australia. He grew frustrated with the system's limitations as well as the closed nature of the software, which made it difficult to extend. Martin started Moodle with the idea of building the LMS based on learning theory, not software design. Moodle is based on five learning ideas:

All of us are potential teachers as well as learners—in a true collaborative environment we are both
We learn particularly well from the act of creating or expressing something for others to see
We learn a lot by just observing the activity of our peers
By understanding the contexts of others, we can teach in a more transformational way
A learning environment needs to be flexible and adaptable, so that it can quickly respond to the needs of the participants within it

With these five points as reference, the Moodle developer community has developed an LMS with the flexibility to address a wider range of business issues than most closed source systems. Throughout this article we will explore new ways to use the social features of Moodle to create a learning platform that delivers real business value.

Moodle has seen explosive growth over the past five years. In 2005, as Moodle began to gain traction in higher education, there were under 3,000 Moodle sites around the world. As of this writing in July 2010, there were 51,000 Moodle sites registered with Moodle.org. These sites hosted 36 million users in 214 countries. The latest statistics on Moodle use are always available at the Moodle.org site (http://moodle.org/stats).

As Moodle has matured as a learning platform, many corporations have found they can save money and provide critical training services with Moodle. According to the eLearning Guild 2008 Learning Management System survey, Moodle's initial cost to acquire, install, and customize was $16.77 per learner. The initial cost per learner for SAP was $274.36, while Saba was $79.20, and Blackboard $39.06. Moodle's open source licensing provides a considerable cost advantage over traditional closed source learning management systems. For the learning function, these savings can be translated into increased course development, more training opportunities, or other innovation. Or they can be passed back to the organization's bottom line. As Jim Whitehurst, CEO of RedHat, states: "What's sold to customers better than saying 'We can save you money' is to show them how we can give you more functionality within your budget."

With training budgets among the first to be cut during a downturn, using Moodle can enable your organization to move costs from software licensing to training development, support, and performance management; activities that impact the bottom line. Moodle's open source licensing also makes customization and integration easier and cheaper than proprietary systems.
Moodle has built-in tools for integrating with backend authentication tools, such as Active Directory or OpenLDAP, enrollment plugins to take a data feed from your HR system to enroll people in courses, and a web services library to integrate with your organization's other systems. Some organizations choose to go further, customizing individual modules to meet their unique needs. Others have added components for unique tracking and reporting, including development of a full data warehouse.

Moodle's low cost and flexibility have encouraged widespread adoption in the corporate sector. According to the eLearning Guild LMS survey, Moodle went from a 6.8% corporate LMS market share in 2007 to a 19.8% market share in 2008. While many of these adopters are smaller companies, a number of very large organizations, including AA Ireland, OpenText, and other Fortune 500 companies, use Moodle in a variety of ways. According to the survey, the industries with the greatest adoption of Moodle include aerospace and defense companies, consulting companies, e-learning tool and service providers, and the hospitality industry.

Why open source?

Moodle is freely available under the General Public License (GPL). Anyone can go to Moodle.org and download Moodle, run it on any server for as many users as they want, and never pay a penny in licensing costs. The GPL also ensures that you will be able to get the source code for Moodle with every download, and have the right to share that code with others. This is the heart of the open source value proposition. When you adopt a GPL product, you have the right to use that product in any way you see fit, and the right to redistribute that product as long as you let others do the same.

Moodle's open source license has other benefits beyond simply cost. Forrester recently conducted a survey of 132 senior business and IT executives from large companies using open source software. Of the respondents, 92% said open source software met or exceeded their quality expectations, while meeting or exceeding their expectations for lower costs.

Many organizations go through a period of adjustment when making a conscious decision to adopt an open source product. Most organizations start using open source solutions for simple applications, or deep in their network infrastructure. Common open source applications in the data center include file serving, e-mail, and web servers. Once the organization develops a level of comfort with open source, they begin to move open source into mission-critical and customer-facing applications. Many organizations use an open source content management system like Drupal or Alfresco to manage their web presence. Open source databases and middleware, like MySQL and JBoss, are common in application development and have proven themselves reliable and robust solutions.

Companies adopt open source software for many reasons. The Forrester survey suggests open standards, no usage restrictions, lack of vendor lock-in, and the ability to use the software without a license fee as the most important reasons many organizations adopt open source software. On the other side of the coin, many CTOs worry about commercial support for their software. Fortunately, there is an emerging ecosystem of vendors who support a wide variety of open source products and provide critical services. There seem to be as many models of open source business as there are open source projects. A number of different support models have sprung up in the last few years.
Moodle is supported by the Moodle Partners, a group of 50 companies around the world who provide a range of Moodle services. Services offered range from hosting and support to training, instructional design, and custom code development. Each of the partners provides a portion of its Moodle revenue back to the Moodle project to ensure the continued development of the shared platform. In the same way, Linux is developed by a range of commercial companies, including RedHat and IBM, who both share some development and compete with each other for business.

While many of the larger packages, like Linux and JBoss, have large companies behind them, there is a range of products without clear avenues for support. However, the lack of licensing fees makes them easy to pilot. As we will explore in a moment, you can have a full Moodle server up and running on your laptop in under 20 minutes. You can use this to pilot your solutions, develop your content, and even host a small number of users. Once you are done with the pilot, you can move the same Moodle setup to its own server and roll it out to the whole organization.

If you decide to find a vendor to support your Moodle implementation, there are a few key questions to ask:

How long have they been in business?
How experienced is the staff with the products they are supporting?
Are they an official Moodle partner?
What is the organization's track record? How good are their references?
What is their business model for generating revenue? What are their long-term prospects?
Do they provide a wide range of services, including application development, integration, consulting, and software life-cycle management?

Installing Moodle for experimentation

As Kenneth Grahame's character the Water Rat said in The Wind in the Willows, "Believe me, my young friend, there is nothing—absolutely nothing—half so much worth doing as simply messing about in boats." One of the best tools for learning about Moodle is an installation where you can "mess about" without worrying about the impact on other people. Learning theory tells us we need to spend many hours practicing in a safe environment to become proficient. The authors of this book have collectively spent more than 5,000 hours experimenting, building, and messing about with Moodle.

There is much to be said for having the ability to play around with Moodle without worrying about other people seeing what you are doing, even after you go live with your Moodle solution. When dealing with some of the more advanced features, like permissions and conditional activities, you will need to be able to log in with multiple roles to ensure you have the options configured properly. If you make a mistake on a production server, you could create a support headache. Having your own sandbox provides that safe place.

So we are going to start your Moodle exploration by installing Moodle on your personal computer. If your corporate policy prohibits you from installing software on your machine, discuss getting a small area on a server set up for Moodle. The installation instructions below will work on either your laptop, personal computer, or a server.

Time for action — download and run the Moodle installer

If you have Windows or a Mac, you can download a full Moodle installer, including the web server, database, and PHP. All of these components are needed to run Moodle, and installing them individually on your computer can be tedious. Fortunately, the Moodle community has created full installers based on the XAMPP package.
A single double-click on the install package will install everything you need.

To install Moodle on Windows:

Point your browser to http://download.moodle.org/windows and download the package to your desktop. Make sure you download the latest stable version of Moodle 2, to take advantage of the features we discuss in this article.
Unpack the archive by double-clicking on the ZIP file. It may take a few minutes to finish unpacking the archive.
Double-click the Start Moodle.exe file to start up the server manager.
Open your web browser and go to http://localhost. You will then need to configure Moodle on your system. Follow the prompts for the next three steps.
After successfully configuring Moodle, you will have a fully functioning Moodle site on your machine. Use the stop and start applications to control when Moodle runs on your site.

To install Moodle on Mac:

Point your browser to http://download.moodle.org/macosx and find the packages for the latest version of Moodle 2. You have two choices of installers: XAMPP is a smaller download, but the control interface is not as refined as MAMP. Download either package to your computer (the directions here are for the MAMP package).
Open the .dmg file and drag the Moodle application to your Applications folder.
Open the MAMP application folder in your Applications folder.
Double-click the MAMP application to start the web server and database server.
Once MAMP is up and running, double-click the Link To Moodle icon in the MAMP folder.
You now have a fully functioning Moodle site on your machine. To shut down the site, quit the MAMP application. To run your Moodle site in the future, open the MAMP application and point your browser to http://localhost:8888/moodle.

Once you have downloaded and installed Moodle, for both systems, follow these steps:

Once you have the base system configured, you will need to set up your administrator account. The Moodle admin account has permissions to do anything on the site, and you will need this account to get started. Enter a username, password, and the other required information to create an account.
A XAMPP installation on Mac or Windows also requires you to set up the site's front page. Give your site a name and hit Save changes. You can come back later and finish configuring the site.

What just happened?

You now have a functioning Moodle site on your laptop for experimentation. To start your Moodle server, double-click on StartMoodle.exe and point your browser at http://localhost. Now we can look at a Moodle course and begin to explore Moodle functionality. Don't worry about how we will use this functionality now, just spend some time getting to know the system.

Reflection

You have just installed Moodle on a server or a personal computer, for free. You can use Moodle with as many people as you want, for whatever purpose you choose, without licensing fees. Some points for reflection:

What collaboration / learning challenges do you have in your organization?
How can you use the money you save on licensing fees to innovate to meet those challenges?
Are there other ways you can use Moodle to help your organization meet its goals which would not have been cost effective if you had to pay a license fee for the software?

Managing Discounts, Vouchers, and Referrals with PHP 5 Ecommerce

Packt
21 Jan 2010
3 min read
Discount codes

Discount codes are a great way to both entice new customers into a store and help retain customers with special discounts. The discount code should work by allowing the customer to enter a code, which will then be verified by the store, and then a discount will be applied to the order. The following are discount options we may wish to have available in our store:

A fixed amount deducted from the cost of the order
A fixed percentage deducted from the cost of the order
The shipping cost altered, either to free or to a lower amount
Product-based discounts (although we won't cover this one in the article)

It may also be useful to take into account the cost of the customer's basket; after all, if we have a $5 discount code, we probably wouldn't want that to apply to orders of $5 or lower, and we may wish to apply a minimum order amount.

Discount codes data

When storing discount codes in the framework, we need to store and account for:

The voucher code itself, so that we can check that the customer is entering a valid code
Whether the voucher code is active, as we may wish to prepare some voucher codes but not have them usable until a certain time, or we may wish to discontinue a code
A minimum value for the customer's basket, either as an incentive for the customer to purchase more or to prevent loss-making situations (for example, a $10 discount on a $5 purchase!)
The type of discount: percentage (the discount amount is a percentage to be removed from the cost), fixed amount deducted (the discount amount is a fixed amount to be removed from the order total), or fixed amount set to shipping (the discount amount is the new value for the shipping cost)
The discount amount; that is, the amount of discount to be applied
The number of vouchers issued, if we wish to limit the number of uses of a particular voucher code
An expiry date, so that if we wish to have the voucher code expire, codes with a date after the stored expiry date would no longer work

Discount codes database

The following fields represent this information within a database table. The default value for num_vouchers is -1, which we will use for vouchers that are not limited to a set number of issues.

ID: Integer (Primary Key, Auto increment) – for the framework to reference the code
vouchercode: Varchar – the code the customer enters into the order
active: Boolean – whether the code can be used
min_basket_cost: Float – the minimum cost of the customer's basket for the code to work for them
discount_operation: ENUM('-', '%', 's') – the type of discount
discount_amount: Float – the amount of discount to be applied
num_vouchers: Integer – the number of times the voucher can be used
expiry: Timestamp – the date the voucher code expires and is no longer usable

The following code creates this table in our database:

CREATE TABLE `discount_codes` (
  `ID` INT( 11 ) NOT NULL AUTO_INCREMENT ,
  `vouchercode` VARCHAR( 25 ) NOT NULL ,
  `active` TINYINT( 1 ) NOT NULL ,
  `min_basket_cost` FLOAT NOT NULL ,
  `discount_operation` ENUM( '-', '%', 's' ) NOT NULL ,
  `discount_amount` FLOAT NOT NULL ,
  `num_vouchers` INT( 11 ) NOT NULL DEFAULT '-1',
  `expiry` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
  PRIMARY KEY ( `ID` )
) ENGINE = INNODB DEFAULT CHARSET = latin1 AUTO_INCREMENT = 1;
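To see how these fields would work together at checkout, here is a sketch of the kind of lookup the store could run when a customer submits a code. The column names come from the table above, but the query itself and the :code and :basket_total placeholders are illustrative assumptions, not part of the framework's own code.

-- Find a usable voucher for the submitted code and the current basket total
SELECT ID, discount_operation, discount_amount
FROM discount_codes
WHERE vouchercode = :code                         -- the code the customer typed in
  AND active = 1                                  -- the code has not been disabled
  AND min_basket_cost <= :basket_total            -- the basket is large enough
  AND expiry > NOW()                              -- the code has not expired
  AND (num_vouchers = -1 OR num_vouchers > 0);    -- -1 means unlimited uses remain

If a row comes back, the application would apply the discount according to discount_operation and, for limited vouchers, decrement num_vouchers; if no row comes back, the customer is told the code is invalid.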

Fronting an external API with Ruby on Rails: Part 1

Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant?

In this two-part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4. I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.

Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a test directory, while RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.

Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.

Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.

Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series we have created the basic route and controller for Noterizer, which is a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.

Introducing a feature of IntroJs

Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.)

API

IntroJs includes functions that let the user control and change the execution of the introduction. For example, it is possible to react to an unexpected event that happens during execution, or to change the introduction routine according to user interactions. All available APIs in IntroJs are explained below; these functions will be extended and developed further in the future. IntroJs includes these API functions:

start
goToStep
exit
setOption
setOptions
oncomplete
onexit
onchange
onbeforechange

introJs.start()

As mentioned before, introJs.start() is the main function of IntroJs. It lets the user start the introduction for the specified elements and get an instance of the introJS class. The introduction will start from the first step in the specified elements. This function has no arguments and returns an instance of the introJS class.

introJs.goToStep(stepNo)

Jump to a specific step of the introduction by using this function. Introductions always start from the first step; however, it is possible to change this by using this function. The goToStep function has an integer argument that accepts the number of the step in the introduction.

introJs().goToStep(2).start(); //starts introduction from step 2

As the example indicates, first the default starting step is changed from 1 to 2 by using the goToStep function, and then the start() function is called. Hence, the introduction will start from the second step. Finally, this function returns the introJS class's instance.

introJs.exit()

The introJS.exit() function lets the user exit and close the running introduction. By default, the introduction ends when the user clicks on the Done button or goes to the last step of the introduction.

introJs().exit()

As shown, the exit() function doesn't have any arguments and returns an instance of introJS.

introJs.setOption(option, value)

As mentioned before, IntroJs has some default options that can be changed by using the setOption method. This function has two arguments: the first one specifies the option name and the second one sets its value.

introJs().setOption("nextLabel", "Go Next");

In the preceding example, nextLabel is set to Go Next. Other options can be changed in the same way with the setOption method.

introJs.setOptions(options)

It is possible to change a single option using the setOption method. However, to change more than one option at once, use setOptions instead. The setOptions method accepts the options and values in JSON format.

introJs().setOptions({ skipLabel: "Exit", tooltipPosition: "right" });

In the preceding example, two options are set at the same time by using JSON and the setOptions method.

introJs.oncomplete(providedCallback)

The oncomplete event is raised when the introduction ends. If a function is passed to the oncomplete method, it will be called by the library after the introduction ends.

introJs().oncomplete(function() { alert("end of introduction"); });

In this example, after the introduction ends, the anonymous function that is passed to the oncomplete method will be called, and an alert with the end of introduction message will appear.

introJs.onexit(providedCallback)

As mentioned before, the user can exit the running introduction using the Esc key or by clicking on the dark area in the introduction. The onexit event tells you when the user exits from the introduction. This function accepts one argument and returns the instance of the running introJS.

introJs().onexit(function() { alert("exit of introduction"); });

In the preceding example, we passed an anonymous function to the onexit method with an alert() statement. If the user exits the introduction, the anonymous function will be called and an alert with the message exit of introduction will appear.

introJs.onchange(providedCallback)

The onchange event is raised at each step of the introduction. This method is useful for knowing when each step of the introduction is completed.

introJs().onchange(function(targetElement) { alert("new step"); });

You can define an argument for the anonymous function (targetElement in the preceding example), and when the function is called, you can use that argument to access the current target element that is highlighted in the introduction. In the preceding example, when each introduction step ends, an alert with the new step message will appear.

introJs.onbeforechange(providedCallback)

Sometimes you may need to do something before each step of the introduction. Consider that you need to make an Ajax call before the user goes to a step of the introduction; you can do this with the onbeforechange event.

introJs().onbeforechange(function(targetElement) { alert("before new step"); });

We can also define an argument for the anonymous function (targetElement in the preceding example); when this function is called, the argument gets information about the currently highlighted element in the introduction. Using that argument, you can know which step of the introduction will be highlighted, what the type of the target element is, and more. In the preceding example, an alert with the message before new step will appear before each step of the introduction is highlighted.

Summary

In this article we learned about the API functions, their syntaxes, and how they are used.

Resources for Article:

Further resources on this subject:

ASP.Net Site Performance: Improving JavaScript Loading [Article]
Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
Making a Better Form using JavaScript [Article]

Building a Site Directory with SharePoint Search

Packt
17 Jul 2012
8 min read
(For more resources on Microsoft SharePoint, see here.)

Site Directory options

There are two main approaches to providing a Site Directory feature:

A central list that has to be maintained
A search-based tool that can provide the information dynamically

List-based Site Directory

With a list-based Site Directory, a list is provisioned in a central site collection, such as the root of a portal or intranet. Like all lists, site columns can be defined to help describe the site's metadata. Since it is stored in a central list, the information can easily be queried, which makes it easy to show a listing of all sites and perform filtering, sorting, and grouping, like all SharePoint lists.

It is important to consider the overall site topology within the farm. If everything of relevance is stored within a single site collection, a list-based Site Directory, accessible throughout that site collection, may be easy to implement. But as soon as you have a large number of site collections or web applications, you will no longer be able to easily use that Site Directory without creating custom solutions that can access the central content and display it on those other sites. In addition, you will need to ensure that all users have access to read from that central site and list.

Another downside to this approach is that the list-based Site Directory has to be maintained to be effective, and in many cases it is very difficult to keep up with this. It is possible to add new sites to the directory programmatically, using an event receiver, or as part of a process that automates the site creation. However, through the site's life cycle, changes will inevitably have to be made, and in many cases sites will be retired, archived, or deleted. While this approach tends to work well in small, centrally controlled environments, it does not work well at all in most large, distributed environments, where the number of sites is expected to be larger and the rate of change is typically more frequent.

Search-based site discovery

An alternative to the list-based Site Directory is completely dynamic site discovery based on the search system. In this case the content is completely dynamic and requires no specific maintenance. As sites are created, updated, or removed, the changes will be reflected in the index as the scheduled crawls complete. For environments with a large number of sites, and a high frequency of new sites being created, this is the preferred approach. The content can also be accessed throughout the environment without having to worry about site collection boundaries, and can be leveraged using out-of-the-box features.

The downside to this approach is that there is a limit to the metadata you can associate with the site. Standard metadata related to the site includes the site's name, description, URL, and, to a lesser extent, the managed path used to configure the site collection. From these items you can infer keyword relevance, but there is no support for extended properties that can help correlate the site with categories, divisions, or other specific attributes.

How to leverage search

Most users are familiar with how to use the Search features to find content, but are not familiar with some of the capabilities that can help them pinpoint specific content or specific types of content. This section provides an overview of how to leverage search to help users find results that are related only to sites.

Content classes

SharePoint Search includes an object classification system that can be used to identify specific types of items, as shown in the following list. The classification is stored in the index as a property of the item, making it available for all queries.

STS_Site: Site Collection objects
STS_Web: Subsite/Web objects
STS_list_[templatename]: List objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary
STS_listitem_[templatename]: List Item objects, where [templatename] is the name of the template, such as Announcements or DocumentLibrary
SPSPeople: User Profile objects (requires a User Profile Service Application)

The contentclass property can be included as part of an ad hoc search performed by a user, included in the search query within a customization, or, as we will see in the next section, used to provide a filter to a Search Scope.

Search Scopes

Search Scopes provide a way to filter down the entire search index. As the index grows and is filled with potentially similar information, it can be helpful to define Search Scopes to put a specific set of rules in place to reduce the initial index that the search query is executed against. This allows you to execute a search within a specific context. The rules can be set based on the specific location, specific property values, or the crawl source of the content.

Search Scopes can be defined either centrally within the Search service application by an administrator, or within a given Site Collection by a Site Collection administrator. If the scope is going to be used in multiple Site Collections, it should be defined in the Search service application. Once defined, it is available in the Search Scopes dropdown box for any ad hoc queries, within custom code, or within the Search Web Parts.

Defining the Site Directory Search Scope

To support dynamic discovery of the sites, we will configure a Search Scope that looks at just site collections and subsites. As we saw above, this will enable us to separate out the site objects from the rest of the content in the search index. This Search Scope will serve as the foundation for all of the solutions in this article.

To create a custom Search Scope:

Navigate to the Search Service Application.
Click on the Search Scopes link on the QuickLaunch menu under the Queries and Results heading.
Set the Title field to Site Directory.
Provide a Description.
Click on the OK button.
From the View Scopes page, click on the Add Rules link next to the new Search Scope.
For the Scope Rule Type, select the Property Query option.
For the Property Query, select the contentclass option.
Set the property value to STS_Site.
For the Behavior section, select the Include option.
From the Scope Properties page, select the New Rule link.
For the Scope Rule Type section, select the Property Query option.
For the Property Query, select the contentclass option.
Set the property value to STS_Web.
For the Behavior section, select the Include option.

The end result will be a Search Scope that includes all Site Collection and subsite entries. There will be no user-generated content included in the search results of this scope. After finishing the configuration of the rules there will be a short delay before the scope is available for use; a scheduled job needs to compile the search scope changes.

Once compiled, the View Scopes page will list the currently configured search scopes, their status, and how many items in the index match the rules within each search scope.

Enabling the Search Scope on a Site Collection

Once a Search Scope has been defined, you can associate it with the Site Collection(s) you would like to use it from. Associating the Search Scope with the Site Collection allows the scope to be selected from the Scopes dropdown on applicable search forms. This can be done by a Site Collection administrator one Site Collection at a time, or it can be set via a PowerShell script on all Site Collections.

To associate the search scope manually:

Navigate to the Site Settings page.
Under the Site Collection Administration section, click on the Search Scopes link.
In the menu, select the Display Groups action.
Select the Search Dropdown item.
You can now select the Sites Scope for display and adjust its position within the list.
Click on the OK button when complete.

Testing the Site Directory Search Scope

Once the scope has been associated with the Site Collection's search settings, you will be able to select the Site Directory scope and perform a search. Any matching Site Collections or subsites will be displayed; for example, the ERP Upgrade project site collection comes back, as well as the Project Blog subsite.

Understanding Express Routes

Packt
10 Jul 2013
10 min read
(For more resources related to this topic, see here.)

What are Routes?

Routes are URL schemas, which describe the interfaces for making requests to your web app. Combining an HTTP request method (a.k.a. HTTP verb) and a path pattern, you define URLs in your app. Each route has an associated route handler, which does the job of performing any action in the app and sending the HTTP response.

Routes are defined using an HTTP verb and a path pattern. Any request to the server that matches a route definition is routed to the associated route handler. Route handlers are middleware functions, which can send the HTTP response or pass on the request to the next middleware in line. They may be defined in the app file or loaded via a Node module.

A quick introduction to HTTP verbs

The HTTP protocol recommends various methods of making requests to a Web server. These methods are known as HTTP verbs. You may already be familiar with the GET and the POST methods; there are more of them, about which you will learn in a short while. Express, by default, supports the following HTTP request methods, which allow us to define flexible and powerful routes in the app:

GET
POST
PUT
DELETE
HEAD
TRACE
OPTIONS
CONNECT
PATCH
M-SEARCH
NOTIFY
SUBSCRIBE
UNSUBSCRIBE

GET, POST, PUT, DELETE, HEAD, TRACE, OPTIONS, CONNECT, and PATCH are part of the Hyper Text Transfer Protocol (HTTP) specification as drafted by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). M-SEARCH, NOTIFY, SUBSCRIBE, and UNSUBSCRIBE are specified by the UPnP Forum. There are some obscure HTTP verbs such as LINK, UNLINK, and PURGE, which are currently not supported by Express and the underlying Node HTTP library.

Routes in Express are defined using methods named after the HTTP verbs, on an instance of an Express application: app.get(), app.post(), app.put(), and so on. We will learn more about defining routes in a later section. Even though a total of 13 HTTP verbs are supported by Express, you need not use all of them in your app. In fact, for a basic website, only GET and POST are likely to be used.

Revisiting the router middleware

This article would be incomplete without revisiting the router middleware. The router middleware is very special middleware. While other Express middlewares are inherited from Connect, router is implemented by Express itself. This middleware is solely responsible for empowering Express with Sinatra-like routes.

Connect-inherited middlewares are referred to in Express from the express object (express.favicon(), express.bodyParser(), and so on). The router middleware is referred to from the instance of the Express app (app.router). To ensure predictability and stability, we should explicitly add router to the middleware stack:

app.use(app.router);

The router middleware is a middleware system of its own. The route definitions form the middlewares in this stack. This means a matching route can respond with an HTTP response and end the request flow, or pass on the request to the next middleware in line. This fact will become clearer as we work with some examples in the upcoming sections.

Though we won't be directly working with the router middleware, it is responsible for running the whole routing show in the background. Without the router middleware, there can be no routes and routing in Express.

Defining routes for the app

We know what routes and route handler callback functions look like.
Defining routes for the app

We know what routes and route handler callback functions look like. Here is an example to refresh your memory:

app.get('/', function(req, res) {
  res.send('welcome');
});

Routes in Express are created using methods named after HTTP verbs. For instance, in the previous example, we created a route to handle GET requests to the root of the website. You have a corresponding method on the app object for all the HTTP verbs listed earlier.

Let's create a sample application to confirm that all the HTTP verbs are actually available as methods on the app object:

var http = require('http');
var express = require('express');

var app = express();

// Include the router middleware
app.use(app.router);

// GET request to the root URL
app.get('/', function(req, res) {
  res.send('/ GET OK');
});

// POST request to the root URL
app.post('/', function(req, res) {
  res.send('/ POST OK');
});

// PUT request to the root URL
app.put('/', function(req, res) {
  res.send('/ PUT OK');
});

// PATCH request to the root URL
app.patch('/', function(req, res) {
  res.send('/ PATCH OK');
});

// DELETE request to the root URL
app.delete('/', function(req, res) {
  res.send('/ DELETE OK');
});

// OPTIONS request to the root URL
app.options('/', function(req, res) {
  res.send('/ OPTIONS OK');
});

// M-SEARCH request to the root URL
app['m-search']('/', function(req, res) {
  res.send('/ M-SEARCH OK');
});

// NOTIFY request to the root URL
app.notify('/', function(req, res) {
  res.send('/ NOTIFY OK');
});

// SUBSCRIBE request to the root URL
app.subscribe('/', function(req, res) {
  res.send('/ SUBSCRIBE OK');
});

// UNSUBSCRIBE request to the root URL
app.unsubscribe('/', function(req, res) {
  res.send('/ UNSUBSCRIBE OK');
});

// Start the server
http.createServer(app).listen(3000, function() {
  console.log('App started');
});

We did not include the HEAD method in this example because it is best left to the underlying HTTP API, which already handles it. You can define it yourself if you want to, but doing so is not recommended: the protocol will be broken unless you implement it exactly as specified.

The browser address bar can make only GET requests, so to test the other routes we will have to use HTML forms or specialized tools. Let's use Postman, a Google Chrome plugin for making customized requests to the server.

We learned that route definition methods are based on HTTP verbs. Actually, that's not completely true; there is a method called app.all() that is not based on an HTTP verb. It is an Express-specific method for listening to requests to a route using any request method:

app.all('/', function(req, res, next) {
  res.set('X-Catch-All', 'true');
  next();
});

Place this route at the top of the route definitions in the previous example, restart the server, and load the home page. Using a browser debugger tool, you can examine the response header that is now added to all the requests made to the home page. Something similar can be achieved using a middleware, but the app.all() method makes it a lot easier when the requirement is route-specific.
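For comparison, here is a minimal sketch of the middleware alternative mentioned above; the header name is carried over from the app.all() example, and the explicit path check is an assumption made purely for illustration:

// A plain middleware that sets the same header as the app.all() route above.
// It must be added before app.use(app.router) so that it runs for every
// request before any route handler gets a chance to respond.
app.use(function(req, res, next) {
  if (req.path === '/') {
    res.set('X-Catch-All', 'true');
  }
  next();
});

The app.all() version keeps the path matching in the route definition itself, which is why it is the more convenient choice when the behavior is tied to a particular route.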
Route identifiers

So far we have been dealing exclusively with the root URL (/) of the app. Let's find out how to define routes for other parts of the app. Routes are defined only for the request path; GET query parameters are not, and cannot be, included in route definitions.

Route identifiers can be strings or regular expression objects. String-based routes are created by passing a string pattern as the first argument of the routing method. They support a limited pattern-matching capability. The following example demonstrates how to create string-based routes:

// Will match /abcd
app.get('/abcd', function(req, res) {
  res.send('abcd');
});

// Will match /acd
app.get('/ab?cd', function(req, res) {
  res.send('ab?cd');
});

// Will match /abbcd
app.get('/ab+cd', function(req, res) {
  res.send('ab+cd');
});

// Will match /abxyzcd
app.get('/ab*cd', function(req, res) {
  res.send('ab*cd');
});

// Will match /abe and /abcde
app.get('/ab(cd)?e', function(req, res) {
  res.send('ab(cd)?e');
});

The characters ?, +, *, and () are subsets of their regular expression counterparts. The hyphen (-) and the dot (.) are interpreted literally by string-based route identifiers.

There is another kind of string-based route identifier, which is used to specify named placeholders in the request path. Take a look at the following example:

app.get('/user/:id', function(req, res) {
  res.send('user id: ' + req.params.id);
});

app.get('/country/:country/state/:state', function(req, res) {
  res.send(req.params.country + ', ' + req.params.state);
});

The value of a named placeholder is available in the req.params object, in a property of the same name. Named placeholders can also be combined with special characters for interesting and useful effects, as shown here:

app.get('/route/:from-:to', function(req, res) {
  res.send(req.params.from + ' to ' + req.params.to);
});

app.get('/file/:name.:ext', function(req, res) {
  res.send(req.params.name + '.' + req.params.ext.toLowerCase());
});

The pattern-matching capability of routes can also be used with named placeholders. In the following example, we define a route that makes the format parameter optional:

app.get('/feed/:format?', function(req, res) {
  if (req.params.format) {
    res.send('format: ' + req.params.format);
  } else {
    res.send('default format');
  }
});

Routes can be defined as regular expressions too. While not the most straightforward approach, regular expression routes help you create very flexible and powerful route patterns. They are defined by passing a regular expression object as the first parameter to the routing method. Do not quote the regular expression object, or else you will get unexpected results.

Using regular expressions to create routes is best understood by looking at some examples. The following route will match pineapple, redapple, redaple, and aaple, but not apple or apples:

app.get(/.+app?le$/, function(req, res) {
  res.send('/.+app?le$/');
});

The following route will match anything with an a in the route name:

app.get(/a/, function(req, res) {
  res.send('/a/');
});

You will mostly be using string-based routes in a general web app. Use regular expression-based routes only when absolutely necessary; while powerful, they can often be hard to debug and maintain.
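If you do reach for a regular expression route, one detail worth knowing is that capture groups in the pattern are exposed on req.params by position. The following is a minimal sketch; the commit-range style route is purely illustrative and not part of the examples above:

// Will match /commits/71dbb9c..4c084f9
// req.params[0] will be '71dbb9c' and req.params[1] will be '4c084f9'
app.get(/^\/commits\/(\w+)\.\.(\w+)$/, function(req, res) {
  res.send('from ' + req.params[0] + ' to ' + req.params[1]);
});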
Order of route precedence

Like in any middleware system, the route that is defined first takes precedence over other matching routes, so the ordering of routes is crucial to the behavior of an app. Let's review this fact via some examples.

In the following case, http://localhost:3000/abcd will always print "abcd", even though the next route also matches the pattern:

app.get('/abcd', function(req, res) {
  res.send('abcd');
});

app.get('/abc*', function(req, res) {
  res.send('abc*');
});

Reversing the order will make it print "abc*":

app.get('/abc*', function(req, res) {
  res.send('abc*');
});

app.get('/abcd', function(req, res) {
  res.send('abcd');
});

The earlier matching route need not always gobble up the request; we can make it pass the request on to the next handler if we want to. In the following example, even though the order remains the same, it will print "abcd" this time, thanks to a little modification in the code. Route handler functions accept a third parameter, commonly named next, which refers to the next middleware in line. We will learn more about it in the next section:

app.get('/abc*', function(req, res, next) {
  // If the request path is /abcd, don't handle it
  if (req.path == '/abcd') {
    next();
  } else {
    res.send('abc*');
  }
});

app.get('/abcd', function(req, res) {
  res.send('abcd');
});

So bear in mind that the order of route definition is very important in Express; forgetting this will cause your app to behave unpredictably. We will learn more about this behavior in the examples in the next section.
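One more common instance of the same pitfall, before moving on, is a parameterized route shadowing a more specific static route. The following is a minimal sketch; the /user paths are illustrative and not taken from the examples above:

// With this ordering, GET /user/new is captured by the first route,
// and req.params.id will be the string 'new'
app.get('/user/:id', function(req, res) {
  res.send('user id: ' + req.params.id);
});

// This route is never reached unless it is defined before /user/:id
app.get('/user/new', function(req, res) {
  res.send('new user form');
});

Swapping the two definitions, so that /user/new comes first, restores the expected behavior.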