
How-To Tutorials - Programming


The Elements of the Spring Web Flow Configuration File

Packt
12 Oct 2009
9 min read
Let's take a look at the XML Schema Definition (XSD) file of the Spring Web Flow configuration file. To make the file more compact and easier to read, we have removed documentation and similar additions from it. The complete file can be downloaded from http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd. An XSD file describes an XML Schema, which in itself is a W3C recommendation for describing the structure of XML files. In this kind of definition file, you can specify which elements can or must be present in an XML file. XSD files are primarily used to check the validity of XML files that are supposed to follow a certain structure. The schema can also be used to automatically create code, using code generation tools.

The elements of the Spring configuration file are:

flow

The root element of the Spring Web Flow definition file is the flow element. It has to be present in every configuration because all other elements are sub-elements of the flow tag, which defines exactly one flow. If you want to define more than one flow, you will have to use the same number of flow tags. As every configuration file allows only one root element, you will have to define a new configuration file for every flow you want to define.

attribute

Using the attribute tag, you can define metadata for a flow. You can use this metadata to change the behavior of the flow.

secured

The secured tag is used to secure your flow using Spring Security. The element is defined like this:

```xml
<xsd:complexType name="secured">
    <xsd:attribute name="attributes" type="xsd:string" use="required" />
    <xsd:attribute name="match" use="optional">
        <xsd:simpleType>
            <xsd:restriction base="xsd:string">
                <xsd:enumeration value="any" />
                <xsd:enumeration value="all" />
            </xsd:restriction>
        </xsd:simpleType>
    </xsd:attribute>
</xsd:complexType>
```

If you want to use the secured element, you will have to define at least the attributes attribute.
You can use the attributes attribute to define, for example, roles that are allowed to access the flow. The match attribute defines whether all the elements specified in the attributes attribute have to match successfully (all), or whether one of them is sufficient (any).

persistence-context

The persistence-context element enables you to use a persistence provider in your flow definition. This lets you persist database objects in action-states. To use the persistence-context tag, you also have to define a data source and configure the persistence provider of your choice (for example, Hibernate or the Java Persistence API) in your Spring application context configuration file. The persistence-context element is empty, which means that you just have to add <persistence-context /> to your flow definition to enable its features.

var

The var element can be used to define instance variables that are accessible in your entire flow. These variables are quite important, so make sure that you are familiar with them. Using var elements, you can define model objects. Later, you can bind these model objects to the forms in your JSP pages and store them in a database using the persistence features enabled by the persistence-context element. Nevertheless, using instance variables is not mandatory, so you do not have to define any unless your flow requires them. The element is defined as shown in the following snippet from the XSD file:

```xml
<xsd:element name="var" minOccurs="0" maxOccurs="unbounded">
    <xsd:complexType>
        <xsd:attribute name="name" type="xsd:string" use="required" />
        <xsd:attribute name="class" type="type" use="required" />
    </xsd:complexType>
</xsd:element>
```

The element has two attributes, both of which are required. The instance variable you want to define needs a name, so that you can reference it later in your JSP files. Spring Web Flow also needs to know the type of the variable, so you have to set the class attribute to the class of your variable.
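Putting these elements together, the opening of a flow definition using secured, persistence-context, and var might look like this (a sketch only; the role names and the com.example.Booking class are made-up placeholders, and the namespace comes from the XSD referenced above):

```xml
<flow xmlns="http://www.springframework.org/schema/webflow"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/webflow
        http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

    <!-- Only callers holding at least one of these roles may enter the flow -->
    <secured attributes="ROLE_USER, ROLE_ADMIN" match="any" />

    <!-- Enable the persistence provider configured in the application context -->
    <persistence-context />

    <!-- Instance variable available throughout the entire flow -->
    <var name="booking" class="com.example.Booking" />

    <!-- view-states, action-states, and an end-state would follow here -->
</flow>
```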
input

You can use the input element to pass information into a flow. When you call a flow, you can (or will have to, depending on whether the input element is required or not) pass objects into the flow. You can then use these objects inside your flow. The XML Schema definition of this element looks like this:

```xml
<xsd:complexType name="input">
    <xsd:attribute name="name" type="expression" use="required" />
    <xsd:attribute name="value" type="expression" />
    <xsd:attribute name="type" type="type" />
    <xsd:attribute name="required" type="xsd:boolean" />
</xsd:complexType>
```

Of the input element's attributes, only name is required. You have to specify a name for the input argument, which you can use to reference the variable in your flow. The value attribute specifies the value of the attribute, for example, if you want to define a default value for the variable. You can also define a type (for example, int or long) for the variable if you expect a specific type of information. A type conversion will be attempted if the argument passed to the flow does not match the type you expect. With the required attribute, you can control whether the caller of your flow has to pass in the variable, or whether the input attribute is optional.

output

While you can define input parameters with the input element, you can specify return values with the output element. These are variables that will be passed to your end-state as the result value of your flow. Here's the XML Schema definition of the output element:

```xml
<xsd:complexType name="output">
    <xsd:attribute name="name" type="expression" use="required" />
    <xsd:attribute name="value" type="expression" />
    <xsd:attribute name="type" type="type" />
    <xsd:attribute name="required" type="xsd:boolean" />
</xsd:complexType>
```

The definition is quite similar to the input element. You can also see that the name of the output element is required.
Otherwise, you would have no means of referencing the variable from your end-state. The value attribute holds the value of your variable, for example, the result of a computation or a user object returned from a database. Of course, you can also specify the type you expect your output variable to be. As with the input element, a type conversion will be attempted if the type of the variable does not match the type specified here. The required attribute controls whether an error is thrown when no value was specified or the result of a computation is null.

actionTypes

Per se, this is not an element that you can use in your flow definition, but it is a very important part of the XML Schema definition, referenced by many other elements. The definition is quite complex and looks like this:

```xml
<xsd:group name="actionTypes">
    <xsd:choice>
        <xsd:element name="evaluate">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
                </xsd:sequence>
                <xsd:attribute name="expression" type="expression" use="required" />
                <xsd:attribute name="result" type="expression" use="optional" />
                <xsd:attribute name="result-type" type="type" use="optional" />
            </xsd:complexType>
        </xsd:element>
        <xsd:element name="render">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
                </xsd:sequence>
                <xsd:attribute name="fragments" type="xsd:string" use="required" />
            </xsd:complexType>
        </xsd:element>
        <xsd:element name="set">
            <xsd:complexType>
                <xsd:sequence>
                    <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
                </xsd:sequence>
                <xsd:attribute name="name" type="expression" use="required" />
                <xsd:attribute name="value" type="expression" use="required" />
                <xsd:attribute name="type" type="type" />
            </xsd:complexType>
        </xsd:element>
    </xsd:choice>
</xsd:group>
```

actionTypes is a group of sub-elements, namely evaluate, render, and set.
Let's go through the individual elements to understand how the whole definition works.

evaluate

With the evaluate element, you can execute code based on the expression language. As the above source code shows, the element has three attributes, one of which is required: the expression attribute, which holds the code that you want to run. The result value of the expression, if present, can be stored in the variable named by the result attribute. Using the result-type attribute, you can convert the result value to the specified type, if needed. Additionally, you can define attributes using the attribute sub-element.

render

Use the render element to partially render the content of a web page. Using the required fragments attribute, you can define which fragments should be rendered. When a link on the page is clicked, the entire page is not reloaded; Spring JavaScript allows you to update chosen parts of your web page. The rest of the page is left alone, which can greatly enhance the performance of your web application. As is the case with the evaluate element, you can also specify additional attributes using the attribute sub-element.

set

The set element can be used to set attributes in one of the Spring Web Flow scopes. With the name attribute, you define where (in which scope) the attribute should be stored and what it should be called. The following short example illustrates how the set element works:

```xml
<set name="flowScope.myVariable" value="myValue" type="long" />
```

As you can see, the name consists of the name of the scope and the name of your variable, delimited by a dot (.). Both the name attribute and the value attribute are required. The value is the actual value of your variable. The type attribute is optional and describes the type of your variable. As before, you can also define additional attributes using the attribute sub-element.
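The elements discussed above can be combined in a flow definition roughly as follows (a hedged sketch: the state ids, the hotelId and bookingId variables, and the bookingService bean are invented for illustration and are not part of the original text):

```xml
<flow xmlns="http://www.springframework.org/schema/webflow">

    <!-- input: the caller must supply a hotelId; conversion to long is attempted -->
    <input name="hotelId" type="long" required="true" />

    <action-state id="loadHotel">
        <!-- evaluate: run an expression and store its result in a scope -->
        <evaluate expression="bookingService.findHotelById(hotelId)"
                  result="flowScope.hotel" />
        <transition to="review" />
    </action-state>

    <view-state id="review">
        <transition on="refresh">
            <!-- render: re-render only the named fragment of the page -->
            <render fragments="hotelDetails" />
        </transition>
        <transition on="confirm" to="finish">
            <!-- set: store a value in a scope under the given name -->
            <set name="flowScope.confirmed" value="true" type="boolean" />
        </transition>
    </view-state>

    <end-state id="finish">
        <!-- output: passed back to the caller as the flow's result -->
        <output name="bookingId" value="flowScope.hotel.id" />
    </end-state>
</flow>
```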


Starting Up Tomcat 6: Part 1

Packt
31 Dec 2009
9 min read
Using scripts

The Tomcat startup scripts are found within your project under the bin folder. Each script is available either as a Windows batch file (.bat) or as a Unix shell script (.sh). The behavior of either variant is very similar, and so I'll focus on their general structure and responsibilities, rather than on any operating system differences. However, it is to be noted that the Unix scripts are often more readable than the Windows batch files. In the following text, the specific extension has been omitted, and either .bat or .sh should be substituted as appropriate. Furthermore, while the Windows file separator '\' has been used, it can be substituted with a '/' as appropriate.

The overall structure of the scripts is as shown; you will most often invoke the startup script. Note that the shutdown script has a similar call structure. However, given its simplicity, it lends itself to fairly easy investigation, and so I leave it as an exercise for the reader.

Both startup and shutdown are simple convenience wrappers for invoking the catalina script. For example, invoking startup.bat with no command line arguments calls catalina.bat with an argument of start. On the other hand, running shutdown.bat calls catalina.bat with a command line argument of stop. Any additional command line arguments that you pass to either of these scripts are passed right along to catalina.bat.

The startup script has the following three main goals:

- If the CATALINA_HOME environment variable has not been set, it is set to the Tomcat installation's root folder. The Unix variant defers this action to the catalina script.
- It looks for the catalina script within the CATALINA_HOME\bin folder. The Unix variant looks for it in the same folder as the startup script. If the catalina script cannot be located, we cannot proceed, and the script aborts.
- It invokes the catalina script with the command line argument start, followed by any other arguments supplied to the startup script.
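The delegation logic of the Unix wrapper can be sketched as follows. This is a simplified illustration, not the actual startup.sh; the fake catalina.sh is created here only so the sketch can run standalone anywhere:

```shell
# Fake a minimal CATALINA_HOME layout so this sketch is self-contained.
CATALINA_HOME=$(mktemp -d)
mkdir -p "$CATALINA_HOME/bin"
printf '#!/bin/sh\necho "catalina $*"\n' > "$CATALINA_HOME/bin/catalina.sh"
chmod +x "$CATALINA_HOME/bin/catalina.sh"

# The wrapper's job: locate the catalina script under CATALINA_HOME/bin,
# abort if it is missing, and otherwise delegate to it with "start"
# prefixed to the caller's arguments.
EXECUTABLE="$CATALINA_HOME/bin/catalina.sh"
if [ ! -x "$EXECUTABLE" ]; then
  echo "Cannot find $EXECUTABLE" >&2
  exit 1
fi
"$EXECUTABLE" start "$@"
```

The real script differs in detail (it resolves symlinks and derives CATALINA_HOME from its own location), but the shape is the same: validate, then hand off to catalina.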
The catalina script is the actual workhorse in this process. Its tasks can be broadly grouped into two categories: first, it must ensure that all the environment variables it needs have been set up correctly, and second, it must execute the main class file with the appropriate options.

Setting up the environment

In this step, the catalina script sets the CATALINA_HOME, CATALINA_BASE, and CATALINA_TMPDIR environment variables, sets variables to point to various Java executables, and updates the CLASSPATH variable to limit the repositories scanned by the System class loader.

It ensures that the CATALINA_HOME environment variable is set appropriately. This is necessary because catalina can be called independently of the startup script. Next, it calls the setenv script to give you a chance to set any installation-specific environment variables that affect the processing of this script. This includes variables that set the path of your JDK or JRE installation, any Java runtime options that need to be used, and so on. If CATALINA_BASE is set, then the CATALINA_BASE\bin\setenv script is called; otherwise, the version under CATALINA_HOME is used.

If CATALINA_HOME\bin\setclasspath does not exist, processing aborts. Otherwise, the BASEDIR environment variable is set to CATALINA_HOME and the setclasspath script is invoked. This script performs the following activities:

- It verifies that either a JDK or a JRE is available. If neither the JAVA_HOME nor the JRE_HOME environment variable is set, it aborts processing after warning the user.
- If we are running Tomcat in debug mode, that is, if '-debug' has been specified as a command line argument, it verifies that a JDK (and not just a JRE) is available.
- If the JAVA_ENDORSED_DIRS environment variable is not set, it is defaulted to BASEDIR\endorsed. This variable is fed to the JVM as the value of the -Djava.endorsed.dirs system property.
- The CLASSPATH is then truncated to point at just JAVA_HOME\lib\tools.jar.
This is a key aspect of the startup process, as it ensures that any CLASSPATH set in your environment is now overridden. Note that tools.jar contains the classes needed to compile and run Java programs, and to support tools such as Javadoc and native2ascii. For instance, the class com.sun.tools.javac.main.Main that is found in tools.jar represents the javac compiler. A Java program could dynamically create a Java class file and then compile it using an instance of this compiler class.

Finally, variables are set to point to various Java executables, such as java, javaw (identical to java, but without an associated console window), jdb (the Java debugger), and javac (the Java compiler). These are referred to using the _RUNJAVA, _RUNJAVAW, _RUNJDB, and _RUNJAVAC environment variables respectively.

The CLASSPATH is updated to also include CATALINA_HOME\bin\bootstrap.jar, which contains the classes that are needed by Tomcat during the startup process. In particular, this includes the org.apache.catalina.startup.Bootstrap class. Note that including bootstrap.jar on the CLASSPATH also automatically includes commons-daemon.jar, tomcat-juli.jar, and tomcat-coyote.jar, because the manifest file of bootstrap.jar lists these dependencies in its Class-Path attribute.

If the JSSE_HOME environment variable is set, additional Java Secure Sockets Extension JARs are also appended to the CLASSPATH. Secure Sockets Layer (SSL) is a technology that allows clients and servers to communicate over a secured connection where all data transmissions are encrypted by the sender. SSL also allows clients and servers to determine whether the other party is indeed who they say they are, using certificates. The JSSE API allows Java programs to create and use SSL connections. Though this API began life as a standalone extension, the JSSE classes have been integrated into the JDK since Java 1.4.

If the CATALINA_BASE variable is not set, it is defaulted to CATALINA_HOME.
Similarly, if the Tomcat work directory location, CATALINA_TMPDIR, is not specified, then it is set to CATALINA_BASE\temp. Finally, if the file CATALINA_BASE\conf\logging.properties exists, then additional logging-related system properties are appended to the JAVA_OPTS environment variable.

All the CLASSPATH machinations described above have effectively limited the repository locations monitored by the System class loader. This is the class loader responsible for finding classes located on the CLASSPATH. At this point, our execution environment has largely been validated and configured. The script notifies the user of the current execution configuration by writing out the paths for CATALINA_BASE, CATALINA_HOME, and CATALINA_TMPDIR to the console. If we are starting up Tomcat in debug mode, then the JAVA_HOME variable is also written; otherwise, JRE_HOME is emitted instead. These are the lines that we've grown accustomed to seeing when starting up Tomcat:

```
C:\tomcat\TOMCAT_6_0_20\output\build\bin>startup
Using CATALINA_BASE:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_HOME:   C:\tomcat\TOMCAT_6_0_20\output\build
Using CATALINA_TMPDIR: C:\tomcat\TOMCAT_6_0_20\output\build\temp
Using JRE_HOME:        C:\java\jdk1.6.0_14
```

With all this housekeeping done, the script is now ready to actually start the Tomcat instance.

Executing the requested command

This is where the actual action begins. The script can be invoked with the following commands:

- debug [-security], which is used to start Catalina in a debugger
- jpda start, which is used to start Catalina under a JPDA debugger
- run [-security], which is used to start Catalina in the current window
- start [-security], which starts Catalina in a separate window
- stop, which is used to stop Catalina
- version, which prints the version of Tomcat

The use of a security manager, as determined by the optional -security argument, is out of scope for this article.
The easiest way to understand this part of catalina.bat is to deconstruct the command line that is executed to start up the Tomcat instance. This command takes the following general form (the script emits it as a single line; it is wrapped here for readability):

```
%_EXECJAVA% %JAVA_OPTS% %CATALINA_OPTS% %JPDA_OPTS% %DEBUG_OPTS%
    -Djava.endorsed.dirs="%JAVA_ENDORSED_DIRS%" -classpath "%CLASSPATH%"
    -Djava.security.manager -Djava.security.policy=="%SECURITY_POLICY_FILE%"
    -Dcatalina.base="%CATALINA_BASE%" -Dcatalina.home="%CATALINA_HOME%"
    -Djava.io.tmpdir="%CATALINA_TMPDIR%" %MAINCLASS% %CMD_LINE_ARGS% %ACTION%
```

Where:

- _EXECJAVA is the executable that should be used to execute our main class. This defaults to the Java application launcher, _RUNJAVA. However, if debug was supplied as a command-line argument to the script, this is set to _RUNJDB instead.
- MAINCLASS is set to org.apache.catalina.startup.Bootstrap.
- ACTION defaults to start, but is set to stop if the Tomcat instance is being stopped.
- CMD_LINE_ARGS are any arguments specified on the command line that follow the arguments that are consumed by catalina.
- SECURITY_POLICY_FILE defaults to CATALINA_BASE\conf\catalina.policy.
- JAVA_OPTS and CATALINA_OPTS are used to carry arguments, such as maximum heap memory settings or system properties, which are intended for the Java launcher. The difference between the two is that CATALINA_OPTS is cleared out when catalina is invoked with the stop command. In addition, as indicated by its name, the latter is targeted primarily at options for running a Tomcat instance.
- JPDA_OPTS sets the Java Platform Debugger Architecture (JPDA) options to support remote debugging of this Tomcat instance. The default options are set in the script. They choose TCP/IP as the protocol used to connect to the debugger (transport=dt_socket), mark this JVM as a server application (server=y), set the host and port number on which the server should listen for remote debugging requests (address=8000), and let the application run immediately rather than waiting for a debugger to attach (suspend=n).
- DEBUG_OPTS sets the -sourcepath flag when the Java Debugger is used to launch the Tomcat instance.

The other variables are set as seen in the previous section. At this point, control passes to the main() method in Bootstrap.java. This is where the steps that are unique to script-based startup end. The rest of this article follows the logic coded into Bootstrap.java and Catalina.java.
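Putting the JPDA defaults described above together, the debugger-related JVM options amount to something like the following (a sketch only; the exact flag syntax varies with the Tomcat and JVM version):

```
-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
```

With options of this shape in effect, a remote debugger such as jdb or an IDE can attach to the running instance on port 8000.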


Working with JRockit Runtime Analyzer

Packt
03 Jun 2010
9 min read
The JRockit Runtime Analyzer, or JRA for short, is a JRockit-specific profiling tool that provides information about both the JRockit runtime and the application running in JRockit. JRA was the main profiling tool for JRockit R27 and earlier, but has been superseded in later versions by the JRockit Flight Recorder. Because of its extremely low overhead, JRA is suitable for use in production. This article is mainly targeted at R27.x/3.x versions of JRockit and Mission Control. The need for feedback In order to make JRockit an industry-leading JVM, there has been a great need for customer collaboration. As the focus for JRockit consistently has been on performance and scalability in server-side applications, the closest collaboration has been with customers with large server installations. An example is the financial industry. The birth of the JRockit Runtime Analyzer, or JRA, originally came from the need for gathering profiling information on how well JRockit performed at customer sites. One can easily understand that customers were rather reluctant to send us, for example, their latest proprietary trading applications to play with in our labs. And, of course, allowing us to poke around in a customer's mission critical application in production was completely out of the question. Some of these applications shuffle around billions of dollars per week. We found ourselves in a situation where we needed a tool to gather as much information as possible on how JRockit, and the application running on JRockit, behaved together; both to find opportunities to improve JRockit and to find erratic behavior in the customer application. This was a bit of a challenge, as we needed to get high quality data. If the information was not accurate, we would not know how to improve JRockit in the areas most needed by customers or perhaps at all. At the same time, we needed to keep the overhead down to a minimum. 
If the profiling itself incurred significant overhead, we would no longer get a true representation of the system. Also, with anything but near-zero overhead, the customer would not let us perform recordings on their mission critical systems in production. JRA was invented as a method of recording information in a way that the customer could feel confident with, while still providing us with the data needed to improve JRockit. The tool was eventually widely used within our support organization, both to diagnose problems and as a tuning companion for JRockit.

In the beginning, a simple XML format was used for our runtime recordings. A human-readable format made it simple to debug, and the customer could easily see what data was being recorded. Later, the format was upgraded to include data from a new recording engine for latency-related data. When the latency data came along, the data format for JRA was split into two parts: the human-readable XML and a binary file containing the latency events. The latency data was put into JRockit internal memory buffers during the recording, and to avoid introducing unnecessary latencies and performance penalties that would surely be incurred by translating the buffers to XML, it was decided that the least intrusive way was to simply dump the buffers to disk.

To summarize, recordings come in two different flavors, having either the .jra extension (recordings prior to JRockit R28/JRockit Mission Control 4.0) or the .jfr (JRockit Flight Recorder) extension (R28 or later). Prior to the R28 version of JRockit, the recording files mainly consisted of XML without a coherent data model. As of R28, the recording files are binaries where all data adheres to an event model, making it much easier to analyze the data. To open a JRA recording, JRockit Mission Control version 3.x must be used. To open a Flight Recorder recording, JRockit Mission Control version 4.0 or later must be used.
Recording

The recording engine that starts and stops recordings can be controlled in several different ways:

- By using the JRCMD command-line tool
- By using JVM command-line parameters; for more information on this, see the -XXjra parameter in the JRockit documentation
- From within the JRA GUI in JRockit Mission Control

The easiest way to control recordings is to use the JRA/JFR wizard from within the JRockit Mission Control GUI. Simply select the JVM on which to perform a JRA recording in the JVM Browser and click on the JRA button in the JVM Browser toolbar. You can also click on Start JRA Recording from the context menu. Usually, one of the pre-defined templates will do just fine, but under special circumstances it may be necessary to adjust them. The pre-defined templates in JRockit Mission Control 3.x are:

- Full Recording: This is the standard use case. By default, it is configured to do a five minute recording that contains most data of interest.
- Minimal Overhead Recording: This template can be used for very latency-sensitive applications. It will, for example, not record heap statistics, as the gathering of heap statistics will, in effect, cause an extra garbage collection at the beginning and at the end of the recording.
- Real Time Recording: This template is useful when hunting latency-related problems, for instance when tuning a system that is running on JRockit Real Time. This template provides an additional text field for setting the latency threshold. The latency threshold is explained later in the article, in the section on the latency analyzer. The threshold is by default lowered to 5 milliseconds for this type of recording, from the default 20 milliseconds, and the default recording time is longer.
- Classic Recording: This resembles a classic JRA recording from earlier versions of Mission Control. Most notably, it will not contain any latency data.
Use this template with JRockit versions prior to R27.3, or if there is no interest in recording latency data.

All recording templates can be customized by checking the Show advanced options check box. This is usually not needed, but let's go through the options and why you may want to change them:

- Enable GC sampling: This option selects whether or not GC-related information should be recorded. It can be turned off if you know that you will not be interested in GC-related information. It is on by default, and it is a good idea to keep it enabled.
- Enable method sampling: This option enables or disables method sampling. Method sampling is implemented by using sample data from the JRockit code optimizer. If profiling overhead is a concern (it is usually very low, but still), it is usually a good idea to use the Method sample interval option to control how much method sampling information to record.
- Enable native sampling: This option determines whether or not to attempt to sample time spent executing native code as a part of the method sampling. This feature is disabled by default, as it is mostly used by JRockit developers and support. Most Java developers probably do fine without it.
- Hardware method sampling: On some hardware architectures, JRockit can make use of special hardware counters in the CPU to provide higher resolution for the method sampling. This option only makes sense on such architectures.
- Stack traces: Use this option to not only get sample counts but also stack traces from method samples. If this is disabled, no call traces are available for sample points in the methods that show up in the Hot Methods list.
- Trace depth: This setting determines how many stack frames to retrieve for each stack trace. For JRockit Mission Control versions prior to 4.0, this defaulted to the rather limited depth of 16.
For applications running in application containers or using large frameworks, this is usually way too low to generate data from which any useful conclusions can be drawn. A tip, when profiling such an application, is to bump this to 30 or more.

- Method sampling interval: This setting controls how often thread samples should be taken. JRockit will stop a subset of the threads every Method sample interval milliseconds, in a round robin fashion. Only threads executing when the sample is taken will be counted, not blocking threads. Use this to find out where the computational load in an application takes place. See the section Hot Methods for more information.
- Thread dumps: When enabled, JRockit will record a thread stack dump at the beginning and the end of the recording. If the Thread dump interval setting is also specified, thread dumps will be recorded at regular intervals for the duration of the recording.
- Thread dump interval: This setting controls how often, in seconds, to record the thread stack dumps mentioned earlier.
- Latencies: If this setting is enabled, the JRA recording will contain latency data. For more information on latencies, please refer to the section Latency later in this article.
- Latency threshold: To limit the amount of data in the recording, it is possible to set a threshold for the minimum latency (duration) required for an event to actually be recorded. This is normally set to 20 milliseconds. It is usually safe to lower this to around 1 millisecond without incurring too much profiling overhead. Below that, there is a risk that the profiling overhead will become unacceptably high and/or that the file size of the recording becomes unmanageably large. Latency thresholds can be set as low as nanosecond values by changing the unit in the unit combo box.
- Enable CPU sampling: When this setting is enabled, JRockit will record the CPU load at regular intervals.
- Heap statistics: This setting causes JRockit to do a heap analysis pass at the beginning and at the end of the recording. As heap analysis involves forcing extra garbage collections at these points in order to collect information, it is disabled in the low overhead template.
- Delay before starting a recording: This option can be used to schedule the recording to start at a later time. The delay is normally defined in minutes, but the unit combo box can be used to specify the time in a more appropriate unit; everything from seconds to days is supported.

Before starting the recording, a location to which the finished recording is to be downloaded must be specified. Once the JRA recording is started, an editor will open up showing the options with which the recording was started, and a progress bar. When the recording is completed, it is downloaded and the editor input is changed to show the contents of the recording.

Analyzing JRA recordings

Analyzing JRA recordings may easily seem like black magic to the uninitiated, so just like we did with the Management Console, we will go through each tab of the JRA editor to explain the information in that particular tab, with examples of when it is useful. Just like in the console, there are several tabs in different tab groups.


Cython Won't Bite

Packt
10 Jan 2016
9 min read
In this article by Philip Herron, the author of the book Learning Cython Programming - Second Edition, we see how Cython is much more than just a programming language. Its origin can be traced to Sage, the mathematics software package, where it was used to increase the performance of mathematical computations, such as those involving matrices. More generally, I tend to consider Cython an alternative to SWIG for generating really good Python bindings to native code.

Language bindings have been around for years, and SWIG was one of the first and best tools to generate bindings for a multitude of languages. Cython generates bindings for Python code only, and this single-purpose approach means it generates the best Python bindings you can get outside of writing them all manually; attempt the latter only if you're a Python core developer.

For me, taking control of legacy software by generating language bindings is a great way to reuse any software package. Consider a legacy application written in C/C++; adding advanced modern features like a web server for a dashboard or a message bus is not a trivial thing to do. More importantly, Python comes with thousands of packages that have been developed, tested, and used by people for a long time, and that can do exactly that. Wouldn't it be great to take advantage of all of this code? With Cython, we can do exactly this, and I will demonstrate approaches with plenty of example code along the way.

This article will be dedicated to the core concepts of using Cython, including compilation, and will provide a solid reference and introduction for all Cython core concepts. In this article, we will cover:

- Installing Cython
- Getting started - Hello World
- Using distutils with Cython
- Calling C functions from Python
- Type conversion

(For more resources related to this topic, see here.)

Installing Cython

Since Cython is a programming language, we must install its respective compiler, which just so happens to be aptly named Cython.
There are many different ways to install Cython. The preferred one is to use pip:

$ pip install Cython

This should work on both Linux and Mac. Alternatively, you can use your Linux distribution's package manager to install Cython:

$ yum install cython      # will work on Fedora and CentOS
$ apt-get install cython  # will work on Debian-based systems

On Windows, although there are a plethora of options available, following this wiki is the safest way to stay up to date: http://wiki.cython.org/InstallingOnWindows

Emacs mode

There is an Emacs mode available for Cython. Although the syntax is nearly the same as Python's, there are differences that conflict with simply using Python mode. You can grab cython-mode.el from the Cython source code (inside the Tools directory). The preferred way of installing packages in Emacs is to use a package repository such as MELPA. To add the package repository to Emacs, open your ~/.emacs configuration file and add the following code:

(when (>= emacs-major-version 24)
  (require 'package)
  (add-to-list 'package-archives
               '("melpa" . "http://melpa.org/packages/") t)
  (package-initialize))

Once you add this and reload your configuration, you can install cython-mode by simply running:

M-x package-install RET cython-mode

Once this is installed, you can activate the mode by adding this to your Emacs config file:

(require 'cython-mode)

You can always activate the mode manually at any time with:

M-x cython-mode RET

Getting the code examples

Throughout this book, I intend to show real examples that are easy to digest, to help you get a feel for the different things you can achieve with Cython. To access and download the code used, please clone the following repository:

$ git clone git://github.com/redbrain/cython-book.git

Getting started – Hello World

As you will see when running the Hello World program, Cython generates native Python modules.
Therefore, running any Cython code means referencing it via a module import in Python. Let's build the module:

$ cd cython-book/chapter1/helloworld
$ make

You should now have created helloworld.so! This is a Cython module with the same name as the Cython source code file. While in the same directory as the shared object module, you can invoke this code by running a respective Python import:

$ python
Python 2.7.3 (default, Aug 1 2012, 05:16:07)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import helloworld
Hello World from cython!

As you can see from opening helloworld.pyx, it looks just like a normal Python Hello World application; but as previously stated, Cython generates modules. These modules need a name so that they can be correctly imported by the Python runtime. The Cython compiler simply uses the name of the source code file, which then requires us to compile to a shared object of the same name.

Overall, Cython source code files have the .pyx, .pxd, and .pxi extensions. For now, all we care about are the .pyx files; the others are for cimports and includes, respectively, within a .pyx module file. The following screenshot depicts the compilation flow required to produce a callable native Python module:

I wrote a basic makefile so that you can simply run make to compile these examples. Here's the code to do this manually:

$ cython helloworld.pyx
$ gcc/clang -g -O2 -fpic `python-config --cflags` -c helloworld.c -o helloworld.o
$ gcc/clang -shared -o helloworld.so helloworld.o `python-config --libs`

Using distutils with Cython

You can also compile this using Python distutils and cythonize. Open setup.py:

from distutils.core import setup
from Cython.Build import cythonize

setup(
    ext_modules = cythonize("helloworld.pyx")
)

Using the cythonize function as part of the ext_modules section will build any specified Cython source into an installable Python module. This will compile helloworld.pyx into the same shared library.
This gives you the standard Python practice of distributing native modules as part of distutils.

Calling C functions from Python

We should be careful to distinguish between Python and Cython when talking about them, since the syntax is so similar. Let's wrap a simple AddFunction in C and make it callable from Python. First, open a file called AddFunction.c and write a simple function into it:

#include <stdio.h>

int AddFunction(int a, int b) {
    printf("look we are within your c code!\n");
    return a + b;
}

This is the C code we will call - just a simple function to add two integers. Now, let's get Python to call it. Open a file called AddFunction.h, wherein we will declare our prototype:

#ifndef __ADDFUNCTION_H__
#define __ADDFUNCTION_H__

extern int AddFunction (int, int);

#endif //__ADDFUNCTION_H__

We need this so that Cython can see the prototype for the function we want to call. In practice, you will already have your headers in your own project, with your prototypes and declarations available. Open a file called PyAddFunction.pyx, and insert the following code into it:

cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)

Here, we declare the code we want to call. The cdef is a keyword signifying that this is C code that will be linked in. Now, we need a Python entry point:

def Add(a, b):
    return AddFunction(a, b)

This Add is a Python callable inside the PyAddFunction module. Again, I have provided a handy makefile to produce the module:

$ cd cython-book/chapter1/ownmodule
$ make
cython -2 PyAddFunction.pyx
gcc -g -O2 -fpic -c PyAddFunction.c -o PyAddFunction.o `python-config --includes`
gcc -g -O2 -fpic -c AddFunction.c -o AddFunction.o
gcc -g -O2 -shared -o PyAddFunction.so AddFunction.o PyAddFunction.o `python-config --libs`

Notice that AddFunction.c is compiled into the same PyAddFunction.so shared object.
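As an aside (not from the book), Python's standard-library ctypes module can call C functions without a compile step, by loading a shared library at runtime. The sketch below calls libc's abs to mirror the pattern above - a declared prototype plus a Python entry point. It assumes a POSIX system where the C standard library can be located; the add_abs helper is invented for illustration:

```python
import ctypes
import ctypes.util

# Load the C standard library at runtime (POSIX; Windows would use msvcrt).
# ctypes trades Cython's compile-time binding for a runtime one.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the prototype, much as the cdef extern block declares AddFunction.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

def add_abs(a, b):
    """A Python entry point wrapping a C function, like Add() above."""
    return libc.abs(a) + libc.abs(b)

print(add_abs(-1, -2))  # 3
```

Unlike the Cython version, no intermediate .c file or shared object of our own is produced; the cost is that argument types are only checked at runtime.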
Now, let's call AddFunction and check that C can add numbers correctly:

$ python
>>> from PyAddFunction import Add
>>> Add(1,2)
look we are within your c code!
3

Notice the print statement inside AddFunction.c::AddFunction, and that the final result is printed correctly. Therefore, we know the control reached the C code, and the calculation was done in C, not inside the Python runtime. This is a revelation of what is possible. Python can be slow in some circumstances; using this technique, Python code can bypass its own runtime and run in an unrestricted context outside the Python runtime, which is much faster.

Type conversion

Notice that we had to declare a prototype inside the Cython source code PyAddFunction.pyx:

cdef extern from "AddFunction.h":
    cdef int AddFunction(int, int)

This lets the compiler know that there is a function called AddFunction, that it takes two ints, and that it returns an int. Besides the host and target operating system's calling convention, this is all the information the compiler needs in order to call this function safely. Then, we created the Python entry point, which is a Python callable that takes two parameters:

def Add(a, b):
    return AddFunction(a, b)

Inside this entry point, it simply returns the result of the native AddFunction, passing the two Python objects as parameters. This is what makes Cython so powerful. Here, the Cython compiler must inspect the function call and generate code to safely try to convert these Python objects to native C integers. This becomes difficult when precision and potential overflow are taken into account - which just so happens to be a major use case, since Cython handles it all so well. Also, remember that this function returns an integer, so Cython also generates code to convert the integer return value into a valid Python object.
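The overflow hazard mentioned above can be demonstrated with the standard library alone - a hypothetical sketch, not Cython's actual generated code. ctypes performs an unchecked conversion (it documents that no overflow checking is done), while struct performs a checked one, which is the kind of guard Cython's conversion code provides:

```python
import ctypes
import struct

# A C int is typically 32 bits, while Python ints have arbitrary precision.
# An unchecked conversion (like a raw C cast) silently wraps on overflow:
wrapped = ctypes.c_int(2**31).value
print(wrapped)  # -2147483648, not 2**31

# A checked conversion refuses the value instead, raising an error -
# analogous to the OverflowError Cython raises when a Python int
# cannot be represented as a C int:
try:
    struct.pack("i", 2**31)
except struct.error as exc:
    print("overflow detected:", exc)
```

This is why declaring C types at a boundary is not free: every call crossing from Python to C pays for a conversion, and a correct binding must decide what happens when the value does not fit.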
Summary

Overall, we installed the Cython compiler, ran the Hello World example, and saw that we need to compile all code into native shared objects. We also saw how to wrap native C code so that it is callable from Python, and how type conversion of parameters and return values works between C and Python.

Resources for Article:

Further resources on this subject:
- Monte Carlo Simulation and Options [article]
- Understanding Cython [article]
- Scaling your Application Across Nodes with Spring Python's Remoting [article]

Packt
27 Oct 2009
7 min read

Data Migration Scenarios in SAP Business ONE Application - Part 2

Advanced data migration tools: xFusion Studio

For our own projects, we have adopted a tool called xFusion. Using this tool, you gain flexibility and are able to reuse migration settings for specific project environments. The tool provides connectivity to directly extract data from applications (including QuickBooks and Peachtree). In addition, it also supports building rules for data profiling, validation, and conversions. For example, our project team participated in the development of the template for the Peachtree interface. We configured the mappings from Peachtree and connected the data with the right fields in SAP. This was then saved as a migration template. Therefore, it would be easy and straightforward to migrate data from Peachtree to SAP in any future projects.

xFusion packs save migration knowledge

Based on the concept of establishing templates for migrations, xFusion provides preconfigured templates for the SAP Business ONE application. In xFusion, templates are called xFusion packs. Please note that these preconfigured packs may include master data packs, as well as xFusion packs for transaction data. The following xFusion packs are provided for an SAP Business ONE migration:

- Administration
- Banking
- Business partner
- Finance
- HR
- Inventory and production
- Marketing documents and receipts
- MRP
- UDFs
- Services

You can see that the packs are grouped by business object. For example, there is a group of xFusion packs for inventory and production. You can open a pack and find a group of xFusion files that contain the configuration information. If you open the inventory and production pack, a list of folders is revealed. Each folder has a set of Excel templates and xFusion files (seen in the following screenshot). An xFusion pack essentially incorporates the configuration and data manipulation procedures required to bring data from a source into SAP.
The source settings can be saved in xFusion packs so that you can reuse the knowledge with regard to data manipulation and formatting.

Data "massaging" using SQL

The key to the migration procedure is the capability to do data massaging in order to adjust formats and columns, in a step-by-step manner, based on requirements. Data manipulation is not done programmatically, but rather via a step-by-step process, where each step uses SQL statements to verify and format data. The entire process is represented visually, and thereby documents the steps required. This makes it easy to adjust settings and fine-tune them. The following applications are supported and can, therefore, be used as a source for an SAP migration (they have existing xFusion packs):

- SAP Business ONE
- Sage ACT!
- SAP
- SAP BW
- Peachtree
- QuickBooks
- Microsoft Dynamics CRM

The following is a list of supported databases:

- Oracle
- ODBC
- MySQL
- OLE DB
- SQL Server
- PostgreSQL

Working with xFusion

The workflow in xFusion starts when you open an existing xFusion pack or create a new one. In this example, an xFusion pack for business partner migration was opened. You can see the graphical representation of the migration process in the main window (in the following screenshot). Each icon in the graphical representation represents a data manipulation and formatting step. If you click on an icon, the complete path from the data source to the icon is highlighted, so you can select the previous steps to adjust the data. The core concept is that you do not directly change the input data, but define rules to convert data from the source format to the target format. If you open an xFusion pack for the SAP Business ONE application, the target is obviously SAP Business ONE. Therefore, you need to enter the privileges and database name so that the pack knows how to access the SAP system. In addition, the source parameters need to be provided. xFusion packs come with example Excel files.
You need to select the Excel files as the relevant source. However, it is important to note that you don't need to use the Excel files: you can use any database or other source, as long as you adjust the data format, using the step-by-step process, to match the format provided in Excel. In xFusion, you can use the sample files that come in Excel format. The connection parameters are presented once you double-click on any of the connections listed in the Connections section. It is recommended to click on Test Connection to verify the proper parameters. If all of the connections are right, you can run a migration from the source to the target by right-clicking on an icon and selecting Run Export. The progress of the export is visually documented, so you can verify its success. There is also a log file in the directory where the currently utilized xFusion pack resides, as shown in the following screenshot:

Tips and recommendations for your own project

Now you know all of the main migration tools and methods. If you want to select the right tool and method for your specific situation, you will see that even though there may be many templates and preconfigured packs out there, your own project potentially comes with some individual aspects. When organizing the data migration project, use the project task skeleton I provided. It is important to subdivide the required migration steps into a group of easy-to-understand steps, where data can be verified at each level. If it gets complicated, it is probably not the right way to move forward, and you need to rethink the methods and tools you are using.

Common issues

The most common issue I have found in similar projects is that the data to be migrated is not entirely clean and consistent. Therefore, be sure to use a data verification procedure at each step. Don't just import data, only to find out later that the database is overloaded with data that is not right.
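A step-by-step verification of this kind can be sketched in miniature. The following hypothetical example uses Python's built-in sqlite3 as a staging area: one SQL step normalizes formats, and a second step flags duplicates and missing required fields before anything would be loaded into SAP. Table and column names here are invented for illustration, not xFusion's or SAP's actual schema:

```python
import sqlite3

# Stage raw source rows in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_bp (card_code TEXT, card_name TEXT, phone TEXT)")
conn.executemany(
    "INSERT INTO staging_bp VALUES (?, ?, ?)",
    [("c0001", "  Lemonade Ltd ", "555-1234"),
     ("C0002", "Fresh Juice Inc", None),       # missing phone: must be flagged
     ("c0001", "Lemonade Ltd", "555-1234")],   # duplicate key: must be flagged
)

# Step 1: normalize formats (uppercase keys, trim names).
conn.execute(
    "UPDATE staging_bp SET card_code = UPPER(card_code), card_name = TRIM(card_name)"
)

# Step 2: verify before loading - duplicates and required fields.
dupes = conn.execute(
    "SELECT card_code FROM staging_bp GROUP BY card_code HAVING COUNT(*) > 1"
).fetchall()
missing = conn.execute(
    "SELECT card_code FROM staging_bp WHERE phone IS NULL"
).fetchall()
print("duplicates:", dupes, "missing phone:", missing)
```

The point is the ordering: each step leaves the staging data in a verifiable state, so bad rows are caught before they ever reach the target system.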
Recommendation

Separate the master data and the transaction data. If you don't want to lose valuable transaction data, you can establish a reporting database that saves all of the historic transactions. For example, sales history can easily be migrated to an SQL database. You can then provide access to this information from the required SAP forms using queries or Crystal Reports.

Case study

During the course of evaluating the data import features available in the SAP Business ONE application, we have already learned how to import business partner information and item data. This can easily be done using the standard SAP data import features based on Excel or text files. Using this method allows the lead, customer, and vendor data to be imported. Let's say that the Lemonade Stand enterprise has salespeople who travel to trade fairs and collect contact information. We can import the address information using the proven BP import method. But after this data is imported, what would the next step be? It would be a good idea to create and manage opportunities based on the address material. Basically, you already know how to use Excel to bring over address information. Let's enhance this concept to bring over opportunity information. We will use xFusion to import opportunity data into the SAP Business ONE application. The basis will be the xFusion pack for opportunities.

Importing sales opportunities for the Lemonade Stand

The xFusion pack is open, and you can see that it is a nice and clean example without major complexity. That's how it should be, as you can see here:

Packt
18 Dec 2013
5 min read

Working with AMQP

(For more resources related to this topic, see here.)

Broadcasting messages

In this example, we see how to send the same message to a possibly large number of consumers. This is a typical messaging application: broadcasting to a huge number of clients - for example, when updating the scoreboard in a massively multiplayer game, or when publishing news in a social network application. In this article we discuss both the producer and consumer implementations. Since it is very typical to have consumers using different technologies and programming languages, we use Java, Python, and Ruby to show interoperability with AMQP. We are going to appreciate the benefits of having separate exchanges and queues in AMQP.

Getting ready

To use this recipe you will need to set up Java, Python, and Ruby environments as described.

How to do it…

To cook this recipe we are preparing four different programs:

- The Java publisher
- The Java consumer
- The Python consumer
- The Ruby consumer

To prepare a Java publisher:

1. Declare a fanout exchange:

   channel.exchangeDeclare(myExchange, "fanout");

2. Send one message to the exchange:

   channel.basicPublish(myExchange, "", null, jsonmessage.getBytes());

Then, to prepare a Java consumer:

3. Declare the same fanout exchange declared by the producer:

   channel.exchangeDeclare(myExchange, "fanout");

4. Autocreate a new temporary queue:

   String queueName = channel.queueDeclare().getQueue();

5. Bind the queue to the exchange:

   channel.queueBind(queueName, myExchange, "");

6. Define a custom, non-blocking consumer.

7. Consume messages by invoking channel.basicConsume().

The source code of the Python consumer is very similar to the Java consumer, so there is no need to repeat those steps. In the Ruby consumer, you need to use require "bunny" and then use the URI connection. We are now ready to mix it all together and see the recipe in action:

- Start one instance of the Java producer; messages start getting published immediately.
- Start one or more instances of the Java/Python/Ruby consumer; the consumers receive only the messages sent while they are running.
- Stop one of the consumers while the producer keeps publishing, then restart it; you will see that the consumer has lost the messages sent while it was down.

How it works…

Both the producer and the consumers are connected to RabbitMQ with a single connection, but the logical path of the messages is depicted in the following figure:

In step 1, we declared the exchange that we are using. The logic is the same as in the queue declaration: if the specified exchange doesn't exist, create it; otherwise, do nothing. The second argument of exchangeDeclare() is a string specifying the type of the exchange, fanout in this case. In step 2, the producer sends one message to the exchange. You can view it, along with the other defined exchanges, by issuing the following command on the RabbitMQ command shell:

rabbitmqctl list_exchanges

The second argument in the call to channel.basicPublish() is the routing key, which is always ignored when used with a fanout exchange. The third argument, set to null, is the optional message properties. The fourth argument is just the message itself. When we started one consumer, it created its own temporary queue (step 4). Using the channel.queueDeclare() empty overload, we are creating a non-durable, exclusive, autodelete queue with an autogenerated name. Launching a couple of consumers and issuing rabbitmqctl list_queues, we can see two queues, one per consumer, with their odd names, along with the persistent myFirstQueue, as shown in the following screenshot:

In step 5, we bound the queues to myExchange. It is possible to monitor these bindings too, by issuing the following command:

rabbitmqctl list_bindings

Monitoring is a very important aspect of AMQP; messages are routed by exchanges to the bound queues and buffered in the queues. Exchanges do not buffer messages; they are just logical elements. The fanout exchange routes messages by simply placing a copy of them in each bound queue.
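The copy-per-queue behavior just described can be sketched as a toy, pure-Python model. This is illustrative only - it is not the RabbitMQ or AMQP client API, and all names are invented - but it reproduces both the fanout routing and the "missed messages while down" behavior:

```python
from collections import deque

class FanoutExchange:
    """Toy model of fanout routing: the exchange buffers nothing; it just
    copies each published message into every currently bound queue."""

    def __init__(self):
        self.queues = {}

    def queue_declare(self, name):
        # Like the autogenerated temporary queue, created per consumer.
        self.queues.setdefault(name, deque())

    def queue_delete(self, name):
        # Autodelete: when a consumer disconnects, its queue disappears.
        self.queues.pop(name, None)

    def publish(self, message):
        for q in self.queues.values():
            q.append(message)  # one copy per bound queue

broker = FanoutExchange()
broker.queue_declare("consumer-a")
broker.publish("news #1")
broker.queue_declare("consumer-b")  # this consumer started late
broker.publish("news #2")
print(list(broker.queues["consumer-a"]))  # both messages
print(list(broker.queues["consumer-b"]))  # only "news #2": earlier ones are lost
```

Consumer B, bound after the first publish, never sees "news #1" - exactly what happens when a real consumer is restarted and gets a fresh temporary queue.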
So, with no bound queues, the messages are simply received by no consumer at all. As soon as we close one consumer, we implicitly destroy its private temporary queue (that's why the queues are autodelete; otherwise, these queues would be left behind unused, and the number of queues on the broker would increase indefinitely), and messages are no longer buffered to it. When we restart the consumer, it creates a new, independent queue, and as soon as we bind it to myExchange, messages sent by the publisher are buffered into this queue and pulled by the consumer itself.

There's more…

When RabbitMQ is started for the first time, it creates some predefined exchanges. Issuing rabbitmqctl list_exchanges, we can observe many existing exchanges, in addition to the one that we have defined in this recipe. All the amq.* exchanges listed here are defined by all AMQP-compliant brokers and can be used instead of defining your own exchanges; they do not need to be declared at all. We could have used amq.fanout in place of myLastnews.fanout_6, and this is a good choice for very simple applications. However, applications generally declare and use their own exchanges.

See also

With the overload used in this recipe, the exchange is non-autodelete (it won't be deleted as soon as the last client detaches) and non-durable (it won't survive server restarts). You can find more available options and overloads at http://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/.

Summary

In this article, we mainly used Java, since this language is widely used in enterprise software development, integration, and distribution. RabbitMQ is a perfect fit in this environment.

Resources for Article:

Further resources on this subject:
- RabbitMQ Acknowledgements [article]
- Getting Started with ZeroMQ [article]
- Setting up GlassFish for JMS and Working with Message Queues [article]
Packt
20 Jan 2012
12 min read

Oracle JDeveloper 11gR2: Application Modules

(For more resources on JDeveloper, see here.)

Creating and using generic extension interfaces

In this recipe, we will go over how to expose any of that common functionality as a generic extension interface. By doing so, this generic interface becomes available to all derived business components, which in turn can expose it through their own client interfaces and make it available to the ViewController layer through the bindings layer.

How to do it…

1. Open the shared components workspace in JDeveloper.
2. Create an interface called ExtApplicationModule as follows:

   public interface ExtApplicationModule {
       // return some user authority level, based on
       // the user's name
       public int getUserAuthorityLevel();
   }

3. Locate and open the custom application module framework extension class ExtApplicationModuleImpl. Modify it so that it implements the ExtApplicationModule interface.
4. Then, add the following method to it:

   public int getUserAuthorityLevel() {
       // return some user authority level, based on the user's name
       return ("anonymous".equalsIgnoreCase(this.getUserPrincipalName())) ?
           AUTHORITY_LEVEL_MINIMAL : AUTHORITY_LEVEL_NORMAL;
   }

5. Rebuild the SharedComponents workspace and deploy it as an ADF Library JAR.
6. Now, open the HRComponents workspace.
7. Locate and open the HrComponentsAppModule application module definition.
8. Go to the Java section and click on the Edit application module client interface button (the pen icon in the Client Interface section).
9. On the Edit Client Interface dialog, shuttle the getUserAuthorityLevel() interface from the Available to the Selected list.

How it works…

In steps 1 and 2, we opened the SharedComponents workspace and created an interface called ExtApplicationModule. This interface contains a single method called getUserAuthorityLevel(). Then, we updated the application module framework extension class ExtApplicationModuleImpl so that it implements the ExtApplicationModule interface (step 3).
We also implemented the method getUserAuthorityLevel() required by the interface (step 4). For the sake of this recipe, this method returns a user authority level based on the authenticated user's name. We retrieve the authenticated user's name by calling getUserPrincipal().getName() on the SecurityContext, which we retrieve from the current ADF context (ADFContext.getCurrent().getSecurityContext()). If security is not enabled for the ADF application, the user's name defaults to anonymous. In this example, we return AUTHORITY_LEVEL_MINIMAL for anonymous users, and for all others we return AUTHORITY_LEVEL_NORMAL. We rebuilt and redeployed the SharedComponents workspace in step 5. In steps 6 through 9, we opened the HRComponents workspace and added the getUserAuthorityLevel() method to the HrComponentsAppModuleImpl client interface. By doing this, we exposed the getUserAuthorityLevel() generic extension interface to a derived application module, while keeping its implementation in the base framework extension class ExtApplicationModuleImpl.

There's more…

Note that the steps followed in this recipe to expose an application module framework extension class method to a derived class' client interface can be followed for other business components framework extension classes as well.

Exposing a custom method as a web service

Service-enabling an application module allows you, among other things, to expose custom application module methods as web services. This is one way for service consumers to consume the service-enabled application module. The other possibilities are accessing the application module from another application module, and accessing it through a Service Component Architecture (SCA) composite. Service-enabling an application module allows access to the same application module both through web service clients and through interactive web user interfaces.
In this recipe, we will go over the steps involved in service-enabling an application module by exposing a custom application module method in its service interface.

Getting ready

The HRComponents workspace requires a database connection to the HR schema.

How to do it…

1. Open the HRComponents project in JDeveloper.
2. Double-click on the HRComponentsAppModule application module in the Application Navigator to open its definition.
3. Go to the Service Interface section and click on the Enable support for Service Interface button (the green plus sign icon in the Service Interface section). This will start the Create Service Interface wizard.
4. In the Service Interface page, accept the defaults and click Next.
5. In the Service Custom Methods page, locate the adjustCommission() method and shuttle it from the Available list to the Selected list. Click on Finish.
6. Observe that the adjustCommission() method is shown in the Service Interface Custom Methods section of the application module's Service Interface. The service interface files were generated in the serviceinterface package under the application module and are shown in the Application Navigator.
7. Double-click on the weblogic-ejb-jar.xml file under the META-INF package in the Application Navigator to open it.
8. In the Beans section, select the com.packt.jdeveloper.cookbook.hr.components.model.application.common.HrComponentsAppModuleService Bean bean and click on the Performance tab. For the Transaction timeout field, enter 120.

How it works…

In steps 1 through 6, we exposed the adjustCommission() custom application module method in the application module's service interface. This is a custom method that adjusts all the Sales department employees' commissions by the percentage specified.
As a result of exposing the adjustCommission() method in the application module service interface, JDeveloper generates the following files:

- HrComponentsAppModuleService.java: Defines the service interface
- HrComponentsAppModuleServiceImpl.java: The service implementation class
- HrComponentsAppModuleService.xsd: The service schema file, describing the input and output parameters of the service
- HrComponentsAppModuleService.wsdl: The Web Services Description Language (WSDL) file, describing the web service
- ejb-jar.xml: The EJB deployment descriptor, located in the src/META-INF directory
- weblogic-ejb-jar.xml: The WebLogic-specific EJB deployment descriptor, located in the src/META-INF directory

In steps 7 and 8, we adjusted the service Java Transaction API (JTA) transaction timeout to 120 seconds (the default is 30 seconds). This will avoid any exceptions related to transaction timeouts when invoking the service. This is an optional step, added specifically for this recipe, as the process of adjusting the commission for all sales employees might take longer than the default 30 seconds, causing the transaction to time out.

To test the service using the JDeveloper integrated WebLogic application server, right-click on the HrComponentsAppModuleServiceImpl.java service implementation file in the Application Navigator and select Run or Debug from the context menu. This will build and deploy the HrComponentsAppModuleService web service to the integrated WebLogic server. Once the deployment process completes successfully, you can click on the service URL in the Log window to test the service. This will open a test window in JDeveloper and also enable the HTTP Analyzer. Otherwise, copy the target service URL from the Log window and paste it into your browser's address field. This will bring up the service's endpoint page.
On this page, select the adjustCommission method from the Operation drop-down, specify the commissionPctAdjustment parameter amount, and click on the Invoke button to execute the web service. Observe how the employees' commissions are adjusted in the EMPLOYEES table in the HR schema.

There's more…

For more information on service-enabling application modules, consult the chapter Integrating Service-Enabled Application Modules in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.

Accessing a service interface method from another application module

In the recipe Exposing a custom method as a web service in this article, we went through the steps required to service-enable an application module and expose a custom application module method as a web service. We will continue in this recipe by explaining how to invoke the custom application module method, exposed as a web service, from another application module.

Getting ready

This recipe will call the adjustCommission() custom application module method that was exposed as a web service in the Exposing a custom method as a web service recipe in this article. It requires that the web service is deployed to WebLogic and that it is accessible. The recipe also requires that both the SharedComponents workspace and the HRComponents workspace are deployed as ADF Library JARs, and that both are added to the workspace used by this specific recipe. Additionally, a database connection to the HR schema is required.

How to do it…

1. Ensure that you have built and deployed both the SharedComponents and HRComponents workspaces as ADF Library JARs.
2. Create a File System connection in the Resource Palette to the directory path where the SharedComponents.jar and HRComponents.jar ADF Library JARs are located.
3. Create a new Fusion Web Application (ADF) called HRComponentsCaller using the Create Fusion Web Application (ADF) wizard.
4. Create a new application module called HRComponentsCallerAppModule using the Create Application Module wizard. In the Java page, check the Generate Application Module Class checkbox to generate a custom application module implementation class. JDeveloper will ask you for a database connection during this step, so make sure that a new database connection to the HR schema is created.
5. Expand the File System | ReUsableJARs connection in the Resource Palette and add both the SharedComponents and HRComponents libraries to the project. You do this by right-clicking on the JAR file and selecting Add to Project… from the context menu.
6. Bring up the business components Project Properties dialog and go to the Libraries and Classpath section. Click on the Add Library… button and add the BC4J Service Client and JAX-WS Client extensions.
7. Double-click on the HRComponentsCallerAppModuleImpl.java custom application module implementation file in the Application Navigator to open it in the Java editor. Add the following method to it:

   public void adjustCommission(BigDecimal commissionPctAdjustment) {
       // get the service proxy
       HrComponentsAppModuleService service =
           (HrComponentsAppModuleService) ServiceFactory
               .getServiceProxy(HrComponentsAppModuleService.NAME);
       // call the adjustCommission() service
       service.adjustCommission(commissionPctAdjustment);
   }

8. Expose adjustCommission() to the HRComponentsCallerAppModule client interface.
10. Finally, in order to be able to test the HRComponentsCallerAppModule application module with the ADF Model Tester, locate the connections.xml file in the Application Resources section of the Application Navigator under the Descriptors | ADF META-INF node, and add the following configuration to it:

    <Reference name="{/com/packt/jdeveloper/cookbook/hr/components/model/application/common/}HrComponentsAppModuleService"
               className="oracle.jbo.client.svc.Service">
      <Factory className="oracle.jbo.client.svc.ServiceFactory"/>
      <RefAddresses>
        <StringRefAddr addrType="serviceInterfaceName">
          <Contents>com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceEndpointProvider">
          <Contents>ADFBC</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiName">
          <Contents>HrComponentsAppModuleServiceBean#com.packt.jdeveloper.cookbook.hr.components.model.application.common.serviceinterface.HrComponentsAppModuleService</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceSchemaName">
          <Contents>HrComponentsAppModuleService.xsd</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="serviceSchemaLocation">
          <Contents>com/packt/jdeveloper/cookbook/hr/components/model/application/common/serviceinterface/</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiFactoryInitial">
          <Contents>weblogic.jndi.WLInitialContextFactory</Contents>
        </StringRefAddr>
        <StringRefAddr addrType="jndiProviderURL">
          <Contents>t3://localhost:7101</Contents>
        </StringRefAddr>
      </RefAddresses>
    </Reference>

How it works…

In steps 1 and 2, we made sure that both the SharedComponents and HRComponents ADF Library JARs are deployed, and we created a file system connection so that both of these libraries could be added to a newly created project (in step 5). Then, in steps 3 and 4, we created a new Fusion web application based on ADF, and an application module called HRComponentsCallerAppModule.
It is from this application module that we intend to call the adjustCommission() custom application module method, exposed as a web service by the HrComponentsAppModule service-enabled application module in the HRComponents library JAR. For this reason, in step 4, we generated a custom application module implementation class. We proceeded by adding the necessary libraries to the new project in steps 5 and 6. Specifically, the following libraries were added: SharedComponents.jar, HRComponents.jar, BC4J Service Client, and JAX-WS Client. In steps 7 through 9, we created a custom application module method called adjustCommission(), in which we wrote the necessary glue code to call our web service. In it, we first retrieve the web service proxy, as a HrComponentsAppModuleService interface, by calling ServiceFactory.getServiceProxy() and specifying the name of the web service, which is indicated by the constant HrComponentsAppModuleService.NAME in the service interface. Then we call the web service through the retrieved interface. In the last step, we provided the necessary configuration in connections.xml so that we will be able to call the web service from an RMI client (the ADF Model Tester). This file is used by the web service client to locate the web service. For the most part, the Reference information that was added to it was generated automatically by JDeveloper in the Exposing a custom method as a web service recipe, so it was copied from there. The extra configuration information that had to be added is the JNDI context properties jndiFactoryInitial and jndiProviderURL, which are needed to resolve the web service on the deployed server. You should change these appropriately for your deployment. Note that these parameters are the same as the initial context parameters used to look up the service when running in a managed environment. To test calling the web service, ensure that you have first deployed it and that it is running.
You can then use the ADF Model Tester: select the adjustCommission method and execute it.

There's more…

For additional information related to topics such as securing the ADF web service, enabling support for binary attachments, deploying to WebLogic, and more, refer to the Integrating Service-Enabled Application Modules section in the Fusion Developer's Guide for Oracle Application Development Framework, which can be found at http://docs.oracle.com/cd/E24382_01/web.1112/e16182/toc.htm.
Hello OpenCL

Packt
18 Dec 2013
The Wikipedia definition says that Parallel Computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). There are many Parallel Computing programming standards and API specifications, such as OpenMP, OpenMPI, Pthreads, and so on. This book is all about OpenCL Parallel Programming. In this article, we will start with a discussion of the different types of parallel programming. We will then introduce OpenCL and its components, and take a look at the various hardware and software vendors of OpenCL and their OpenCL installation steps. Finally, at the end of the article, we will see an OpenCL example program, SAXPY, and its implementation in detail.

Advances in computer architecture

Throughout the 20th century, computer architectures advanced by multiple folds. The trend is continuing in the 21st century and will remain so for a long time to come. Many of these architectural trends follow Moore's Law: "Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years". Many devices in the computer industry are linked to Moore's law, whether they are DSPs, memory devices, or digital cameras. All the hardware advances would be of no use if there weren't corresponding software advances. Algorithms and software applications grow in complexity as more and more user interaction comes into play. An algorithm can be highly sequential, or it may be parallelized by using a parallel computing framework. Amdahl's Law predicts the speedup that can be obtained for an algorithm when it is executed on n threads. This speedup depends on B, the fraction of the code that is strictly serial (non-parallelizable).
The time T(n) an algorithm takes to finish when being executed on n thread(s) of execution corresponds to:

T(n) = T(1) * (B + (1 - B)/n)

Therefore the theoretical speedup which can be obtained for a given algorithm is given by:

Speedup(n) = 1 / (B + (1 - B)/n)

Amdahl's Law has a limitation: it does not fully exploit the computing power that becomes available as the number of processing cores increases. Gustafson's Law takes into account the scaling of the platform by adding more processing elements. This law assumes that the total amount of work that can be done in parallel varies linearly with the number of processing elements. Let an algorithm's execution time be decomposed into (a + b), where a is the serial execution time and b is the parallel execution time. Then the corresponding speedup for P parallel elements is given by:

Speedup = (a + P*b) / (a + b)

Now defining α as a/(a + b), the sequential execution fraction, gives the speedup for P processing elements:

Speedup(P) = P – α*(P – 1)

Given a problem which can be solved using OpenCL, the same problem can also be solved on different hardware with different capabilities. Gustafson's law suggests that with a greater number of computing units, the data set should also increase, that is, "fixed work per processor". Amdahl's law, in contrast, gives the speedup which can be obtained for the existing data set if more computing units are added, that is, "fixed work for all processors". Let's take the following example: let the serial component and the parallel component of execution be one unit each. In Amdahl's Law the strictly serial fraction of code is B (equals 0.5). For two processors, the speedup is given by:

Speedup(2) = 1 / (0.5 + (1 – 0.5)/2) = 1.33

Similarly for four and eight processors, the speedup is given by:

Speedup(4) = 1.6 and Speedup(8) ≈ 1.78

Adding more processors does not help much: even as n tends to infinity, the maximum speedup obtained is only 1/B = 2.
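Both laws are easy to check numerically. The following is a plain-Python sketch (an illustration, not code from the book); the printed values match the worked examples in this section:

```python
def amdahl_speedup(b, n):
    """Amdahl's Law: b is the strictly serial fraction, n the thread count."""
    return 1.0 / (b + (1.0 - b) / n)

def gustafson_speedup(alpha, p):
    """Gustafson's Law: alpha is the sequential fraction a/(a+b), p the processor count."""
    return p - alpha * (p - 1)

# Serial and parallel components of one unit each, so B = alpha = 0.5
print(round(amdahl_speedup(0.5, 2), 2))   # 1.33
print(round(amdahl_speedup(0.5, 4), 2))   # 1.6
print(round(amdahl_speedup(0.5, 8), 2))   # 1.78
print(gustafson_speedup(0.5, 2))          # 1.5
print(gustafson_speedup(0.5, 4))          # 2.5
print(gustafson_speedup(0.5, 8))          # 4.5
```

Note how Amdahl's speedup saturates near 1/B = 2 while Gustafson's grows with the processor count, which is the point of the comparison figure below.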
On the other hand, in Gustafson's law, α = 1/(1 + 1) = 0.5 (which is the serial fraction of the code). The speedup for two processors is given by:

Speedup(2) = 2 – 0.5*(2 – 1) = 1.5

Similarly for four and eight processors, the speedup is given by:

Speedup(4) = 2.5 and Speedup(8) = 4.5

The following figure shows the workload scaling of Gustafson's law, compared to Amdahl's law with a constant workload:

Comparison of Amdahl's and Gustafson's Law

OpenCL is all about parallel programming, and Gustafson's law fits this book very well, as we will be dealing with OpenCL for data-parallel applications. Workloads which are data parallel in nature can easily increase the data set and take advantage of scalable platforms by adding more compute units. For example, more pixels can be computed as more compute units are added.

Different parallel programming techniques

There are several different forms of parallel computing, such as bit-level, instruction-level, data, and task parallelism. This book will largely focus on data and task parallelism using heterogeneous devices. We just coined a term, heterogeneous devices. How do we tackle complex tasks "in parallel" using different types of computer architecture? And why do we need OpenCL when there are many already defined open standards for Parallel Computing? To answer these questions, let us discuss the pros and cons of the different parallel computing frameworks.

OpenMP

OpenMP is an API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran. It is prevalent only on multi-core computer platforms with a shared memory subsystem.
A basic implementation of the OpenMP parallel directive is as follows:

    #pragma omp parallel
    {
        body;
    }

When you build the preceding code using the OpenMP shared library, libgomp expands it to something similar to the following code:

    void subfunction (void *data)
    {
        use data;
        body;
    }

    setup data;
    GOMP_parallel_start (subfunction, &data, num_threads);
    subfunction (&data);
    GOMP_parallel_end ();

    void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)

The OpenMP directives make it easy for the developer to modify existing code to exploit a multicore architecture. OpenMP, though a great parallel programming tool, does not support parallel execution on heterogeneous devices, and the use of a multicore architecture with a shared memory subsystem makes it less cost effective.

MPI

Message Passing Interface (MPI) has an advantage over OpenMP in that it can run on either shared or distributed memory architectures. Distributed memory computers are less expensive than large shared memory computers. But it has its own drawbacks, with inherent programming and debugging challenges. One major disadvantage of the MPI parallel framework is that the performance is limited by the communication network between the nodes. Supercomputers have a massive number of processors which are interconnected using a high-speed network connection, or are arranged in computer clusters, where the processors are in close proximity to each other. In clusters, there is an expensive and dedicated data bus for data transfers across the computers. MPI is extensively used in most of these compute monsters called supercomputers.

OpenACC

The OpenACC Application Program Interface (API) describes a collection of compiler directives to specify loops and regions of code in standard C, C++, and Fortran to be offloaded from a host CPU to an attached accelerator, providing portability across operating systems, host CPUs, and accelerators.
OpenACC is similar to OpenMP in terms of program annotation, but unlike OpenMP programs, which can only be accelerated on CPUs, OpenACC programs can be accelerated on a GPU or on other accelerators as well. OpenACC aims to overcome the drawbacks of OpenMP by making parallel programming possible across heterogeneous devices. The OpenACC standard describes directives and APIs to accelerate applications. The ease of programming and the ability to scale existing code to use heterogeneous processors warrant a great future for OpenACC programming.

CUDA

Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by NVIDIA for graphics processing and GPGPU (General-Purpose GPU) programming. There is a fairly good developer community following for the CUDA software framework. Unlike OpenCL, which is supported on GPUs by many vendors and even on many other devices such as IBM's Cell B.E. processor or TI's DSP processors, CUDA is supported only on NVIDIA GPUs. Due to this lack of generalization, and its focus on a very specific hardware platform from a single vendor, OpenCL is gaining traction.

CUDA or OpenCL?

CUDA is more proprietary and vendor specific, but it has its own advantages. It is easier to learn and start writing code in CUDA than in OpenCL, due to its simplicity. Optimization in CUDA is more deterministic across a platform, since a smaller number of platforms, from a single vendor only, are supported. It has simplified a few programming constructs and mechanisms. So for a quick start, and if you are sure that you can stick to one device (GPU) from the single vendor NVIDIA, CUDA can be a good choice. OpenCL, on the other hand, is supported on hardware from several vendors, and that hardware varies extensively even in its basic architecture, which creates the requirement of understanding a few more complicated concepts before starting OpenCL programming.
Also, due to the support of a huge range of hardware, although an OpenCL program is portable, it may lose optimization when ported from one platform to another. The kernel development, where most of the effort goes, is practically identical between the two languages. So one should not worry too much about which one to choose: choose the language which is convenient. But remember that your OpenCL application will be vendor agnostic. This book aims at attracting more developers to OpenCL. There are many libraries which use OpenCL programming for acceleration. Some of them are MAGMA, clAMDBLAS, clAMDFFT, the BOLT C++ Template library, and JACKET, which accelerates MATLAB on GPUs. Besides these, there are also C++ and Java bindings available for OpenCL. Once you've figured out how to write your important "kernels", it's trivial to port them to either OpenCL or CUDA. A kernel is a computation code which is executed by an array of threads. CUDA also has a vast set of CUDA-accelerated libraries, that is, CUBLAS, CUFFT, CUSPARSE, Thrust, and so on. But it may not take a long time to port these libraries to OpenCL.

Renderscripts

Renderscripts is also an API specification, targeted at 3D rendering and general-purpose compute operations on the Android platform. Android apps can accelerate their performance by using these APIs. It is also a cross-platform solution. When an app is run, the scripts are compiled into the machine code of the device. This device can be a CPU, a GPU, or a DSP. The choice of which device to run it on is made at runtime. If a platform does not have a GPU, the code may fall back to the CPU. Only Android supports this API specification as of now. The execution model in Renderscripts is similar to that of OpenCL.

Hybrid parallel computing model

Parallel programming models have their own advantages and disadvantages. With the advent of many different types of computer architectures, there is a need to use multiple programming models to achieve high performance.
For example, one may want to use MPI as the message passing framework, and then at each node level use OpenCL, CUDA, OpenMP, or OpenACC. Besides all the above programming models, many compilers such as Intel ICC, GCC, and Open64 provide auto-parallelization options, which make the programmer's job easy and exploit the underlying hardware architecture without the need to know any parallel computing framework. Compilers are known to be good at providing instruction-level parallelism, but tackling data-level or task-level auto-parallelism has its own limitations and complexities.

Introduction to OpenCL

The OpenCL standard was first introduced by Apple, and later on became part of the open standards organization "Khronos Group". It is a non-profit industry consortium, creating open standards for the authoring and acceleration of parallel computing, graphics, dynamic media, computer vision, and sensor processing on a wide variety of platforms and devices. The goal of OpenCL is to make certain types of parallel programming easier, and to provide vendor-agnostic hardware-accelerated parallel execution of code. OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems, and handheld devices using a diverse mix of multi-core CPUs, GPUs, and DSPs. OpenCL gives developers a common set of easy-to-use tools to take advantage of any device with an OpenCL driver (processors, graphics cards, and so on) for the processing of parallel code. By creating an efficient, close-to-the-metal programming interface, OpenCL will form the foundation layer of a parallel computing ecosystem of platform-independent tools, middleware, and applications. We mentioned vendor agnostic; yes, that is what OpenCL is about.
The different vendors here can be AMD, Intel, NVIDIA, ARM, TI, and so on. The following diagram shows the different vendors and hardware architectures which use the OpenCL specification to leverage their hardware capabilities:

The heterogeneous system

The OpenCL framework defines a language to write "kernels". These kernels are functions which are capable of running on different compute devices. OpenCL defines an extended C language for writing compute kernels, and a set of APIs for creating and managing these kernels. The compute kernels are compiled with a runtime compiler, which compiles them on-the-fly during host application execution for the targeted device. This enables the host application to take advantage of all the compute devices in the system with a single set of portable compute kernels. Based on your interest and hardware availability, you might want to do OpenCL programming with a "host and device" combination of "CPU and CPU" or "CPU and GPU". Both have their own programming strategies. On CPUs you can run very large kernels, as the CPU architecture supports out-of-order instruction-level parallelism and has large caches. On GPUs you will be better off writing small kernels for better performance.

Hardware and software vendors

There are various hardware vendors who support OpenCL. Every OpenCL vendor provides OpenCL runtime libraries. These runtimes are capable of running only on their specific hardware architectures. Not only across different vendors, but even within a vendor there may be different types of architectures which might need a different approach towards OpenCL programming. Now let's discuss the various hardware vendors who provide an implementation of OpenCL to exploit their underlying hardware.

Advanced Micro Devices, Inc. (AMD)

With the launch of the AMD A Series APU, one of the industry's first Accelerated Processing Units (APUs), AMD is leading the effort of integrating both the x86_64 CPU and GPU dies in one chip.
It has four cores of CPU processing power, and also a four- or five-SIMD-engine graphics processor, depending on the silicon part which you wish to buy. The following figure shows the block diagram of the AMD APU architecture:

AMD architecture diagram—© 2011, Advanced Micro Devices, Inc.

An AMD GPU consists of a number of Compute Units (CU), and each CU has 16 ALUs. Further, each ALU is a VLIW4 SIMD processor and can execute a bundle of four or five independent instructions. Each CU can be issued a group of 64 work items, which forms the work group (wavefront). AMD Radeon™ HD 6XXX graphics processors use this design. The following figure shows the HD 6XXX series compute unit, which has 16 SIMD engines, each of which has four processing elements:

AMD Radeon HD 6xxx Series SIMD Engine—© 2011, Advanced Micro Devices, Inc.

Starting with the AMD Radeon HD 7XXX series of graphics processors, there were significant architectural changes: AMD introduced the new Graphics Core Next (GCN) architecture. The following figure shows a GCN compute unit, which has four SIMD engines, each 16 lanes wide:

GCN Compute Unit—© 2011, Advanced Micro Devices, Inc.

A group of these compute units forms an AMD HD 7XXX graphics processor. In GCN, each CU includes four separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront. Apart from the APUs, AMD also provides discrete graphics cards. The latest family of graphics cards, HD 7XXX and beyond, uses the GCN architecture.

NVIDIA®

One of NVIDIA's GPU architectures is codenamed "Kepler". The GeForce® GTX 680 is one Kepler architectural silicon part. Each Kepler GPU consists of different configurations of Graphics Processing Clusters (GPC) and streaming multiprocessors (SMX).
The GTX 680 consists of four GPCs and eight SMXs, as shown in the following figure:

NVIDIA Kepler architecture—GTX 680, © NVIDIA®

The Kepler architecture is part of the GTX 6XX and GTX 7XX families of NVIDIA discrete cards. Prior to Kepler, NVIDIA had the Fermi architecture, which was part of the GTX 5XX family of discrete and mobile graphics processing units.

Intel®

Intel's OpenCL implementation is supported by the Sandy Bridge and Ivy Bridge processor families. The Sandy Bridge family architecture is comparable to AMD's APU: these processor architectures also integrate a GPU into the same silicon as the CPU. Intel changed the design of the L3 cache and allowed the graphics cores to access the L3, which is also called the last level cache. It is because of this L3 sharing that the graphics performance is good on Intel processors. Each of the CPU cores, including the graphics execution units, is connected via a Ring Bus. Also, each execution unit is a true parallel scalar processor. Sandy Bridge provides the graphics engines HD 2000, with six Execution Units (EUs), and HD 3000 (12 EUs), and Ivy Bridge provides HD 2500 (six EUs) and HD 4000 (16 EUs). The following figure shows the Sandy Bridge architecture with a ring bus, which acts as an interconnect between the cores and the HD graphics:

Intel Sandy Bridge architecture—© Intel®

ARM Mali™ GPUs

ARM also provides GPUs by the name of Mali graphics processors. The Mali T6XX series of processors comes with two, four, or eight graphics cores. These graphics engines deliver graphics compute capability to entry-level smartphones, tablets, and Smart TVs. The following diagram shows the Mali T628 graphics processor:

ARM Mali—T628 graphics processor, © ARM

The Mali T628 has eight shader cores or graphics cores. These cores also support the Renderscripts APIs besides supporting OpenCL. Besides the four key competitors, companies such as TI (DSP), Altera (FPGA), and Oracle are providing OpenCL implementations for their respective hardware.
We suggest you get hold of the benchmark performance numbers of the different processor architectures we discussed, and try to compare them. This is an important first step towards comparing different architectures, and in the future you might want to select a particular OpenCL platform based on your application workload.
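Before moving on to the SAXPY example that this article builds toward, it helps to pin down the computation itself. SAXPY ("single-precision a times x plus y") computes y[i] = a*x[i] + y[i] for every element, which is exactly the kind of data-parallel workload discussed above: each output element is independent, so in OpenCL each work item can compute one element. A serial sketch (plain Python, for illustration only; the actual OpenCL implementation appears later in the article):

```python
def saxpy(a, x, y):
    """Serial SAXPY: returns a*x[i] + y[i] for every element.
    In an OpenCL kernel, each work item would compute one output element."""
    return [a * xi + yi for xi, yi in zip(x, y)]

print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # [12.0, 24.0, 36.0]
```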
Load Testing Using Visual Studio 2008: Part 1

Packt
01 Oct 2009
Load testing an application helps the development and management team understand the application's performance under various conditions. Load tests can use different parameter values and conditions to test the application and check its performance. Each load test can simulate the number of users, network bandwidths, combinations of different web browsers, and different configurations. In the case of web applications, it is necessary to test the application with different sets of users and different browsers, simulating multiple simultaneous requests to the server. The following figure shows a sample of a real-world situation, with multiple users accessing the web site using different networks and different browsers from multiple locations.

Load testing is not meant just for web applications. We can also run unit tests under load tests to check the performance of data access from the server. The load test helps identify application performance at various capacities: performance under light loads for a short duration, and performance with heavy loads over different durations.

Load testing uses a set of computers consisting of a controller and agents; together these are called a rig. The agents are the computers at different locations used for simulating different user requests. The controller is the central computer which controls the multiple agent computers. The Visual Studio 2008 Test Load Agent on the agent computers generates the load, and the Visual Studio 2008 Test Controller at the central computer controls these agents. This article explains the details of creating test scenarios and load testing the application.

Creating load test

Load tests are created using the Visual Studio Load Test wizard. You first create the test project and then add the new load test, which opens the wizard and guides you through creating the test. We can edit the test parameters and configuration later on, if required.
Before we go on to create the test project and understand the parameters, we will consider a couple of web applications. Web sites are typically accessed by a large number of users from different locations at the same time, and it is necessary to simulate this situation and check the application's performance. The first application is a simple web page where the orders placed by the current user are displayed. The other application is a coded web test that retrieves the orders for the current user, similar to the first one. Using these examples, we will see the different load testing features that Visual Studio provides. The following sections describe creating the load test, setting its parameters, and load testing the application.

Load test wizard

The load test wizard helps us create load tests for web tests and unit tests. It consists of different steps that collect the required parameters and configuration information for creating the load test. There are different ways of adding a load test to the test project:

Select the test project and then select the option Add | Load Test...

Select the Test menu in Visual Studio 2008 and select New Test, which opens the Add New Test... dialog. Select the load test type from the list, provide a test name, and select the test project to which the new load test should be added.

Both the above options open the New Load Test Wizard, shown as follows. The wizard contains four different sets with multiple pages, which collect the parameters and configuration information for the load test. The welcome page explains the different steps involved in creating a load test. On selecting a step like Scenario, Counter Sets, or Run Settings, the wizard displays the section that collects the parameter information for the selected option. We can click on an option directly, or keep clicking Next and set the parameters. Once we click on Finish, the load test is created.
To open the load test, expand Solution Explorer and double-click on the load test, LoadTest1. We can also open the load test from the Test View window in the Test menu, by double-clicking the name of the load test in the list to open the Test Editor. For example, the following is a sample load test:

The following sections explain setting the parameters in each step.

Specifying scenario

Scenarios are used for simulating actual user behavior. For example, there are different end users for any web application. For a public web site, the end user could be anywhere and there could be any number of users. The bandwidth of the connection and the type of browsers used by the users also differ: some users might be using a high-speed connection and some a slow dial-up connection. But if the application is an Intranet application, the end users are limited to the LAN network, and the speed at which the users connect will also be constant most of the time; the number of users and the browsers used are the two main things which differ in this case. The scenarios are created using the combinations required for the application under test.

Enter a name for the scenario in the wizard page. We can add any number of scenarios to the test. For example, we might want to load test WebTest3 with 40 tests per user per hour, and have another load test for WebTest11Coded with 20 tests per user per hour. Now, let us create a new load test and set the parameters for each scenario.

Think time

The think time is the time taken by the user to navigate to the next web page. This is useful for the load test to simulate the test accurately. We can set the load test to use the actual think time recorded by the web test, or we can give a specific think time for each test. Another option is to use a normal distribution of think times between the requests: the time varies slightly between requests, but is mostly realistic.
There is a third option, which configures the test not to use think times between the requests. The think times can also be modified for the scenario after creating the load test: select the scenario, right-click, and then open Properties to set the think time. Once the properties are set for the scenario, click Next in the New Load Test Wizard to set the parameters for the load pattern.

Load pattern

The load pattern is used for controlling the user load during the test. The pattern varies based on the type of test. If it is a simple Intranet web application test or a unit test, then we might want to have a minimum number of users for a constant period of time. But in the case of a public web site, the number of users differs from time to time. In this case, we might want to increase the number of users from a very small number to a large number over a time interval. For example, the test might have a user load of 10 initially, but during testing we may want the user load to be increased by 10 every 10 seconds until the maximum user count reaches 100. So at the 90th second the user count reaches 100, the increments stop, and the test stays at a load of 100 users until completion.

Constant load

The load starts with the specified user count and ends with the same number.

User Count: This specifies the number of users to simulate.

Step load

The load test starts with the specified minimum number of users, and the count increases constantly by the specified step until the user count reaches the specified maximum.

Start user count: This specifies the number of users to start with.
Step duration: This specifies the time between increases in the user count.
Step user count: This specifies the number of users to add to the current user count.
Maximum user count: This specifies the maximum number of users.

We have now set the parameters for the load pattern. The next step in the wizard is to set the parameter values for the test mix model and test mix.
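The step-load arithmetic described above can be sketched as a small function (a plain-Python illustration, not Visual Studio code; the parameter names merely mirror the wizard's fields):

```python
def step_load_user_count(t, start_count, step_duration, step_count, max_count):
    """Number of simulated users at time t (seconds) under a step load pattern."""
    steps_completed = t // step_duration
    return min(start_count + step_count * steps_completed, max_count)

# Start at 10 users, add 10 users every 10 seconds, cap at 100:
print(step_load_user_count(0, 10, 10, 10, 100))    # 10
print(step_load_user_count(45, 10, 10, 10, 100))   # 50
print(step_load_user_count(90, 10, 10, 10, 100))   # 100
print(step_load_user_count(300, 10, 10, 10, 100))  # 100
```

As in the example in the text, the count reaches 100 at the 90th second and stays there for the rest of the test.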
Test mix model and test mix

The test mix model has to simulate the distribution of end users accurately. Before selecting the test mix, the wizard provides a configuration page to choose the test mix model from three options: based on the total number of tests, based on the number of virtual users, and based on user pace. The next page in the wizard provides the option to select the tests and give each a distribution percentage, or to specify the number of tests per user per hour for each test, depending on the selected model. The mix of tests is based on the percentages or the tests-per-user figures specified for each test.

Based on the total number of tests

The next test to run is selected so that the number of times each test runs matches the test distribution. For example, if the distribution is like the one shown in the image on the previous page, then the next test selected for the run is based on the percentage distributions.

Based on the number of virtual users

This model determines which test to run based on the percentage of virtual users. Selecting the next test to run depends on the percentage of virtual users and on the percentage assigned to each test. At any point, the number of users running a particular test matches the assigned distribution.

Based on user pace

This option runs each test the specified number of times per user per hour. This model is helpful when we want the virtual users to conduct the tests at a regular pace. The test mix contains the different web tests, each with its own number of tests per user; the number of users is defined by the load pattern. The next step in the wizard is to specify the browser mix, explained in the next section.

Browser mix

We have set the number of users and the number of tests, but there is always a possibility that not all users use the same browser.
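The distribution-based models above boil down to weighted random selection: over many iterations, the share of runs each test gets converges on its configured percentage (the same idea drives the browser and network mixes that follow). A rough sketch, with hypothetical test names and percentages:

```python
import random

# Hypothetical test mix: test name -> configured distribution percentage.
test_mix = {"WebTest3": 60, "WebTest11Coded": 40}

def pick_next_test(mix):
    """Choose the next test to run, weighted by the configured percentages."""
    names = list(mix)
    return random.choices(names, weights=[mix[n] for n in names], k=1)[0]

random.seed(1)
runs = [pick_next_test(test_mix) for _ in range(10_000)]
share = runs.count("WebTest3") / len(runs) * 100
print(f"WebTest3 ran {share:.1f}% of the time")  # close to the configured 60%
```

The longer the test runs, the closer the observed mix tracks the configured distribution.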
To mix the different browser types, we can go to the next step in the wizard and select the browsers listed and give a distribution percentage for each browser type. The test does not actually use the specified browser, but sets the header information in the request to simulate the same request through the specified browser. Like different browsers, the network speed also differs with user location, which is the next step in the wizard. Network mix Click on Next in the wizard to specify the Network Mix, to simulate the actual network speed of the users. The speed differs based on user location and the type of network they use. It could be a LAN network, or cable or wireless, or a dial-up. This step is useful in simulating the actual user scenario. The next step in the wizard is to set the Counter Sets parameters, which is explained in the next sections. Counter sets Load testing the application not contains the application-specific performance but also the environmental factors. This is to know the performance of the other services required for running the load test or accessing the application under test. For example, the web application makes use of IIS and ASP.NET process and SQL Server. VSTS provides an option to track the performance of these services using counter sets as part of the load test. The load test collects the counter set data during the test and represents it as a graph for a better understanding. The same data is also saved locally so that we can load it again and analyze the results. The counter set is for all the scenarios in the load test. The counter set data is collected for the controllers and agents by default. We can also add the other systems that are part of load testing. These counter set results help us to know how the services are used during the test. Most of the time the application performance is affected by the common services or the system services used. 
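Conceptually, counter set collection is periodic sampling of named performance counters from each machine involved in the test. The sketch below is only a toy model of that idea; the counter names and the random reader are invented for illustration:

```python
import random

# Hypothetical counter sets selected for one system under test.
counter_sets = {
    "System1": ["ASP.NET Requests/Sec", "IIS Current Connections", ".NET % Time in GC"],
}

def sample_counters(sets, read_counter):
    """Take one sample of every selected counter on every system."""
    return {
        (system, counter): read_counter(system, counter)
        for system, counters in sets.items()
        for counter in counters
    }

# Stand-in for a real performance-counter read.
def fake_read(system, counter):
    return round(random.uniform(0, 100), 1)

random.seed(3)
snapshot = sample_counters(counter_sets, fake_read)
print(len(snapshot))  # one reading per (system, counter) pair: 3
```

A real load test repeats this sampling on an interval for the duration of the run and graphs the series, which is what the counter set results show.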
The load test creation wizard provides the option to add the performance counters. The wizard includes the current system by default, along with the common counter sets for the controllers and the agents. The following screenshot shows a sample of adding systems from which to collect counter sets during the load test:

There is a list of default counters for each system. We can select any of these for which we want data collected during the load test. For example, the above screenshot shows that we want data collected for ASP.NET, .NET Application, and IIS from System1. Using the Add Computer... option, we can keep adding the computers on which we are running the tests and choose the counter sets for each system. Once we are done selecting the counter sets, we have all the parameters for the test, but a few run settings are still required before running it; these are configured in the next step of the wizard.
Creating a Catalyst Application in Catalyst 5.8

Packt
30 Jun 2010
7 min read
Creating the application skeleton

Catalyst comes with a script called catalyst.pl to make this task as simple as possible. catalyst.pl takes a single argument, the application's name, and creates an application with that specified name. The name can be any valid Perl module name such as MyApp or MyCompany::HR::Timesheets. Let's get started by creating MyApp, which is the example application for this article:

$ catalyst.pl MyApp
created "MyApp"
created "MyApp/script"
created "MyApp/lib"
created "MyApp/root"
created "MyApp/root/static"
created "MyApp/root/static/images"
created "MyApp/t"
created "MyApp/lib/MyApp"
created "MyApp/lib/MyApp/Model"
created "MyApp/lib/MyApp/View"
created "MyApp/lib/MyApp/Controller"
created "MyApp/myapp.conf"
created "MyApp/lib/MyApp.pm"
created "MyApp/lib/MyApp/Controller/Root.pm"
created "MyApp/README"
created "MyApp/Changes"
created "MyApp/t/01app.t"
created "MyApp/t/02pod.t"
created "MyApp/t/03podcoverage.t"
created "MyApp/root/static/images/catalyst_logo.png"
created "MyApp/root/static/images/btn_120x50_built.png"
created "MyApp/root/static/images/btn_120x50_built_shadow.png"
created "MyApp/root/static/images/btn_120x50_powered.png"
created "MyApp/root/static/images/btn_120x50_powered_shadow.png"
created "MyApp/root/static/images/btn_88x31_built.png"
created "MyApp/root/static/images/btn_88x31_built_shadow.png"
created "MyApp/root/static/images/btn_88x31_powered.png"
created "MyApp/root/static/images/btn_88x31_powered_shadow.png"
created "MyApp/root/favicon.ico"
created "MyApp/Makefile.PL"
created "MyApp/script/myapp_cgi.pl"
created "MyApp/script/myapp_fastcgi.pl"
created "MyApp/script/myapp_server.pl"
created "MyApp/script/myapp_test.pl"
created "MyApp/script/myapp_create.pl"
Change to application directory, and run "perl Makefile.PL" to make sure your installation is complete.

At this point it is a good idea to check whether the installation is complete by switching to the newly-created directory (cd MyApp) and running perl Makefile.PL.
You should see something like the following:

$ perl Makefile.PL
include /Volumes/Home/Users/solar/Projects/CatalystBook/MyApp/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
Cannot determine perl version info from lib/MyApp.pm
include inc/Module/Install/Catalyst.pm
*** Module::Install::Catalyst
include inc/Module/Install/Makefile.pm
Please run "make catalyst_par" to create the PAR package!
*** Module::Install::Catalyst finished.
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- Test::More ...loaded. (0.94 >= 0.88)
- Catalyst::Runtime ...loaded. (5.80021 >= 5.80021)
- Catalyst::Plugin::ConfigLoader ...loaded. (0.23)
- Catalyst::Plugin::Static::Simple ...loaded. (0.29)
- Catalyst::Action::RenderView ...loaded. (0.14)
- Moose ...loaded. (0.99)
- namespace::autoclean ...loaded. (0.09)
- Config::General ...loaded. (2.42)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Writing Makefile for MyApp
Writing META.yml

Note that it mentions that all the required modules are available. If any modules are missing, you may have to install those modules using cpan. You can also alternatively install the missing modules by running make followed by make install. We will discuss what each of these files does, but for now let's just change to the newly-created MyApp directory (cd MyApp) and run the following command:

$ perl script/myapp_server.pl

This will start up the development web server.
You should see some debugging information appear on the console, which is shown as follows:

[debug] Debug messages enabled
[debug] Loaded plugins:
.----------------------------------------------.
| Catalyst::Plugin::ConfigLoader  0.23         |
| Catalyst::Plugin::Static::Simple  0.21       |
'----------------------------------------------'
[debug] Loaded dispatcher "Catalyst::Dispatcher"
[debug] Loaded engine "Catalyst::Engine::HTTP"
[debug] Found home "/home/jon/projects/book/chapter2/MyApp"
[debug] Loaded Config "/home/jon/projects/book/chapter2/MyApp/myapp.conf"
[debug] Loaded components:
.-------------------------+----------.
| Class                   | Type     |
+-------------------------+----------+
| MyApp::Controller::Root | instance |
'-------------------------+----------'
[debug] Loaded Private actions:
.----------+-------------------------+---------.
| Private  | Class                   | Method  |
+----------+-------------------------+---------+
| /default | MyApp::Controller::Root | default |
| /end     | MyApp::Controller::Root | end     |
| /index   | MyApp::Controller::Root | index   |
'----------+-------------------------+---------'
[debug] Loaded Path actions:
.------+----------.
| Path | Private  |
+------+----------+
| /    | /default |
| /    | /index   |
'------+----------'
[info] MyApp powered by Catalyst 5.80004
You can connect to your server at http://localhost:3000

This debugging information contains a summary of plugins, Models, Views, and Controllers that your application uses, in addition to showing a map of URLs to actions. As we haven't added anything to the application yet, this isn't particularly helpful, but it will become helpful as we add features. To see what your application looks like in a browser, simply browse to http://localhost:3000. You should see the standard Catalyst welcome page as follows:

Let's put the application aside for a moment, and see the usage of all the files that were created.
The list of files is as shown in the following screenshot:

Before we modify MyApp, let's take a look at how a Catalyst application is structured on disk. In the root directory of your application, there are some support files. If you're familiar with CPAN modules, you'll feel at home with Catalyst: a Catalyst application is structured in exactly the same way (and can be uploaded to the CPAN unmodified, if desired). This article will refer to MyApp as your application's name, so if you use something else, be sure to substitute properly.

Latest helper scripts

Catalyst 5.8 is ported to Moose, and the helper scripts for Catalyst were upgraded much later. Therefore, it is necessary to check that you have the latest helper scripts. We will discuss helper scripts later. For now, catalyst.pl is a helper script, and if you're using an updated helper script, the lib/MyApp.pm file (or lib/whateverappname.pm) will contain the following line:

use Moose;

If you don't see this line in your application package in the lib directory, you will have to update the helper scripts. You can do that by executing the following command:

cpan Catalyst::Helper

Files in the MyApp directory

The MyApp directory contains the following files:

Makefile.PL: This script generates a Makefile to build, test, and install your application. It can also contain a list of your application's CPAN dependencies and automatically install them. To run Makefile.PL and generate a Makefile, simply type perl Makefile.PL. After that, you can run make to build the application, make test to test the application (you can try this right now, as some sample tests have already been created), make install to install the application, and so on. For more details, see the Module::Install documentation. It's important that you don't delete this file: Catalyst looks for it to determine where the root of your application is.
Changes: This is simply a free-form text file where you can document changes to your application. It's not required, but it can be helpful to end users or other developers working on your application, especially if you're writing an open source application.

README: This is just a text file with information on your application. If you're not going to distribute your application, you don't need to keep it around.

myapp.conf: This is your application's main configuration file, which is loaded when you start your application. You can specify configuration directly inside your application, but this file makes it easier to tweak settings without worrying about breaking your code. myapp.conf is in Apache-style syntax, but if you rename the file to myapp.pl, you can write it in Perl (or myapp.yml for YAML format; see the Config::Any manual for a complete list). The name of this file is based on your application's name: everything is converted to lowercase, double colons are replaced with underscores, and the .conf extension is appended.

Files in the lib directory

The heart of your application lives in the lib directory. This directory contains a file called MyApp.pm. This file defines the namespace and inheritance that are necessary to make this a Catalyst application. It also contains the list of plugins to load, and application-specific configuration. This configuration can also be defined in the myapp.conf file mentioned previously; however, if the same setting appears in both files, the one defined here takes precedence. Inside the lib directory, there are three key directories, namely MyApp/Controller, MyApp/Model, and MyApp/View. Catalyst loads the Controllers, Models, and Views from these directories respectively.
Commonly Used Data Structures

Packt
16 Nov 2016
23 min read
In this article by Erik Azar and Mario Eguiluz Alebicto, authors of the book Swift Data Structure and Algorithms, we will learn that the Swift language is truly powerful, but a powerful language is nothing if it doesn't have a powerful standard library to accompany it. The Swift standard library defines a base layer of functionality that you can use for writing your applications, including fundamental data types, collection types, functions and methods, and a large number of protocols. We're going to take a close look at the Swift standard library, specifically its support for collection types, with a very low-level examination of arrays, dictionaries, sets, and tuples. The topics covered in this article are:

Using the Swift standard library
Implementing subscripting
Understanding immutability
Interoperability between Swift and Objective-C
Swift Protocol-Oriented Programming

Using the Swift standard library

Users often treat a standard library as part of the language. In reality, philosophies on standard library design vary widely, often with opposing views. For example, the C and C++ standard libraries are relatively small, containing only the functionality that "every developer" might reasonably require to develop an application. Conversely, languages such as Python, Java, and .NET have large standard libraries that include features which tend to be separate in other languages, such as XML, JSON, localization, and e-mail processing. In the Swift programming language, the standard library is separate from the language itself, and is a collection of classes, structures, enumerations, functions, and protocols written in the core language. The Swift standard library is currently very small, even compared to C and C++. It provides a base layer of functionality through a series of generic structures and enums, which also adopt various protocols that are also defined in the library.
You'll use these as the building blocks for applications that you develop. The Swift standard library also places specific performance characteristics on functions and methods in the library. These characteristics are guaranteed only when every participating protocol's performance expectations are satisfied. An example is Array.append(_:): its algorithmic complexity is amortized O(1), unless self's storage is shared with another live array; O(count) if self does not wrap a bridged NSArray; otherwise the efficiency is unspecified. We're going to take a detailed look at the implementations of arrays, dictionaries, sets, and tuples. You'll want to make sure you have at least Xcode 8.1 installed to work with the code in this section.

Why Structures?

If you're coming from a background working in languages such as Objective-C, C++, Java, Ruby, Python, or other object-oriented languages, you traditionally use classes to define the structure of your types. This is not the case in the Swift standard library; structures are used when defining the majority of the types in the library. If you're coming from an Objective-C or C++ background this might seem especially odd and feel wrong, because classes are much more powerful than structures. So why does Swift use structures, which are value types, when it also supports classes, which are reference types that support inheritance, deinitializers, and reference counting? It's exactly because of the limited functionality of structures compared to classes that Swift uses structures instead of classes for the building blocks of the standard library. Because structures are value types, they can have only one owner and are always copied when assigned to a new variable or passed to a function. This can make your code inherently safer, because changes to a structure will not affect other parts of your application. The preceding description refers to the "copying" of value types.
The behavior you see in your code will always be as if a copy took place. However, Swift only performs an actual copy behind the scenes when it is absolutely necessary to do so. Swift manages all value copying to ensure optimal performance, and you should not avoid assignment to try to preempt this optimization. Structures in Swift are far more powerful than in other C-based languages; they are very similar to classes. Swift structures support the same basic features as C-based structures, but Swift also adds support that makes them feel more like classes. Features of Swift structures:

In addition to an automatically generated memberwise initializer, they can have custom initializers
They can have methods
They can implement protocols

So this may leave you asking: when should I use a class over a structure? Apple has published guidelines you can follow when considering whether to create a structure. If one or more of the following conditions apply, consider creating a structure:

Its primary purpose is to encapsulate a few simple data values
You expect the encapsulated values to be copied rather than referenced when you pass around or assign an instance of the structure
Any properties stored by the structure are value types, which would be expected to be copied instead of referenced
The structure doesn't need to inherit properties or behavior from another existing type

In all other cases, create a class, which will cause instances of that class to be passed by reference. Apple's guidelines for choosing between classes and structures: https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/ClassesAndStructures.html

Declaring arrays in Swift

An array stores values of the same type in an ordered list. Arrays in Swift have a few important differences from arrays in Objective-C. The first is that elements in Swift arrays must be of the same type.
If you're used to working with Objective-C NSArrays prior to Xcode 7, you may feel this is a disadvantage, but it is actually a significant advantage: it allows you to know exactly what type you get back when working with an array, allowing you to write more efficient code that leads to fewer defects. However, if you find you must have dissimilar types in an array, you can define the array to be of a protocol that is common to the other types, or define the array as type AnyObject. Another difference is that Objective-C array elements have to be a class type. In Swift, arrays are generic collections; there are no limitations on what the element type can be. You can create an array that holds any value type, such as Int, Float, String, or enums, as well as classes. The last difference is that, unlike arrays in Objective-C, Swift arrays are defined as a structure instead of a class. We know the basic constructs for declaring an integer array and performing operations such as adding, removing, and deleting elements. Let's take a closer, more detailed examination of how arrays are implemented in the standard library. There are three array types available in Swift:

Array
ContiguousArray
ArraySlice

Every array type maintains a region of memory that stores the elements contained in the array. For array element types that are not a class or @objc protocol type, the array's memory region is stored in contiguous blocks. Otherwise, when an array's element type is a class or @objc protocol type, the memory region can be a contiguous block of memory, an instance of NSArray, or an instance of an NSArray subclass. It may be more efficient to use a ContiguousArray if you're going to store elements that are of a class or @objc protocol type.
The key differentiator between Array and ContiguousArray is that ContiguousArray does not provide support for bridging to Objective-C; otherwise, ContiguousArray shares many of the protocols that Array implements, so most of the same properties are supported. The ArraySlice class represents a sub-sequence of an Array, ContiguousArray, or another ArraySlice. Like ContiguousArray, ArraySlice instances also use contiguous memory to store elements, and they do not bridge to Objective-C. An ArraySlice represents a sub-sequence of a larger, existing array type. Because of this, you need to be aware of the side effects if you try storing an ArraySlice after the original array's lifetime has ended and the elements are no longer accessible. This could lead to memory or object leaks, so Apple recommends against long-term storage of ArraySlice instances. When you create an instance of Array, ContiguousArray, or ArraySlice, an extra amount of space is reserved for storage of its elements. The amount of storage reserved is referred to as the array's capacity, which represents the potential amount of storage available without having to reallocate the array. Since Swift arrays use an exponential growth strategy, as elements are appended the array automatically resizes when it runs out of capacity. When you amortize the append operations over many iterations, each append is performed in constant time. If you know ahead of time that an array will contain a large number of elements, it may be more efficient to allocate additional reserve capacity at creation time. This will avoid repeated reallocation of the array as you add new elements.
The following code snippet shows an example of declaring an initial capacity of 500 Integer elements for the array intArray:

// Create an array using full array syntax
var intArray = Array<Int>()
// Create an array using shorthand syntax
intArray = [Int]()
intArray.capacity           // contains 0
intArray.reserveCapacity(500)
intArray.capacity           // contains 508

You can notice from the preceding example that our reserved capacity is actually larger than 500 integer elements. For performance reasons, Swift may allocate more elements than you request, but you are guaranteed that at least the number of elements specified will be created. When you make a copy of an array, a separate physical copy is not made during the assignment. Swift implements a feature called copy-on-write, which means that array elements are not copied until a mutating operation is performed while more than one array instance is sharing the same buffer. The first mutating operation may cost O(n) in time and space, where n is the length of the array.

Initializing Array

The initialization phase prepares a struct, class, or enum for use through a method called init. If you're familiar with other languages such as C++, Java, or C#, this type of initialization is performed in the class constructor, which is defined using the class's name. If you're coming from Objective-C, Swift initializers behave a little differently from what you're used to. In Objective-C, init methods directly return the object they initialize; callers then check the return value when initializing a class, checking for nil to see if the initialization process failed. In Swift, this type of behavior is implemented as a feature called failable initialization, which we'll discuss shortly. There are four initializers provided for the three types of Swift arrays implemented in the standard library.
Additionally, you can use an array literal to define a collection of one or more elements to initialize the array; the elements are separated by a comma (,):

// Create an array using full array syntax
var intArray = Array<Int>()
// Create an array using shorthand syntax
intArray = [Int]()
// Use array literal declaration
var intLiteralArray: [Int] = [1, 2, 3]   // [1, 2, 3]
//: Use shorthand literal declaration
intLiteralArray = [1, 2, 3]              // [1, 2, 3]
//: Create an array with a default value
intLiteralArray = [Int](repeating: 2, count: 5)   // [2, 2, 2, 2, 2]

Adding and Updating Elements in an Array

To add a new element to an array, you can use the append(_:) method. This will add newElement to the end of the array:

var intArray = [Int]()
intArray.append(50)   // [50]

If you have an existing collection, you can use the append(contentsOf:) method. This will append the elements of newElements to the end of the array:

intArray.append(contentsOf: [60, 65, 70, 75])   // [50, 60, 65, 70, 75]

If you want to add an element at a specific index, you can use the insert(_:at:) method. This will add newElement at index i:

intArray.insert(55, at: 1)   // [50, 55, 60, 65, 70, 75]

This requires i <= count, otherwise you will receive a fatal error: Array index out of range message and execution will stop. To replace an element at a specific index, you can use subscript notation, providing the index of the element you want to replace:

intArray[2] = 63   // [50, 55, 63, 65, 70, 75]

Retrieving and Removing Elements from an Array

There are several methods you can use to retrieve elements from an array.
If you know the specific array index or sub-range of indexes, you can use array subscripting:

// Initial intArray elements
// [50, 55, 63, 65, 70, 75]
// Retrieve an element by index
intArray[5]      // returns 75
// Retrieve an ArraySlice of elements by subRange
intArray[2..<5]  // Returns elements between index 2 and less than index 5
                 // [63, 65, 70]
// Retrieve an ArraySlice of elements by subRange
intArray[2...5]  // Returns elements between index 2 and index 5
                 // [63, 65, 70, 75]

You can also iterate over the array, examining each element in the collection:

for element in intArray {
    print(element)
}
// 50
// 55
// 63
// 65
// 70
// 75

You can also check whether an array contains a specific element, or pass a closure that will allow you to evaluate each element:

intArray.contains(55)   // returns true

See Apple's Swift documentation for a complete list of methods available for arrays: https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/CollectionTypes.html

Retrieving and Initializing Dictionaries

A dictionary is an unordered collection that stores associations between keys and values of the same type, with no defined ordering. Each value is associated with a unique key that acts as an identifier for that value in the dictionary. A dictionary data structure works just like a real-world dictionary: to retrieve a definition, reference, translation, or some other type of information for a specific word, you open the dictionary and locate the word to find the information you're looking for. With a dictionary data structure, you store and retrieve values by associating them with a key. A dictionary key type must conform to the Hashable protocol.
Initializing a Dictionary

Just like arrays, there are two ways you can declare a dictionary, using either the full or shorthand syntax:

// Full syntax declaration
var myDict = Dictionary<Int, String>()
// Shorthand syntax declaration
var myDict = [Int: String]()

Additionally, you can use a dictionary literal to define a collection of one or more key-value pairs to initialize the dictionary. The key and value are separated by a colon (:) and the pairs are separated by a comma (,). If the keys and values have consistent types, you do not need to declare the type of the dictionary in the declaration; Swift will infer it from the key-value pairs used during initialization, which saves a few keystrokes, allowing you to use the shorter form:

// Use dictionary literal declaration
var myDict: [Int: String] = [1: "One", 2: "Two", 3: "Three"]
// [2: "Two", 3: "Three", 1: "One"]
// Use shorthand literal declaration
var myDict = [1: "One", 2: "Two", 3: "Three"]
// [2: "Two", 3: "Three", 1: "One"]

Adding/Modifying/Removing a Key-Value Pair

To add a new key-value pair or to update an existing pair, you can use the updateValue(_:forKey:) method or subscript notation. If the key does not exist, a new pair will be added; otherwise the existing pair is updated with the new value:

// Add a new pair to the dictionary
myDict.updateValue("Four", forKey: 4)   // $R0: String? = nil
// [2: "Two", 3: "Three", 1: "One", 4: "Four"]
// Add a new pair using subscript notation
myDict[5] = "Five"
// [5: "Five", 2: "Two", 3: "Three", 1: "One", 4: "Four"]

Unlike the subscript method, the updateValue(_:forKey:) method will return the value that was replaced, or nil if a new key-value pair was added. To remove a key-value pair, you can use the removeValue(forKey:) method, providing the key to delete, or set the key to nil using subscript notation:

// Remove a pair from the dictionary – returns the removed pair
let removedPair = myDict.removeValue(forKey: 1)   // removedPair: String? = "One"
// [5: "Five", 2: "Two", 3: "Three", 4: "Four"]
// Remove a pair using subscript notation
myDict[2] = nil
// [5: "Five", 3: "Three", 4: "Four"]

Unlike the subscript method, the removeValue(forKey:) method will return the value that was removed, or nil if the key doesn't exist.

Retrieving Values from a Dictionary

You can retrieve specific key-value pairs from a dictionary using subscript notation. The key is passed to the square-bracket subscript notation; it's possible the key may not exist, so subscripts return an Optional. You can use either optional binding or forced unwrapping to retrieve the pair or determine that it doesn't exist. Do not use forced unwrapping unless you're absolutely sure the key exists, or a runtime exception will be thrown:

// Example using Optional Binding
var myDict = [1: "One", 2: "Two", 3: "Three"]
if let optResult = myDict[4] {
    print(optResult)
} else {
    print("Key Not Found")
}
// Example using Forced Unwrapping – only use if you know the key will exist
let result = myDict[3]!
print(result)

Instead of getting a specific value, you can also iterate over the sequence of a dictionary and return a (key, value) tuple, which can be decomposed into explicitly named constants. In this example, (key, value) is decomposed into (stateAbbr, stateName):

// Dictionary of state abbreviations to state names
let states = [
    "AL" : "Alabama",
    "CA" : "California",
    "AK" : "Alaska",
    "AZ" : "Arizona",
    "AR" : "Arkansas"]
for (stateAbbr, stateName) in states {
    print("The state abbreviation for \(stateName) is \(stateAbbr)")
}
// Output of for...in
The state abbreviation for Alabama is AL
The state abbreviation for California is CA
The state abbreviation for Alaska is AK
The state abbreviation for Arizona is AZ
The state abbreviation for Arkansas is AR

You can see from the output that items in the dictionary are not output in the order they were inserted.
Recall that dictionaries are unordered collections, so there is no guarantee of the order in which pairs will be retrieved when iterating over them. If you want to retrieve only the keys or values independently you can use the keys or values properties on a dictionary. These properties return a LazyMapCollection instance over the collection. The elements of the result are computed lazily, each time they are read, by calling the transform closure function on the base element. The keys and values appear in the same order as they would in a key-value pair, as the .0 member and .1 member, respectively: for stateAbbr in states.keys { print("State abbreviation: \(stateAbbr)") } //Output of for...in State abbreviation: AL State abbreviation: CA State abbreviation: AK State abbreviation: AZ State abbreviation: AR for stateName in states.values { print("State name: \(stateName)") } //Output of for...in State name: Alabama State name: California State name: Alaska State name: Arizona State name: Arkansas You can read more about the LazyMapCollection structure on Apple's developer site at https://developer.apple.com/library/prerelease/ios/documentation/Swift/Reference/Swift_LazyMapCollection_Structure/ There may be occasions when you want to iterate over a dictionary in an ordered manner. For those cases, you can make use of the sorted(by:) method. This will return the sorted key-value pairs of the dictionary as an array: // Sort the dictionary by the key let sortedArrayFromDictionary = states.sorted(by: { $0.0 < $1.0 }) // sortedArrayFromDictionary contains...
// [("AK", "Alaska"), ("AL", "Alabama"), ("AR", "Arkansas"), ("AZ", "Arizona"), ("CA", "California")] for key in sortedArrayFromDictionary.map({ $0.0 }) { print("The key: \(key)") } //Output of for...in The key: AK The key: AL The key: AR The key: AZ The key: CA for value in sortedArrayFromDictionary.map({ $0.1 }) { print("The value: \(value)") } //Output of for...in The value: Alaska The value: Alabama The value: Arkansas The value: Arizona The value: California Let's walk through what is happening here. We pass the sorting method a closure that compares the first argument's key from its key-value pair, $0.0, against the second argument's key, $1.0; if the first argument is less than the second, it is ordered before it in the new array. When the method has iterated over and sorted all of the elements, a new array of type [(String, String)] containing the key-value pairings is returned. Next, we want to retrieve the list of keys from the sorted array. On the sortedArrayFromDictionary variable we call map({ $0.0 }); the transform passed to the map method adds the .0 element of each array element in sortedArrayFromDictionary to the new array returned by the map method. The last step is to retrieve a list of values from the sorted array. We perform the same call to the map method as we did to retrieve the list of keys, though this time we want the .1 element of each array element in sortedArrayFromDictionary. Like the preceding example, these values are added to the new array returned by the map method. But what if you wanted to base your sorting order on the dictionary value instead of the key? This is simple to do; you just change the closure that is passed to the sorting method.
Changing it to states.sorted(by: { $0.1 < $1.1 }) will now compare the first argument's value from its key-value pair, $0.1, against the second argument's value, $1.1, ordering the lesser of the two first in the new array that is returned. Declaring Sets A Set is an unordered collection of unique, non-nil elements. A type must conform to the Hashable protocol to be stored in a Set. All of Swift's basic types are hashable by default. Enumeration case values that do not use associated values are also hashable by default. You can store a custom type in a Set; you'll just need to ensure that it conforms to the Hashable protocol, as well as the Equatable protocol, since Hashable inherits from it. Sets can be used anywhere you would use an array when ordering is not important and you want to ensure that only unique elements are stored. Additionally, access time has the potential of being more efficient than with arrays. With an array, the worst case scenario when searching for an element is O(n), where n is the size of the array, whereas accessing an element in a Set is constant time, O(1), on average, regardless of its size. Initializing a Set Unlike the other collection types, Sets cannot be inferred from an array literal alone and must be explicitly declared by specifying the Set type: // Full syntax declaration var stringSet = Set<String>() Because of Swift's type inference, you do not have to specify the element type of the Set that you're initializing; it will be inferred from the array literal it is initialized with. Remember though that the array literal must contain elements of the same type: // Initialize a Set from an array literal var stringSet: Set = ["Mary", "John", "Sally"] print(stringSet.debugDescription) // Output of debugDescription shows stringSet is indeed a Set type "Set(["Mary", "John", "Sally"])" Modifying and Retrieving Elements of a Set To add a new element to a Set use the insert(_:) method.
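The insert(_:) method also reports whether the element was actually added: in Swift 3 it returns a tuple of the form (inserted: Bool, memberAfterInsert: Element), and the result is discardable, so you can ignore it when you don't need it. A quick sketch:

```swift
var names: Set = ["Mary", "John"]

// The first insertion succeeds
let first = names.insert("Sally")
// first.inserted == true, first.memberAfterInsert == "Sally"

// Inserting the same element again is a no-op
let second = names.insert("Sally")
// second.inserted == false, names.count is still 3
```

This lets you distinguish "added" from "already present" without a separate contains(_:) check.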
You can check if an element is already stored in a Set using the contains(_:) method. Swift provides several methods for removing elements from a Set. If you have an instance of the element you want to remove, you can use the remove(_:) method, which takes an element instance. If you know the index of an element in the Set, you can use the remove(at:) method, which takes an instance of SetIndex<Element>. If the Set count is greater than 0, you can use the removeFirst() method to remove the element at the starting index. Lastly, if you want to remove all elements from a Set, use the removeAll() method or the removeAll(keepingCapacity:) method; if keepingCapacity is true, the current capacity will not decrease: var stringSet: Set = ["Erik", "Mary", "Michael", "John", "Sally"] // ["Mary", "Michael", "Sally", "John", "Erik"] stringSet.insert("Patrick") // ["Mary", "Michael", "Sally", "Patrick", "John", "Erik"] if stringSet.contains("Erik") { print("Found element") } else { print("Element not found") } // Found element stringSet.remove("Erik") // ["Mary", "Sally", "Patrick", "John", "Michael"] if let idx = stringSet.index(of: "John") { stringSet.remove(at: idx) } // ["Mary", "Sally", "Patrick", "Michael"] stringSet.removeFirst() // ["Sally", "Patrick", "Michael"] stringSet.removeAll() // [] You can iterate over a Set the same as you would the other collection types by using the for…in loop.
The Swift Set type is unordered, so if you want to iterate over its elements in a specific order you can use the sorted() method, as we did for the Dictionary type: var stringSet: Set = ["Erik", "Mary", "Michael", "John", "Sally"] // ["Mary", "Michael", "Sally", "John", "Erik"] for name in stringSet { print("name = \(name)") } // name = Mary // name = Michael // name = Sally // name = John // name = Erik for name in stringSet.sorted() { print("name = \(name)") } // name = Erik // name = John // name = Mary // name = Michael // name = Sally Set Operations The Set type is modeled after mathematical set theory and implements methods that support basic Set operations for comparing two Sets, as well as operations that perform membership and equality comparisons between two Sets. Comparison Operations The Swift Set type contains four methods for performing common operations on two Sets. Each operation can be performed either by returning a new Set, or by using the operation's alternate in-place method to perform the operation on the source Set. The union(_:) and formUnion(_:) methods create a new Set or update the source Set with all the values from both Sets, respectively. The intersection(_:) and formIntersection(_:) methods create a new Set or update the source Set with only the values common to both Sets, respectively. The symmetricDifference(_:) and formSymmetricDifference(_:) methods create a new Set or update the source Set with the values in either Set, but not both, respectively.
The subtracting(_:) or subtract(_:) methods create a new Set or update the source Set with values not in the specified Set: let adminRole: Set = [ "READ", "EDIT", "DELETE", "CREATE", "SETTINGS", "PUBLISH_ANY", "ADD_USER", "EDIT_USER", "DELETE_USER"] let editorRole: Set = ["READ", "EDIT", "DELETE", "CREATE", "PUBLISH_ANY"] let authorRole: Set = ["READ", "EDIT_OWN", "DELETE_OWN", "PUBLISH_OWN", "CREATE"] let contributorRole: Set = [ "CREATE", "EDIT_OWN"] let subscriberRole: Set = ["READ"] // Contains values from both Sets let fooResource = subscriberRole.union(contributorRole) // "READ", "EDIT_OWN", "CREATE" // Contains values common to both Sets let commonPermissions = authorRole.intersection(contributorRole) // "EDIT_OWN", "CREATE" // Contains values in either Set but not both let exclusivePermissions = authorRole.symmetricDifference(contributorRole) // "PUBLISH_OWN", "READ", "DELETE_OWN" Membership and Equality Operations Two sets are said to be equal if they contain precisely the same values, and since Sets are unordered, the ordering of the values between Sets does not matter. Use the == operator, which is the "is equal" operator, to determine if two Sets contain all of the same values: // Note ordering of the sets does not matter var sourceSet: Set = [1, 2, 3] var destSet: Set = [2, 1, 3] var isequal = sourceSet == destSet // isequal is true Use the isSubset(of:) method to determine if all of the values of a Set are contained in a specified Set. Use the isStrictSubset(of:) method to determine if a Set is a subset, but not equal to the specified Set. Use the isSuperset(of:) method to determine if a Set contains all of the values of the specified Set. Use the isStrictSuperset(of:) method to determine if a Set is a superset, but not equal to the specified Set. 
Use the isDisjoint(with:) method to determine whether two Sets have no values in common: let contactResource = authorRole // "EDIT_OWN", "PUBLISH_OWN", "READ", "DELETE_OWN", "CREATE" let userBob = subscriberRole // "READ" let userSally = authorRole // "EDIT_OWN", "PUBLISH_OWN", "READ", "DELETE_OWN", "CREATE" if userBob.isSuperset(of: fooResource) { print("Access granted") } else { print("Access denied") } // "Access denied" if userSally.isSuperset(of: fooResource) { print("Access granted") } else { print("Access denied") } // Access granted authorRole.isDisjoint(with: editorRole) // false editorRole.isSubset(of: adminRole) // true Summary In this article, we've learned about the difference between classes and structures and when you would use one type over another, as well as characteristics of value types and reference types and how each type is allocated at runtime. We went into some of the implementation details for the Array, Dictionary, and Set collection types that are implemented in the Swift standard library. Resources for Article: Further resources on this subject: Python Data Structures [Article] The pandas Data Structures [Article] Python Data Persistence using MySQL Part III: Building Python Data Structures Upon the Underlying Database Data [Article]

Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps: Create the solution and project Create the WCF service contract interface Implement the WCF service Host the WCF service in the ASP.NET Development Server Create a client application to consume this WCF service Creating the HelloWorld solution and project Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source code in the D:SOAwithWCFandLINQProjects directory. We will have a subfolder for each solution we create, and under this solution folder, we will have one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image: You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects. Now, follow these steps to create our first solution and the HelloWorld project: Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it. Go to menu File | New | Project. The New Project dialog window will appear. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template. At the bottom of the window, type HelloWorld as the Name, and D:SOAwithWCFandLINQProjects as the Location. Note that you should not enter HelloWorld within the location, because Visual Studio will automatically create a folder for a new solution. Click the OK button to close this window and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different. But you should still have an empty solution in your Solution Explorer.
If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up. In the Solution Explorer, right-click on the solution, and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project. The Add New Project window should now appear on your screen. In the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template. At the bottom of the window, type HelloWorldService as the Name. Leave D:SOAwithWCFandLINQProjectsHelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio will use the solution folder as the default base folder for all the new projects added to the solution). You may have noticed that there is already a template for WCF Service Application in Visual Studio 2008. For the very first example, we will not use this template. Instead, we will create everything by ourselves so you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology. Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, and it is called HelloWorldService.csproj. Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own one. Three directories are created automatically under the project folder—one to hold the binary files, another to hold the object files, and a third one for the properties files of the project. 
The window on your screen should now look like the following image: We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project: Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you allow your mouse to hover above this button, you will see the hint Show All Files, as shown in the above diagram. Clicking this button will show all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected. Otherwise, you can't see the Show All Files button. Change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project, select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices. Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. On the Solution Explorer window, right-click on the HelloWorldService project, and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, on the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default. We will use this later when we query a database. Creating the HelloWorldService service contract interface In the previous section, we created the solution and the project for the HelloWorld WCF Service.
From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface. In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item… from the context menu. The following Add New Item - HelloWorldService dialog window should appear on your screen. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs. Click the Add button. Now, an empty service interface file has been added to the project. Follow the steps below to customize it. Add a using statement: using System.ServiceModel; Add a ServiceContract attribute to the interface. This will designate the interface as a WCF service contract interface. [ServiceContract] Add a GetMessage method to the interface. This method will take a string as the input, and return another string as the result. It also has an attribute, OperationContract. [OperationContract] String GetMessage(String name); Change the interface to public. The final content of the file IHelloWorldService.cs should look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}

Pointers and references

Packt
03 Jun 2015
14 min read
In this article by Ivo Balbaert, author of the book, Rust Essentials, we will go through the pointers and memory safety. (For more resources related to this topic, see here.) The stack and the heap When a program starts, by default a 2 MB chunk of memory called the stack is granted to it. The program will use its stack to store all its local variables and function parameters; for example, an i32 variable takes 4 bytes of the stack. When our program calls a function, a new stack frame is allocated to it. Through this mechanism, the stack knows the order in which the functions are called so that the functions return correctly to the calling code and possibly return values as well. Dynamically sized types, such as strings or arrays, can't be stored on the stack. For these values, a program can request memory space on its heap, so this is a potentially much bigger piece of memory than the stack. When possible, stack allocation is preferred over heap allocation because accessing the stack is a lot more efficient. Lifetimes All variables in a Rust code have a lifetime. Suppose we declare an n variable with the let n = 42u32; binding. Such a value is valid from where it is declared to when it is no longer referenced, which is called the lifetime of the variable. This is illustrated in the following code snippet: fn main() { let n = 42u32; let n2 = n; // a copy of the value from n to n2 life(n); println!("{}", m); // error: unresolved name `m`. println!("{}", o); // error: unresolved name `o`. }   fn life(m: u32) -> u32 {    let o = m;    o } The lifetime of n ends when main() ends; in general, the start and end of a lifetime happen in the same scope. The words lifetime and scope are synonymous, but we generally use the word lifetime to refer to the extent of a reference. As in other languages, local variables or parameters declared in a function do not exist anymore after the function has finished executing; in Rust, we say that their lifetime has ended. 
This is the case for the m and o variables in the preceding code snippet, which are only known in the life function. Likewise, the lifetime of a variable declared in a nested block is restricted to that block, like phi in the following example: {    let phi = 1.618; } println!("The value of phi is {}", phi); // is error Trying to use phi when its lifetime is over results in an error: unresolved name 'phi'. The lifetime of a value can be indicated in the code by an annotation, for example 'a, which reads as lifetime where a is simply an indicator; it could also be written as 'b, 'n, or 'life. It's common to see single letters being used to represent lifetimes. In the preceding example, an explicit lifetime indication was not necessary since there were no references involved. All values tagged with the same lifetime have the same maximum lifetime. In the following example, we have a transform function that explicitly declares the lifetime of its s parameter to be 'a: fn transform<'a>(s: &'a str) { /* ... */ } Note the <'a> indication after the name of the function. In nearly all cases, this explicit indication is not needed because the compiler is smart enough to deduce the lifetimes, so we can simply write this: fn transform_without_lifetime(s: &str) { /* ... */ } Here is an example where even when we indicate a lifetime specifier 'a, the compiler does not allow our code. Let's suppose that we define a Magician struct as follows: struct Magician { name: &'static str, power: u32 } We will get an error message if we try to construct the following function: fn return_magician<'a>() -> &'a Magician { let mag = Magician { name: "Gandalf", power: 4625}; &mag } The error message is error: 'mag' does not live long enough. Why does this happen? The lifetime of the mag value ends when the return_magician function ends, but this function nevertheless tries to return a reference to the Magician value, which no longer exists. 
Such an invalid reference is known as a dangling pointer. This is a situation that would clearly lead to errors and cannot be allowed. The lifespan of a pointer must always be shorter than or equal to that of the value it points to, thus avoiding dangling (or null) references. In some situations, the decision to determine whether the lifetime of an object has ended is complicated, but in almost all cases, the borrow checker does this for us automatically by inserting lifetime annotations in the intermediate code; so, we don't have to do it. This is known as lifetime elision. For example, when working with structs, we can safely assume that the struct instance and its fields have the same lifetime. Only when the borrow checker is not completely sure do we need to indicate the lifetime explicitly; however, this happens only on rare occasions, mostly when references are returned. One example is when we have a struct with fields that are references. The following code snippet explains this: struct MagicNumbers { magn1: &u32, magn2: &u32 } This won't compile and will give us the following error: missing lifetime specifier [E0106]. Therefore, we have to change the code as follows: struct MagicNumbers<'a> { magn1: &'a u32, magn2: &'a u32 } This specifies that both the struct and the fields have the same lifetime, 'a.
Perform the following exercise: Explain why the following code won't compile: fn main() {    let m: &u32 = {        let n = &5u32;        &*n    };    let o = *m; } Answer the same question for this code snippet as well: let mut x = &3; { let mut y = 4; x = &y; } Copying values and the Copy trait In the code that we discussed in an earlier section, the value of n is copied to a new location each time n is assigned via a new let binding or passed as a function argument: let n = 42u32; // no move, only a copy of the value: let n2 = n; life(n); fn life(m: u32) -> u32 {    let o = m;    o } At a certain moment in the program's execution, we would have four memory locations that contain the copied value 42, which we can visualize as follows: Each value disappears (and its memory location is freed) when the lifetime of its corresponding variable ends, which happens at the end of the function or code block in which it is defined. Nothing much can go wrong with this Copy behavior, in which the value (its bits) is simply copied to another location on the stack. Many built-in types, such as u32 and i64, work similarly, and this copy-value behavior is defined in Rust as the Copy trait, which u32 and i64 implement. You can also implement the Copy trait for your own type, provided all of its fields or items implement Copy. For example, the MagicNumber struct, which contains a field of the u64 type, can have the same behavior.
There are two ways to indicate this: One way is to explicitly name the Copy implementation as follows: struct MagicNumber {    value: u64 } impl Copy for MagicNumber {} Otherwise, we can annotate it with a Copy attribute: #[derive(Copy)] struct MagicNumber {    value: u64 } This now means that we can create two different copies, mag and mag2, of a MagicNumber by assignment: let mag = MagicNumber {value: 42}; let mag2 = mag; They are copies because they have different memory addresses (the values shown will differ at each execution): println!("{:?}", &mag as *const MagicNumber); // address is 0x23fa88 println!("{:?}", &mag2 as *const MagicNumber); // address is 0x23fa80 The *const type is a so-called raw pointer. A type that does not implement the Copy trait is called non-copyable. Another way to accomplish this is by letting MagicNumber implement the Clone trait: #[derive(Clone)] struct MagicNumber {    value: u64 } Then, we can use clone() to copy mag into a different object called mag3, effectively making a copy as follows: let mag3 = mag.clone(); println!("{:?}", &mag3 as *const MagicNumber); // address is 0x23fa78 mag3 is a new object at a different address containing a copy of the value of mag. Pointers The n variable in the let n = 42i32; binding is stored on the stack. Values on the stack or the heap can be accessed by pointers. A pointer is a variable that contains the memory address of some value. To access the value it points to, dereference the pointer with *. This happens automatically in simple cases such as in println! or when a pointer is given as a parameter to a method. For example, in the following code, m is a pointer containing the address of n: let m = &n; println!("The address of n is {:p}", m); println!("The value of n is {}", *m); println!("The value of n is {}", m); This prints out the following output, which differs for each program run: The address of n is 0x23fb34 The value of n is 42 The value of n is 42 So, why do we need pointers?
When we work with dynamically allocated values, such as a String, that can change in size, the memory address of that value is not known at compile time. Due to this, the memory address needs to be calculated at runtime. So, to be able to keep track of it, we need a pointer for it whose value will change when the location of String in memory changes. The compiler automatically takes care of the memory allocation of pointers and the freeing up of memory when their lifetime ends. You don't have to do this yourself like in C/C++, where you could mess up by freeing memory at the wrong moment or at multiple times. The incorrect use of pointers in languages such as C++ leads to all kinds of problems. However, Rust enforces a strong set of rules at compile time called the borrow checker, so we are protected against them. We have already seen them in action, but from here onwards, we'll explain the logic behind their rules. Pointers can also be passed as arguments to functions, and they can be returned from functions, but the compiler severely restricts their usage. When passing a pointer value to a function, it is always better to use the reference-dereference &* mechanism, as shown in this example: let q = &42; println!("{}", square(q)); // 1764 fn square(k: &i32) -> i32 {    *k * *k } References In our previous example, m, which had the &n value, is the simplest form of pointer, and it is called a reference (or borrowed pointer); m is a reference to the stack-allocated n variable and has the &i32 type because it points to a value of the i32 type. In general, when n is a value of the T type, then the &n reference is of the &T type. Here, n is immutable, so m is also immutable; for example, if you try to change the value of n through m with *m = 7; you will get a cannot assign to immutable borrowed content '*m' error. Contrary to C, Rust does not let you change an immutable variable via its pointer. 
Since there is no danger of changing the value of n through a reference, multiple references to an immutable value are allowed; they can only be used to read the value, for example: let o = &n; println!("The address of n is {:p}", o); println!("The value of n is {}", *o); It prints out as described earlier: The address of n is 0x23fb34 The value of n is 42 We could represent this situation in the memory as follows: It is clear that working with pointers such as this or in much more complex situations necessitates much stricter rules than the Copy behavior. For example, the memory can only be freed when there are no variables or pointers associated with it anymore. And when the value is mutable, can it be changed through any of its pointers? Mutable references do exist, and they are declared as let m = &mut n. However, n also has to be a mutable value. When n is immutable, the compiler rejects the m mutable reference binding with the error, cannot borrow immutable local variable 'n' as mutable. This makes sense since immutable variables cannot be changed even when you know their memory location. To reiterate, in order to change a value through a reference, both the variable and its reference have to be mutable, as shown in the following code snippet: let mut u = 3.14f64; let v = &mut u; *v = 3.15; println!("The value of u is now {}", *v); This will print: The value of u is now 3.15. Now, the value at the memory location of u is changed to 3.15. However, note that we now cannot change (or even print) that value anymore by using the u variable: u = u * 2.0; gives us a compiler error: cannot assign to 'u' because it is borrowed. We say that borrowing a variable (by making a reference to it) freezes that variable; the original u variable is frozen (and no longer usable) until the reference goes out of scope. In addition, we can only have one mutable reference: let w = &mut u; which results in the error: cannot borrow 'u' as mutable more than once at a time.
The compiler even adds the following note to the previous code line with: let v = &mut u; note: previous borrow of 'u' occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `u` until the borrow ends. This is logical; the compiler is (rightfully) concerned that a change to the value of u through one reference might change its memory location because u might change in size, so it will not fit anymore within its previous location and would have to be relocated to another address. This would render all other references to u as invalid, and even dangerous, because through them we might inadvertently change another variable that has taken up the previous location of u! A mutable value can also be changed by passing its address as a mutable reference to a function, as shown in this example: let mut m = 7; add_three_to_magic(&mut m); println!("{}", m); // prints out 10 With the function add_three_to_magic declared as follows: fn add_three_to_magic(num: &mut i32) {    *num += 3; // value is changed in place through += } To summarize, when n is a mutable value of the T type, then only one mutable reference to it (of the &mut T type) can exist at any time. Through this reference, the value can be changed. Using ref in a match If you want to get a reference to a matched variable inside a match function, use the ref keyword, as shown in the following example: fn main() { let n = 42; match n {      ref r => println!("Got a reference to {}", r), } let mut m = 42; match m {      ref mut mr => {        println!("Got a mutable reference to {}", mr);        *mr = 43;      }, } println!("m has changed to {}!", m); } Which prints out: Got a reference to 42 Got a mutable reference to 42 m has changed to 43! The r variable inside the match has the &i32 type. In other words, the ref keyword creates a reference for use in the pattern. If you need a mutable reference, use ref mut. 
We can also use ref to get a reference to a field of a struct or tuple when destructuring via a let binding. For example, reusing the Magician struct, we can extract the name of mag by using ref and then return it from the block:

    let mag = Magician { name: "Gandalf", power: 4625 };
    let name = {
        let Magician { name: ref ref_to_name, power: _ } = mag;
        *ref_to_name
    };
    println!("The magician's name is {}", name);

This prints: The magician's name is Gandalf.

References are the most common pointer type and have the most possibilities; other pointer types should only be applied in very specific use cases.

Summary

In this article, we learned about the intelligence behind the Rust compiler, which is embodied in the principles of ownership, moving values, and borrowing.
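As a closing sketch of the moving behavior mentioned in this summary (take_ownership is a hypothetical helper name, not from the text), contrast a move with the Copy behavior:

```rust
// Takes ownership of a String; after the call, the caller's binding
// has been moved from and can no longer be used.
fn take_ownership(s: String) -> usize {
    s.len()
}

fn main() {
    // i32 is Copy: both bindings stay valid after the assignment.
    let a = 42;
    let b = a;
    println!("{} {}", a, b); // prints 42 42

    // String is not Copy: its value moves into the function.
    let s = String::from("magic");
    let len = take_ownership(s);
    println!("{}", len); // prints 5
    // println!("{}", s); // error: borrow of moved value: `s`
}
```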
Introducing Xcode Tools for iPhone Development

Packt
28 Sep 2011
9 min read
(For more resources on iPhone Development, see here.)

There is a lot of fun stuff to cover, so let's get started.

Development using the Xcode Tools

If you are running Mac OS X 10.5, chances are that your machine already has the Xcode tools installed; they are located within the /Developer/Applications folder. Apple also makes them freely available through the Apple Developer Connection at http://developer.apple.com/. The iPhone SDK includes a suite of development tools to assist you with the development of your iPhone and other iOS device applications. We describe these in the following table.

iPhone SDK Core Components

Xcode — The main Integrated Development Environment (IDE) that enables you to manage, edit, and debug your projects.
DashCode — Enables you to develop web-based iPhone and iPad applications, and Dashboard widgets.
iPhone Simulator — A Cocoa-based application that provides a software simulator to simulate an iPhone or iPad on your Mac OS X system.
Interface Builder — The visual design tool used to lay out and connect your application's user interfaces.
Instruments — The analysis tools, which help you optimize your applications and monitor for memory leaks in real time.

The Xcode tools require an Intel-based Mac running Mac OS X version 10.6.4 or later in order to function correctly.

Inside Xcode, Cocoa, and Objective-C

Xcode 4 is a complete toolset for building Mac OS X (Cocoa-based) and iOS applications. The new single-windowed development interface has been redesigned to be easier and even more helpful to use than in previous releases. It can now also identify mistakes in both syntax and logic, and will even fix your code for you. It provides you with tools that speed up your development process, making you more productive. It also takes care of the deployment of both your Mac OS X and iOS applications.
The Integrated Development Environment (IDE) allows you to do the following:

- Create and manage projects, including specifying platforms, target requirements, dependencies, and build configurations.
- Use syntax colouring and automatic indenting of code.
- Navigate and search through the components of a project, including header files and documentation.
- Build and run your project.
- Debug your project locally, run it within the iOS Simulator, or debug it remotely within a graphical source-level debugger.

Xcode incorporates many new features and improvements beyond the redesigned user interface. It features the new LLVM (Low Level Virtual Machine) compiler, the next-generation compiler technology designed for high-performance projects, with complete support for C, Objective-C, and now C++. It is incorporated into the Xcode IDE, compiles twice as quickly as GCC, and the applications it produces run faster. The following list includes the many improvements made to this release:

- The interface has been completely redesigned and features a single-window integrated development interface.
- Interface Builder has now been fully integrated within the Xcode development IDE.
- Code Assistant opens in a second window that shows the file you are working on, and can automatically find and open the corresponding header file(s).
- Fix-it checks the syntax of your code and validates symbol names as you type. It highlights any errors that it finds and will even fix them for you.
- The new Version Editor works with Git (free, open source version control software) or Subversion. It shows you the file's entire SCM (software configuration management) history and can even compare any two versions of the file.
- The new LLVM 2.0 compiler includes full support for C, Objective-C, and C++.
- The LLDB debugger has been improved to be even faster, and it uses less memory than the GDB debugging engine.
- The new Xcode 4 development IDE now lets you work on several interdependent projects within the same window. It automatically determines their dependencies so that it builds the projects in the right order.

Xcode allows you to customize an unlimited number of build and debugging tools, and executable packaging. It supports several source-code management tools, namely CVS (version control software, an important component of Source Configuration Management, or SCM) and Subversion, which allow you to add files to a repository, commit changes, get updated versions, and compare versions using the Version Editor tool.

The iPhone Simulator

The iPhone Simulator is a very useful tool that enables you to test your applications without using your actual device, whether that is your iPhone or any other iOS device. You do not need to launch this application manually; it launches when you build and run your application within the Xcode Integrated Development Environment (IDE). Xcode installs your application on the iPhone Simulator for you automatically. The iPhone Simulator can also simulate different versions of the iPhone OS, which becomes extremely useful if your application needs to be installed on different iOS platforms, or when testing and debugging errors reported in your application under different versions of iOS. While the iPhone Simulator acts as a good test bed for your applications, it is recommended that you test your application on the actual device rather than relying on the iPhone Simulator alone. The iPhone Simulator can be found at the following location: /Developer/Platforms/iPhoneSimulator.platform/Developer/Applications.
Layers of the iOS Architecture

Apple describes the set of frameworks and technologies implemented within the iOS operating system as a series of layers. Each of these layers is made up of a variety of different frameworks that can be used and incorporated into your applications.

We shall now go into detail and explain each of the different layers of the iOS architecture; this will give you a better understanding of what is covered within each of the core layers.

The Core OS Layer

This is the bottom layer of the hierarchy and provides the foundation of the operating system that the other layers sit on top of. This important layer is in charge of managing memory (allocating and releasing memory once it has been finished with), takes care of file system tasks, handles networking, and performs other operating system tasks. It also interacts directly with the hardware. The Core OS layer consists of the following components:

    OS X Kernel, Mach 3.0, BSD, Sockets, Security, Power Management,
    Keychain, Certificates, File System, Bonjour

The Core Services Layer

The Core Services layer provides an abstraction over the services provided in the Core OS layer. It provides fundamental access to iPhone OS services. The Core Services layer consists of the following components:

    Collections, Address Book, Networking, File Access, SQLite,
    Core Location, Net Services, Threading, Preferences, URL Utilities

The Media Layer

The Media layer provides multimedia services that you can use within your iPhone and other iOS devices.
The Media layer is made up of the following components:

    Core Audio, OpenGL, Audio Mixing, Audio Recording, Video Playback,
    Image Formats (JPG, PNG, and TIFF), PDF, Quartz, Core Animation, OpenGL ES

The Cocoa-Touch Layer

The Cocoa-Touch layer provides an abstraction layer that exposes the various libraries for programming the iPhone and other iOS devices. You can probably understand why Cocoa-Touch is located at the top of the hierarchy, given its support for Multi-Touch capabilities. The Cocoa-Touch layer is made up of the following components:

    Multi-Touch Events, Multi-Touch Controls, Accelerometer/Gyroscope,
    View Hierarchy, Localization/Geographical, Alerts, Web Views,
    People Picker, Image Picker, Controllers

Understanding Cocoa, the language of the Mac

Cocoa is the development framework used for most native Mac OS X applications; Mail and TextEdit are good examples of Cocoa applications. This framework consists of a collection of shared object-code libraries known as the Cocoa frameworks, together with a runtime system and a development environment. These frameworks provide you with a consistent and optimized set of prebuilt code modules that will speed up your development process. Cocoa provides you with a rich layer of functionality, as well as a comprehensive object-oriented structure and APIs on which you can build your applications. Cocoa uses the Model-View-Controller (MVC) design pattern.

What are Design Patterns?

Design patterns represent specific solutions to problems that arise when developing software within a particular context. A pattern can be either a description or a template for how to solve a problem in a variety of different situations.

What is the difference between Cocoa and Cocoa-Touch?

Cocoa-Touch is the programming framework that drives user interaction on iOS.
It consists of and uses technology derived from the Cocoa framework, redesigned to handle Multi-Touch capabilities. The power of the iPhone and its user interface are made available to developers through the Cocoa-Touch frameworks. Cocoa-Touch is built upon the Model-View-Controller structure and provides a solid, stable foundation for creating mind-blowing applications. Using the Interface Builder developer tool, developers will find it both easy and fun to use drag-and-drop when designing their next great masterpiece application on iOS.

The Model-View-Controller

The Model-View-Controller (or MVC) pattern comprises a logical way of dividing up the code that makes up the GUI (Graphical User Interface) of an application. Object-oriented platforms such as Java and .NET have adopted the MVC design pattern. The MVC model comprises three distinct categories:

- Model: This part defines your application's underlying data engine. It is responsible for maintaining the integrity of that data.
- View: This part defines the user interface for your application and has no explicit knowledge of the origin of the data displayed in that interface. It is made up of windows, controls, and other elements that the user can see and interact with.
- Controller: This part acts as a bridge between the model and the view and facilitates updates between them. It binds the model and view together, and its application logic decides how to handle the user's inputs.
Ranges

Packt
22 May 2014
11 min read
(For more resources related to this topic, see here.)

Sorting ranges efficiently

Phobos' std.algorithm includes sorting algorithms. Let's look at how they are used, what requirements they have, and the dangers of trying to implement range primitives without minding their efficiency requirements.

Getting ready

Let's make a linked-list container that exposes an appropriate view, a forward range, and an inappropriate view, a random-access range that doesn't meet its efficiency requirements. A singly-linked list can only efficiently implement forward iteration due to its nature; the only tool it has is a pointer to the next element. Implementing any other range primitives will require loops, which is not recommended. Here, however, we'll implement a fully functional range — with assignable elements, length, bidirectional iteration, random access, and even slicing — on top of a linked list to see the negative effects this has when we try to use it.

How to do it…

We're going to both sort and benchmark this program.

To sort

Let's sort ranges by executing the following steps:

1. Import std.algorithm.
2. Determine the predicate you need. The default is (a, b) => a < b, which results in ascending order when the sorting is complete (for example, [1,2,3]). If you want ascending order, you don't have to specify a predicate at all. If you need descending order, you can pass a greater-than predicate instead:

    auto sorted = sort!((a, b) => a > b)([1,2,3]); // results: [3,2,1]

When doing string comparisons, the functions std.string.cmp (case-sensitive) or std.string.icmp (case-insensitive) may be used:

    auto sorted = sort!((a, b) => cmp(a, b) < 0)(["b", "c", "a"]); // results: a, b, c

Your predicate may also be used to sort based on a struct member:

    auto sorted = sort!((a, b) => a.value < b.value)(structArray);

3. Pass the predicate as the first compile-time argument.
4. Pass the range you want to sort as the runtime argument. If your range is not already sortable (if it doesn't provide the necessary capabilities), you can convert it to an array using the array function from std.range:

    auto sorted = sort(fibonacci().take(10)); // won't compile, not enough capabilities
    auto sorted = sort(fibonacci().take(10).array); // ok, good

5. Use the sorted range. It has a unique type, derived from the input, to signify that it has been successfully sorted. Other algorithms may use this knowledge to increase their efficiency.

To benchmark

Let's benchmark the sorting by executing the following steps:

1. Put our range and the skeleton main function from the Getting ready section of this recipe into a file.
2. Use std.datetime.benchmark to test sorting an array built from the appropriate walker against sorting the slow walker directly, and print the results at the end of main:

    auto result = benchmark!(
        { auto sorted = sort(list.walker.array); },
        { auto sorted = sort(list.slowWalker); }
    )(100);
    writefln("Emulation resulted in a sort that was %d times slower.",
        result[1].hnsecs / result[0].hnsecs);

3. Run it. Your results may vary slightly, but you'll see that the emulated, inappropriate range functions are consistently slower. The following is the output:

    Emulation resulted in a sort that was 16 times slower.

4. Tweak the size of the list by changing the initialization loop. Instead of 1000 entries, try 2000 entries. Also, try compiling the program with inlining and optimization turned on (dmd -inline -O yourfile.d) and see the difference. The emulated version will be consistently slower, and as the list becomes longer, the gap will widen. On my computer, a growing list size led to a growing slowdown factor, as shown in the following table:

    List size | Slowdown factor
    500       | 13
    1000      | 16
    2000      | 29
    4000      | 73

How it works…

The interface to Phobos' main sort function hides much of the complexity of the implementation.
As long as we follow the efficiency rules when writing our ranges, things either just work, or fail to compile, telling us we must call array on the range before we can sort it. Building an array has a cost in both time and memory, which is why it isn't performed automatically (std.algorithm prefers lazy evaluation whenever possible for best speed and minimum memory use). However, as you can see in our benchmark, building an array is much cheaper than emulating unsupported functions.

The sort algorithms require a full-featured range and will modify the range you pass instead of allocating memory for a copy. Thus, the range you pass must support random access, slicing, and either assignable or swappable elements. The prime example of such a range is a mutable array. This is why it is often necessary to use the array function when passing data to sort.

Our linked-list code used static if with a compile-time parameter as a configuration tool. The implemented functions include opSlice and properties that return ref. A ref value can only be used on function return values or parameters. Assignments to a ref value are forwarded to the original item. The opSlice function is called when the user uses the slice syntax: obj[start .. end].

Inside the beSlow condition, we broke the main rule of implementing range functions: avoid loops. Here, we see the consequences of breaking that rule; it defeated the algorithm's restrictions and optimizations, resulting in code that performs very poorly. If we follow the rules, we at least know where a performance problem will arise and can handle it gracefully. For ranges that do not implement the fast length property, std.algorithm includes a function called walkLength that determines the length by looping through all items (as we did in the slow length property).
The walkLength function has a longer name than length precisely to warn you that it is slower, running in O(n) (linear in length) time instead of O(1) (constant) time. Slower functions are OK; they just need to be explicit so that the user isn't surprised.

See also

The std.algorithm module also includes other sorting algorithms that may fit a specific use case better than the generic (automatically specialized) function. See the documentation at http://dlang.org/phobos/std_algorithm.html for more information.

Searching ranges

Phobos' std.algorithm module includes search functions that can work on any range. It automatically specializes based on type information. Searching a sorted range is faster than searching an unsorted one.

How to do it…

Searching has a number of different scenarios, each with different methods:

- If you want to know whether something is present, use canFind.
- Finding an item generically can be done with the find function. It returns the remainder of the range, with the located item at the front.
- When searching for a substring in a string, you can use haystack.find(boyerMooreFinder(needle)). This uses the Boyer-Moore algorithm, which may give better performance.
- If you want to know the index where the item is located, use countUntil. It returns a numeric index into the range, just like the indexOf function for strings.
- Each find function can take a predicate to customize the search operation.
- When you know your range is sorted but the type doesn't already prove it, you may call assumeSorted on it before passing it to the search functions. The assumeSorted function has no runtime cost; it only adds information to the type, which is used for compile-time specialization.

How it works…

The search functions in Phobos make use of the ranges' available features to choose good-fit algorithms. Pass them efficiently implemented ranges with accurate capabilities to get the best performance.
The find function returns the remainder of the data because this is the most general behavior: it doesn't need random access, unlike returning an index, and it doesn't require an additional function if you are implementing a function that splits a range on a given condition. The find function can work with a basic input range, serving as a foundation on which to implement whatever you need, and it will transparently optimize to use more range features if they are available.

Using functional tools to query data

The std.algorithm module includes a variety of higher-order ranges that provide tools similar to functional-programming tools. Here, we'll see how D code can be similar to a SQL query. Consider the following SQL query:

    SELECT id, name, strcat("Title: ", title)
    FROM users
    WHERE name LIKE 'A%'
    ORDER BY id DESC
    LIMIT 5;

How would we express something similar in D?

Getting ready

Let's create a struct to mimic the data table and fill an array with some demo information:

    struct User {
        int id;
        string name;
        string title;
    }

    User[] users;
    users ~= User(1, "Alice", "President");
    users ~= User(2, "Bob", "Manager");
    users ~= User(3, "Claire", "Programmer");

How to do it…

Let's use functional tools to query data by executing the following steps:

1. Import std.algorithm.
2. Use sort to translate the ORDER BY clause. If your dataset is large, you may wish to sort at the end instead. This will likely require a call to array, but it will only sort the result set instead of everything. With a small dataset, sorting early saves an array allocation.
3. Use filter to implement the WHERE clause.
4. Use map to implement the field selection and functions. The std.typecons.tuple module can also be used to return specific fields.
5. Use std.range.take to implement the LIMIT clause.
6. Put it all together and print the result:

    import std.algorithm;
    import std.range;
    import std.typecons : tuple; // we use this below

    auto resultSet = users.
        sort!((a, b) => a.id > b.id).
                                                      // the ORDER BY clause
        filter!((item) => item.name.startsWith("A")). // the WHERE clause
        take(5).                                      // the LIMIT clause
        map!((item) => tuple(item.id, item.name,
            "Title: " ~ item.title));                 // the field list and transformations

    import std.stdio;
    foreach(line; resultSet)
        writeln(line[0], " ", line[1], " ", line[2]);

It will print the following output:

    1 Alice Title: President

How it works…

Many SQL operations and list comprehensions can be expressed in D using building blocks from std.algorithm. They all work in generally the same way: they take a predicate as a compile-time argument; the predicate is passed one or two items at a time, and you perform a check or transformation on them. Chaining functions together with the dot syntax, as we did here, is possible thanks to uniform function call syntax. It could also be rewritten as take(5, filter!pred(map!pred(users))). It depends on the author's preference, as both styles work exactly the same way.

It is important to remember that all std.algorithm higher-order ranges are evaluated lazily. This means no computations, such as looping over or printing, are actually performed until they are required. Writing code using filter, take, map, and many other functions is akin to preparing a query. To execute it, you may print or loop over the result; if you want to save it to an array for later use, simply call .array at the end.

There's more…

The std.algorithm module also includes other classic functions, such as reduce. It works the same way as the others. D has a feature called pure functions. The functions in std.algorithm are conditionally pure, which means they can be used in pure functions if and only if the predicates you pass are also pure. With lambda functions, as we've been using here, the compiler will often deduce this for you automatically. If you define other functions as predicates and want to use them in a pure function, be sure to mark them pure as well.
See also

Visit http://dconf.org/2013/talks/wilson.html, where Adam Wilson's DConf 2013 talk on porting C# to D shows how to translate some real-world LINQ code to D.

Summary

In this article, we learned how to sort ranges efficiently using sorting algorithms, how to search a range using different functions, and how to use functional tools to query data (similar to a SQL query).