How-To Tutorials - Programming

1083 Articles

Extracting data using DOM (Must know)

Packt
12 Sep 2013
5 min read
Getting ready

This section parses the content of the page at http://jsoup.org. An index.html file is provided in the project if you want to use a file as input instead of connecting to the URL.

How to do it...

By viewing the source code of this HTML page, we know the site structure. The jsoup library supports DOM navigation well; it provides ways to find elements and extract their contents efficiently.

1. Create the Document class structure by connecting to the URL:

   Document doc = Jsoup.connect("http://jsoup.org").get();

2. Navigate to the menu tag whose class is nav-sections:

   Elements navDivTag = doc.getElementsByClass("nav-sections");

3. Get the list of all menu tags that are owned by <a>:

   Elements list = navDivTag.get(0).getElementsByTag("a");

4. Extract the content from each Element in the menu list:

   for (Element menu : list) {
       System.out.print(String.format("[%s]", menu.html()));
   }

After running the code, each menu label is printed in square brackets. The complete example source code for this section is placed at sourceSection02. The API reference for this section is available at http://jsoup.org/apidocs/org/jsoup/nodes/Element.html.

How it works...

Let's have a look at the navigation structure:

html > body.n1-home > div.wrap > div.header > div.nav-sections > ul > li.n1-news > a

The div class="nav-sections" tag is the parent of the navigation section, so calling getElementsByClass("nav-sections") moves to this tag. Since there is only one tag with this class value in this example, we only need the first found element, which we get at index 0 (the first item of the results):

Elements navDivTag = doc.getElementsByClass("nav-sections");

The Elements object in jsoup represents a collection (Collection<>) or a list (List<>); therefore, you can easily iterate through it to get each element, known as an Element object. When at a parent tag, there are several ways to get to the children: navigate from the <ul> subtag, deeper to each <li> tag, and then to the <a> tag, or directly query for all the <a> tags. The latter is how we retrieved the list, as shown in the following code:

Elements list = navDivTag.get(0).getElementsByTag("a");

The final part prints the extracted HTML content of each <a> tag. Beware of the list value: even if the navigation fails to find any element, it is never null, so it is good practice to check the size of the list before doing anything else with it. Additionally, the Element.html() method returns the HTML content of a tag.
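As a small, self-contained variation on the steps above (this sketch is not part of the original recipe; the class name is made up and the selectors simply mirror the example), the following code checks the size of the result before indexing into it, as recommended above, and prints both the html() and text() content of each link:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class MenuExtractor {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("http://jsoup.org").get();
        Elements navDivTag = doc.getElementsByClass("nav-sections");

        // The result is never null, so check the size before using index 0.
        if (navDivTag.isEmpty()) {
            System.out.println("Navigation section not found.");
            return;
        }

        Elements links = navDivTag.get(0).getElementsByTag("a");
        for (Element link : links) {
            // html() keeps inner markup; text() strips all tags.
            System.out.println(link.html() + " | " + link.text());
        }
    }
}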
There's more...

jsoup is quite a powerful library for DOM navigation. Besides the methods listed here, other ways to find and extract elements are also supported in the Element class. The following are the common methods for DOM navigation:

- getElementById(String id): Finds an element by ID, including its children.
- getElementsByTag(String c): Finds elements with the specified tag name (in this case, c), including and recursively under the element that calls this method.
- getElementsByClass(String className): Finds elements that have this class, including or under the element that calls this method. Case insensitive.
- getElementsByAttribute(String key): Finds elements that have a named attribute set. Case insensitive. This method has several relatives, such as getElementsByAttributeStarting(String keyPrefix), getElementsByAttributeValue(String key, String value), and getElementsByAttributeValueNot(String key, String value).
- getElementsMatchingText(Pattern pattern): Finds elements whose text matches the supplied regular expression.
- getAllElements(): Finds all elements under the specified element (including itself and children of children).

We also need to mention the methods used to extract content from an HTML element. The following are the common methods for extracting content:

- id(): Retrieves the ID value of an element.
- className(): Retrieves the class name value of an element.
- attr(String key): Gets the value of a specific attribute.
- attributes(): Retrieves all the attributes.
- html(): Retrieves the inner HTML value of an element.
- data(): Retrieves the data content, usually applied to get content from the <script> and <style> tags.
- text(): Retrieves the text content. This method returns the combined text of all inner children with all HTML tags removed, while the html() method returns everything between the element's opening and closing tags.
- tag(): Retrieves the tag of the element.

Summary

In this article we saw how to extract data from an HTML page using the DOM, and that jsoup is quite a powerful library for DOM navigation.

Resources for Article:

Further resources on this subject:
- HTML5 Presentations - creating our initial presentation [Article]
- Building HTML5 Pages from Scratch [Article]
- JBoss Tools Palette [Article]


Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationships are hard to maintain even for a mid-sized PHP/MySQL application, particularly when multiple levels of relationships are involved, because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping', or ORM, to handle database relationships with ease.

In CakePHP, relations between the database tables are defined through associations—a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use their wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity—no need to write complex SQL queries with multiple JOINs anymore!

In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn:

- How to figure out association types from database table relations
- How to define different types of associations in CakePHP models
- How to utilize the associations for fetching related model data
- How to relate associated data while saving

There are basically three types of relationship that can take place between database tables:

- one-to-one
- one-to-many
- many-to-many

The first two of them are simple, as they don't require any additional table to relate the tables in the relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations.

Defining a One-To-Many Relationship in Models

To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books, and the relation between authors and books is one-to-many. This means an author can have multiple books, but a book belongs to only one author (which is rather absurd, as in a real-life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly.

Time for Action: Defining a One-To-Many Relation

1. Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like, but rename the cake folder to relationship.
2. Configure the database in the new Cake installation.
3. Execute the following SQL statements in the database to create a table named authors:

   CREATE TABLE `authors` (
       `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
       `name` varchar( 127 ) NOT NULL ,
       `email` varchar( 127 ) NOT NULL ,
       `website` varchar( 127 ) NOT NULL
   );

4. Create a books table in our database by executing the following SQL commands:

   CREATE TABLE `books` (
       `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
       `isbn` varchar( 13 ) NOT NULL ,
       `title` varchar( 64 ) NOT NULL ,
       `description` text NOT NULL ,
       `author_id` int( 11 ) NOT NULL
   );

5. Create the Author model using the following code (/app/models/authors.php):

   <?php
   class Author extends AppModel {
       var $name = 'Author';
       var $hasMany = 'Book';
   }
   ?>

6. Use the following code to create the Book model (/app/models/books.php):

   <?php
   class Book extends AppModel {
       var $name = 'Book';
       var $belongsTo = 'Author';
   }
   ?>

7. Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

   <?php
   class AuthorsController extends AppController {
       var $name = 'Authors';
       var $scaffold;
   }
   ?>

8. Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

   <?php
   class BooksController extends AppController {
       var $name = 'Books';
       var $scaffold;
   }
   ?>

9. Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We have created two tables, authors and books, for storing author and book information. A foreign key named author_id was added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named: as an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model, and as a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.

In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations are working.
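The article's goals include fetching related model data through these associations. As a brief sketch that is not part of the original text (the action name is arbitrary), a controller action such as the following would return each author together with a nested 'Book' array, because find() follows the hasMany association automatically:

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    function index() {
        // find('all') follows the hasMany association, so every author row
        // comes back with an associated array of that author's books.
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>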


An overview of architecture and modeling in Cassandra

Packt
21 Jan 2014
5 min read
Cassandra uses a peer-to-peer architecture, unlike a master-slave architecture, which is prone to single point of failure (SPOF) problems. Cassandra is deployed on multiple machines, with each machine acting as a node in a cluster. Data is autosharded, that is, automatically distributed across nodes using key-based sharding, which means that the keys are used to distribute the data across the cluster. Each key-value data element in Cassandra is replicated across the cluster on other nodes (the default replication factor is 3) for high availability and fault tolerance. If a node goes down, the data can be served from another node holding a copy of the original data.

Sharding is an old concept used for distributing data across different systems. Sharding can be horizontal or vertical. In horizontal sharding, in the case of an RDBMS, data is distributed on the basis of rows, with some rows residing on a single machine and the other rows residing on other machines. Vertical sharding is similar to columnar storage, where columns can be stored separately in different locations. The Hadoop Distributed File System (HDFS) uses data-volume-based sharding, where a single big file is sharded and distributed across multiple machines using the block size. So, as an example, if the block size is 64 MB, a 640 MB file will be split into 10 chunks and placed on multiple machines.

The same autosharding capability is used when new nodes are added to Cassandra, where the new node becomes responsible for a specific key range of data. The details of which node holds which key ranges are coordinated and shared across the cluster using the gossip protocol. So, whenever a client wants to access a specific key, each node can locate the key and its associated data quickly, within a few milliseconds. When the client writes data to the cluster, the data will be written to the nodes responsible for that key range. However, if a node responsible for that key range is down or not reachable, Cassandra uses a clever solution called Hinted Handoff that allows the data to be managed by another node in the cluster and to be written back to the responsible node once that node is back in the cluster.

The replication of data raises the concern of data inconsistency, when the replicas might have different states for the same data. Cassandra uses mechanisms such as anti-entropy and read repair to solve this problem and synchronize data across the replicas. Anti-entropy is used at the time of compaction, where compaction is a concept borrowed from Google BigTable. Compaction in Cassandra refers to the merging of SSTables and helps in optimizing data storage and increasing read performance by reducing the number of seeks across SSTables. Another problem that compaction solves is handling deletion in Cassandra. Unlike traditional RDBMS, all deletes in Cassandra are soft deletes, which means that the records still exist in the underlying data store but are marked with a special flag so that these deleted records do not appear in query results. The records marked as deleted are called tombstone records. Major compactions handle these soft deletes or tombstones by removing them from the SSTables in the underlying file stores.

Cassandra, like Dynamo, uses a Merkle tree data structure to represent the data state at a column family level in a node. This Merkle tree representation is used during major compactions to find the differences in the data states across nodes and reconcile them.
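As a concrete illustration of the replication factor discussed above (this snippet is not from the original article; the keyspace and table names are made up), replication is configured per keyspace in CQL:

-- Keyspace and table names are illustrative only.
CREATE KEYSPACE demo
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

CREATE TABLE demo.users (
    user_id text PRIMARY KEY,
    name    text
);

-- Each row inserted here is replicated to three nodes; if one of them is
-- down, Hinted Handoff lets another node hold the write until it returns.
INSERT INTO demo.users (user_id, name) VALUES ('u1', 'Alice');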
The Merkle tree, or hash tree, is a data structure in the form of a tree where every non-leaf node is labeled with the hash of its child nodes, allowing efficient and secure verification of the contents of a large data structure.

Cassandra, like Dynamo, falls under the AP part of the CAP theorem and offers a tunable consistency level. Cassandra provides multiple consistency levels, as illustrated in the following table:

Operation | ZERO | ANY | ONE | QUORUM | ALL
Read | Not supported | Not supported | Reads from one node | Reads from a majority of nodes with replicas | Reads from all the nodes with replicas
Write | Asynchronous write | Writes on one node including hints | Writes on one node with commit log and Memtable | Writes on a majority of nodes with replicas | Writes on all the nodes with replicas

A summary of the features in Cassandra

The following summarizes the key features of Cassandra with respect to its origins in Google BigTable and Amazon Dynamo:

- Architecture: peer-to-peer, ring-based deployment architecture (BigTable: no; Dynamo: yes)
- Data model: multidimensional map (row, column, timestamp) -> bytes (BigTable: yes; Dynamo: no)
- CAP theorem: AP with tunable consistency (BigTable: no; Dynamo: yes)
- Storage architecture: SSTables, Memtables (BigTable: yes; Dynamo: no)
- Storage layer: local filesystem storage (BigTable: no; Dynamo: no)
- Fast reads and efficient storage: Bloom filters, compactions (BigTable: yes; Dynamo: no)
- Programming language: Java (BigTable: no; Dynamo: yes)
- Client programming languages: multiple languages supported, such as Java, PHP, Python, REST, C++, and .NET (BigTable: not known; Dynamo: not known)
- Scalability model: horizontal scalability; multiple-node deployment rather than a single-machine deployment (BigTable: yes; Dynamo: yes)
- Version conflicts: timestamp field (not a vector clock, as usually assumed) (BigTable: no; Dynamo: no)
- Hard deletes/updates: data is always appended using the timestamp field—deletes/updates are soft appends and are cleaned asynchronously as part of major compactions (BigTable: yes; Dynamo: no)

Summary

Cassandra packs the best features of two technologies proven at scale: Google BigTable and Amazon Dynamo. However, today Cassandra has evolved beyond these origins with new, unique, and enterprise-ready features such as the Cassandra Query Language (CQL), support for collection columns, lightweight transactions, and triggers.

Resources for Article:

Further resources on this subject:
- Basic Concepts and Architecture of Cassandra [Article]
- About Cassandra [Article]
- Getting Started with Apache Cassandra [Article]


Visual Studio 2008 Test Types

Packt
22 Oct 2009
15 min read
Software testing in Visual Studio Team System 2008

Before going into the details of actual testing using Visual Studio 2008, we need to understand the different tools provided by Visual Studio Team System (VSTS) and their usage. Once we understand the tools' usage, we will be able to perform different types of testing using VSTS. As we create a number of different tests, we will encounter difficulty in managing them, similar to managing the code and its different versions during application development. There are features such as the Test List Editor, the Test View, and Team Foundation Server (TFS) for managing and maintaining all the tests created using VSTS. Using the Test List Editor, we can group similar tests, create any number of lists, and add or delete tests from a list.

The other aspect of this article is to look at the different file types that get created in Visual Studio during testing. Most of these files are in XML format and get created automatically whenever the corresponding test is created. Tools such as the Team Explorer, Code Coverage, Test View, and Test Results windows are not new to Visual Studio 2008; they have actually been available since Visual Studio 2005. While we go through the windows and their purposes, we can check the IDE and the tools' integration into Visual Studio 2008.

Testing as part of the Software Development Life Cycle

The main objective of testing is to find defects early in the SDLC. If a defect is found early, the cost will be less, but if it is found during the production or implementation stage, the cost will be higher. Moreover, testing is carried out to assure the quality and reliability of the software. In order to find defects earlier, the testing activities should start early, that is, in the Requirements phase of the SDLC, and continue till the end of the SDLC.

In the Coding phase, various testing activities take place. Based on the design, the developers start coding the modules. Static and dynamic testing is carried out by the developers. Code reviews and code walkthroughs are also conducted. Once the coding is completed, the Validation phase follows, where different phases or forms of testing are performed:

- Unit Testing: This is the first stage of testing in the SDLC. It is performed by the developer to check whether the developed code meets the stated requirements. If there are any defects, the developer logs them against the code and fixes the code. The code is retested and then moved to the testers after confirming that the code is without defects for that piece of functionality. This phase identifies a lot of defects and also reduces the cost and time involved in testing the application and fixing the code.
- Integration Testing: This testing is carried out between two or more modules or functions together, with the intent of finding interface defects between them. It is completed as part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. Defects found are logged and later fixed by the developers. There are different ways of integration testing, such as top-down testing and bottom-up testing:
  - The Top-Down approach tests the highest-level components and integrates them first, to test the high-level logic and flow. The low-level components are tested later.
  - The Bottom-Up approach is the exact opposite of the top-down approach. In this case, the low-level functionalities are tested and integrated first, and then the high-level functionalities are tested. The disadvantage of this approach is that the high-level, or most complex, functionalities are tested later.
  - The Umbrella approach uses both the top-down and bottom-up patterns. The inputs for functions are integrated in the bottom-up approach, and then the outputs of the functions are integrated in the top-down approach.
- System Testing: This compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes, system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing. Defects found in this testing are logged and fixed by the developers.
- Regression Testing: This is not a separate phase in the list above, but is carried out once the defects are fixed by the developers. The main objective of this type of testing is to determine whether bug fixes have been successful and have not created any new problems. It is also done to ensure that no degradation of baseline functionality has occurred and to check whether any new functionality was introduced in the software.

Types of testing

Visual Studio provides a range of testing types and tools for software applications. The following are some of those types:

- Unit test
- Manual test
- Web test
- Load test
- Stress test
- Performance test
- Capacity Planning test
- Generic test
- Ordered test

In addition to these types, there are additional tools provided to manage, order, and execute the tests created in Visual Studio. Some of these are the Test View, the Test List Editor, and the Test Results window. We will look at these testing tools and the supporting tools for managing testing in Visual Studio 2008 in detail later.

Unit test

As soon as the developer finishes the code, the developer wants to know whether it is producing the expected result before getting into any more detailed testing or handing over the component to the tester. The type of testing performed by developers to test their own code is called unit testing. Visual Studio has great support for unit testing. The main goal of unit testing is to isolate each piece of code or individual functionality and test whether the method returns the expected result for different sets of parameter values. It is extremely important to run unit tests to catch defects at an early stage.

The methods generated by the automated unit testing tool call the methods in the classes from the source code and test the output of each of the methods by comparing it with the expected values. The unit test tool produces a separate set of test code for the source. Using the test code, we can pass parameter values to the method, test the value returned by the method, and then compare it with the expected result. Unit testing code can easily be created by using the code generation feature, which creates the testing source code for the source application code. The generated unit testing code will contain several attributes to identify the Test Class, Test Method, and Test Project. These attributes are assigned when the unit test code gets generated from the original source code. Then, using this code, the developer has to change the values and assert methods to compare the expected result from these methods.
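As a rough illustration of the attributes and assert methods just described, a generated-and-completed unit test might look like the following sketch. The Calculator class is a made-up example; only the attributes and the Assert API come from the Microsoft.VisualStudio.TestTools.UnitTesting namespace named below.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, included here so the sketch is self-contained.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTest
{
    [TestMethod]
    public void AddReturnsExpectedSum()
    {
        // Arrange: create the class under test.
        Calculator target = new Calculator();

        // Act: call the method with a chosen set of parameter values.
        int actual = target.Add(2, 3);

        // Assert: compare the returned value with the expected result.
        Assert.AreEqual(5, actual);
    }
}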
The unit test class is similar to the other classes in any other project. The good thing here is that we can create new test classes by inheriting a base test class. The base test class contains the common or reusable testing methods. This is the new unit testing feature that helps us reduce code and reuse existing test classes. Whenever any code change occurs, it is easy to figure out the fault with the help of unit tests: rerun those tests and check whether the code gives the intended output. This verifies the code change the developer has made and confirms that it is not affecting other parts of the application. All the methods and classes generated for automated unit testing are inherited from the namespace Microsoft.VisualStudio.TestTools.UnitTesting.

Manual test

Manual testing is the oldest and simplest type of testing, yet it is very crucial for software testing. It requires a tester to run all the tests without any automation tool. It helps us validate whether the application meets the various standards defined for effective and efficient accessibility and usage. Manual testing comes into play in the following scenarios:

- There is not enough budget for automation.
- The tests are more complicated, or are too difficult to convert into automated tests.
- The tests are going to be executed only once.
- There is not enough time to automate the tests.
- Automated tests would be time-consuming to create and run.

Manual tests can be created in either Word document or text format in Visual Studio 2008. They are a way of describing the test steps that should be performed by the tester. Each step should also mention the expected result of testing that step.

Web tests

Web tests are used for testing the functionality of web pages, web applications, web sites, web services, and combinations of all of these. Web tests can be created by recording the interactions performed in the browser; these can then be played back to test the web application. Web tests are normally a series of HTTP requests (GET/POST). Web tests can be used for testing the application's performance as well as for stress testing. During HTTP requests, the web test takes care of testing the web page redirects, validations, viewstate information, authentication, and JavaScript executions.

There are different validation rules and extraction rules used in web testing. The validation rules are used for validating the form field names, texts, and tags in the requested web page. We can validate the results or values against the expected result as per business needs. These validation rules are also used for checking the time taken for the HTTP request. At some point, we may need to extract the data returned by the web pages, either for future use or to collect it for testing purposes. In this case, we have to use the extraction rules for extracting the data returned by the requested page. Using this process, we can extract the form fields, texts, or values in the web page and store them in the web test context or collection.

Web tests cannot be performed with just the existence of a web page. We need some data to be populated from the database or some other source to test the web page's functionality and performance. There is a data binding mechanism used in web tests, which provides the data required for the requested page. We can bind the data from a database or any other data source.
For example, the web page could be a reporting page that requires some query string parameters as well as the data to be shown in the page according to the parameters passed. To provide data for all this data-driven testing, we have to use the concept of data binding with the data source.

Web tests can be classified into simple web tests and coded web tests, both of which are supported by VSTS. Simple web tests are very simple to create and execute; they run on their own as per the recording, and once the test is started there is no further intervention. The disadvantage is that they are not conditional: each is a series of a valid flow of events. Coded web tests are a bit more complex, but provide a lot of flexibility. For example, if we need conditional execution of tests based on some values, then we have to depend on coded web tests. These tests are created using either C# or Visual Basic code, and using the generated code we can control the flow of test events. The disadvantage is their higher complexity and maintenance cost.

Load test

Load testing is a method of testing used alongside other types of testing. The important thing about load testing is that it is about performance. This type of testing is conducted with other types of testing, which means that it can be performed along with either web testing or unit testing.

The main purpose of load testing is to identify the performance of the application based on different scenarios. Most of the time, we can predict the performance of an application that we develop if it runs on one machine or a desktop. But in the case of web applications, such as online ordering systems, we know the estimated maximum number of users, but not the connection speeds or the locations from which the users will access the web site. For such scenarios, the web application should support all the end users with good performance, irrespective of the system they use, their Internet connection, their location, and the tool they use to access the web site. So before we release this web site to the customers or end users, we should check the performance of the application so that it can support the mass of end users. This is where load testing is very useful, testing the application along with a web test or unit test.

When a web test is added to a load test, it will simulate multiple users opening simultaneous connections to the same web application and making multiple HTTP requests. Load testing in Visual Studio comes with lots of properties that can be set to test the web application with different browsers, different user profiles, light loads, and heavy loads. Results of different tests can be saved in a repository to compare sets of results and improve performance.

In the case of client-server and multi-tier applications, we will have a lot of components that reside on the server and serve the client requests. To measure the performance of these components, we have to make use of a load test with a set of unit tests. One good example would be testing a data access service component that calls a stored procedure in the backend database and returns the results to the application using that service. Load tests can be run either from the local machine or by submitting them to a rig, which is a group of computers used for simulating the tests remotely. A rig consists of a single controller and one or more agents.
Load tests can be used in different scenarios of testing:

- Stress testing: This checks the functionality of the application under heavy load. The resources provided to the application can vary based on the input file size or the size of the data set; for example, uploading a file that is more than 50 MB in size.
- Smoke testing: This checks whether the application performs well for a short duration with a light load.
- Performance testing: This checks the responsiveness and throughput of the application with different loads.
- Capacity Planning test: This checks the application's performance with various capacities.

Ordered test

As we know, different types of testing are required to build quality software. We take care of running all these tests for the applications we develop, but we also have an order in which to execute them. For example, we do the unit testing first, then the integration test, then the smoke test, and then we go for the functional test. We can order the execution of these tests using Visual Studio. Another example would be testing the configuration of the application before actually testing its functionality. If we don't order the tests, we would never know whether the end result is correct. Sometimes, the tests will not go through successfully if they are not run in order.

Ordering of tests is done using the Test View window in Visual Studio. We can list all the available tests in the Test View, choose the tests in the required order using the different options provided by Visual Studio, and then run the tests. Visual Studio takes care of running the tests in the same order we have chosen in the list. Once we are able to run the tests successfully in an order, we can also expect the same ordering in the results. Visual Studio provides the results of all the tests in a single row in the Test Results window. This single-row result actually contains the results of all the tests run in the ordered test; we can just double-click it to get the details of each test run. An ordered test is the best way of controlling the tests and running them in order.

Generic test

We have seen different types and ways of testing applications using VSTS. There are situations where we might end up having to test applications that were not developed using Visual Studio. We might have only the executables or binaries for those applications, and we may not have a supported testing tool for them. This is where we need the generic testing method. It is simply a way of testing third-party applications using Visual Studio. Generic tests are used to wrap the existing tests; once the wrapping is done, a generic test is just another test in VSTS. Using Visual Studio, we can collect the test results and gather code coverage data too. We can manage and run generic tests in Visual Studio just like the other tests.


Getting Started with Odoo Development

Packt
06 Apr 2015
14 min read
In this article by Daniel Reis, author of the book Odoo Development Essentials, we will see how to get started with Odoo. Odoo is a powerful open source platform for business applications. A suite of closely integrated applications was built on it, covering all business areas from CRM and Sales to Accounting and Stocks. Odoo has a dynamic and growing community around it, constantly adding features, connectors, and additional business apps; many can be found at Odoo.com. In this article, we will guide you through installing Odoo from the source code and creating your first Odoo application. Inspired by the todomvc.com project, we will build a simple to-do application. It should allow us to add new tasks, mark them as completed, and finally, clear the task list of all already completed tasks.

Installing Odoo from source

We will use a Debian/Ubuntu system for our Odoo server, so you will need to have one installed and available to work on. If you don't have one, you might want to set up a virtual machine with a recent version of Ubuntu Server before proceeding. For a development environment, we will install Odoo directly from its Git repository. This gives us more control over versions and updates.

We need to make sure Git is installed. In the terminal, type the following commands:

$ sudo apt-get update && sudo apt-get upgrade  # Update system
$ sudo apt-get install git                     # Install Git

To keep things tidy, we will keep all our work in an odoo-dev directory inside our home directory:

$ mkdir ~/odoo-dev  # Create a directory to work in
$ cd ~/odoo-dev     # Go into our work directory

Now, we can use this script to install Odoo from source code on a Debian system:

$ git clone https://github.com/odoo/odoo.git -b 8.0 --depth=1
$ ./odoo/odoo.py setup_deps  # Installs Odoo system dependencies
$ ./odoo/odoo.py setup_pg    # Installs PostgreSQL & db superuser

Quick start an Odoo instance

In Odoo 8.0, we can create a directory and quick start a server instance for it. We start by creating a directory called todo-app for our instance, as shown here:

$ mkdir ~/odoo-dev/todo-app
$ cd ~/odoo-dev/todo-app

Now we can create our todo_minimal module in it and initialize the Odoo instance:

$ ~/odoo-dev/odoo/odoo.py scaffold todo_minimal
$ ~/odoo-dev/odoo/odoo.py start -i todo_minimal

The scaffold command creates a module directory using a predefined template. The start command creates a database with the current directory name and automatically adds the directory to the addons path so that its modules are available to be installed. Additionally, we used the -i option to also install our todo_minimal module. It will take a moment to initialize the database, and eventually we will see an INFO log message, "Modules loaded." Then the server will be ready to listen to client requests.

By default, the database is initialized with demonstration data, which is useful for development databases. Open http://<server-name>:8069 in your browser to be presented with the login screen. The default administrator account is admin with the password admin. Whenever you want to stop the Odoo server instance and return to the command line, press CTRL + C.

If you are hosting Odoo in a virtual machine, you might need to do some network configuration to be able to use it as a server. The simplest solution is to change the VM network type from NAT to Bridged; your virtualization software documentation should help you find the appropriate setup.
Creating the application models

Now that we have an Odoo instance and a new module to work with, let's start by creating the data model. Models describe business objects, such as an opportunity, a sales order, or a partner (customer, supplier, and so on). A model has data fields and can also define specific business logic. Odoo models are implemented using a Python class derived from a template class. They translate directly to database objects, and Odoo automatically takes care of that when installing or upgrading the module. Let's edit the models.py file in the todo_minimal module directory so that it contains this:

# -*- coding: utf-8 -*-
from openerp import models, fields, api

class TodoTask(models.Model):
    _name = 'todo.task'
    name = fields.Char()
    is_done = fields.Boolean()
    active = fields.Boolean(default=True)

Our to-do tasks will have a name title text, a done flag, and an active flag. The active field has a special meaning for Odoo: by default, records with a False value in it won't be visible to the user. We will use it to clear the tasks out of sight without actually deleting them.

Upgrading a module

For our changes to take effect, the module has to be upgraded. The simplest and fastest way to make all our changes to a module effective is to go to the terminal window where you have Odoo running, stop it (CTRL + C), and then restart it, requesting the module upgrade. To start the server upgrading the todo_minimal module in the todo-app database, use the following commands:

$ cd ~/odoo-dev/todo-app  # we should be in the right directory
$ ./odoo.py start -u todo_minimal

The -u option performs an upgrade on a given list of modules. In this case, we upgrade just the todo_minimal module. Developing a module is an iterative process. You should make your changes in gradual steps and frequently install them with a module upgrade. Doing so will make it easier to detect mistakes sooner and to narrow down the culprit in case an error message is not clear enough, which can happen frequently when starting with Odoo development.

Adding menu options

Now that we have a model to store our data, let's make it available on the user interface. All we need is to add a menu option to open the to-do task model so that it can be used. This is done using an XML data file. Let's reuse the templates.xml data file and edit it so that it looks like this:

<openerp>
  <data>
    <act_window id="todo_task_action"
                name="To-do Task Action"
                res_model="todo.task" view_mode="tree,form" />
    <menuitem id="todo_task_menu"
              name="To-do Tasks"
              action="todo_task_action"
              parent="mail.mail_feeds"
              sequence="20" />
  </data>
</openerp>

Here, we have two records: a menu option and a window action. The Communication top menu was added to the user interface by the mail module dependency. We can find the identifier of the specific menu option where we want to add our own menu option by inspecting that module; it is mail_feeds. Our menu option executes the todo_task_action action we created, and that window action opens a tree view for the todo.task model. If we upgrade the module now and try the menu option just added, it will open an automatically generated view for our model, allowing us to add and edit records.
Views should be defined for models to be exposed to the users, but Odoo is nice enough to do that automatically if we don't, so we can work with our model right away without having any form or list views defined yet. So far so good; let's improve our user interface now.

Creating views

Odoo supports several types of views, but the more important ones are list (also known as "tree"), form, and search views. For our simple module, we will just add a list view. Edit the templates.xml file to add the following <record> element just after the <data> opening tag at the top:

<record id="todo_task_tree" model="ir.ui.view">
    <field name="name">To-do Task Form</field>
    <field name="model">todo.task</field>
    <field name="arch" type="xml">
        <tree editable="top" colors="gray:is_done==True">
            <field name="name" />
            <field name="is_done" />
        </tree>
    </field>
</record>

This creates a tree view for the todo.task model with two columns: the title name and the is_done flag. Additionally, it has a color rule to display the tasks done in gray.

Adding business logic

We want to add business logic to be able to clear the already completed tasks. Our plan is to add an option on the More button, shown at the top of the list when we select lines. We will use a very simple wizard for this, opening a confirmation dialog where we can execute a method to inactivate the done tasks. Wizards use a special type of model for temporary data: a Transient model. We will now add it to the models.py file as follows:

class TodoTaskClear(models.TransientModel):
    _name = 'todo.task.clear'

    @api.multi
    def do_clear_done(self):
        Task = self.env['todo.task']
        done_recs = Task.search([('is_done', '=', True)])
        done_recs.write({'active': False})
        return True

Transient models work just like regular models, but their data is temporary and will eventually be purged from the database. In this case, we don't need any fields, since no additional input is going to be asked of the user. It just has a method that will be called when the confirmation button is pressed. It lists all tasks that are done and then sets their active flag to False.

Next, we need to add the corresponding user interface. In the templates.xml file, add the following code:

<record id="todo_task_clear_dialog" model="ir.ui.view">
    <field name="name">To-do Clear Wizard</field>
    <field name="model">todo.task.clear</field>
    <field name="arch" type="xml">
        <form>
            All done tasks will be cleared, even if
            unselected.<br/>Continue?
            <footer>
                <button type="object"
                        name="do_clear_done"
                        string="Clear"
                        class="oe_highlight" />
                or <button special="cancel"
                           string="Cancel"/>
            </footer>
        </form>
    </field>
</record>

<!-- More button Action -->
<act_window id="todo_task_clear_action"
    name="Clear Done"
    src_model="todo.task"
    res_model="todo.task.clear"
    view_mode="form"
    target="new" multi="True" />

The first record defines the form for the dialog window. It has a confirmation text and two buttons in the footer: Clear and Cancel. The Clear button, when pressed, will call the do_clear_done() method defined earlier. The second record is an action that adds the corresponding option to the More button for the to-do tasks model.
Configuring security

Finally, we need to set the default security configurations for our module. These configurations are usually stored inside the security/ directory, and we need to add them to the __openerp__.py manifest file. Change the data attribute to the following:

'data': [
    'security/ir.model.access.csv',
    'security/todo_access_rules.xml',
    'templates.xml'],

The access control lists are defined for models and user groups in the ir.model.access.csv file. There is a pre-generated template; edit it to look like this:

id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_todo_task_user,To-do Task User Access,model_todo_task,base.group_user,1,1,1,1

This gives full access to all users in the base group named Employees. However, we want each user to see only their own to-do tasks. For that, we need a record rule setting a filter on the records the base group can see. Inside the security/ directory, add a todo_access_rules.xml file to define the record rule:

<openerp>
  <data>
    <record id="todo_task_user_rule" model="ir.rule">
        <field name="name">ToDo Tasks only for owner</field>
        <field name="model_id" ref="model_todo_task"/>
        <field name="domain_force">[('create_uid','=',user.id)]</field>
        <field name="groups"
               eval="[(4, ref('base.group_user'))]"/>
    </record>
  </data>
</openerp>

This is all we need to set up the module security.

Summary

We created a new module from the start, covering the most frequently used elements in a module: models, user interface views, business logic in model methods, and access security. In the process, we became familiar with the module development process, involving module upgrades and application server restarts to make gradual changes effective in Odoo.

Resources for Article:

Further resources on this subject:
- Making Goods with Manufacturing Resource Planning [article]
- Machine Learning in IPython with scikit-learn [article]
- Administrating Solr [article]


An Overview of Oracle Advanced Pricing

Packt
23 Nov 2010
4 min read
This article is based on the book Oracle E-Business Suite R12 Supply Chain Management, which shows how to drive your supply chain processes with Oracle E-Business R12 Supply Chain Management to achieve measurable business gains: put supply chain management principles into practice with Oracle EBS SCM, develop insight into the process and business flow of supply chain management, set up all of the Oracle EBS SCM modules to automate your supply chain processes, and learn through a case study how an Oracle EBS implementation takes place.

Oracle Advanced Pricing is the pricing engine for the Oracle E-Business Suite. This pricing engine works using the following scenario:

- What: This determines the context of the product, which is finalized by the product attribute—all items, item category, or item code.
- Who: This is the qualifier, which tells us who will be charged. At this step, the qualifier decides which modifier will give the price.
- How: This shows how the modifiers will be applied for the selected qualifier. These modifiers can be used to apply discounts at sales, promotions, special duties, and charges for special customers or special locations, and so on.

After these three steps, prices for an item are finalized by the pricing engine.

The key functionalities of Oracle Advanced Pricing

The key functionalities of Oracle Advanced Pricing include the following:

- Defining and assigning rules for pricing products.
- Applying different types of discounts and surcharges to pricing.
- Creating a price list for different pricing criteria.
- Creating formulas to calculate pricing.
- Creating conversion rates for the usage of multiple currencies.
- Integration with different EBS modules for optimized pricing.
- Supporting the TCA party hierarchy for price lists.
- Managing all business scenarios efficiently through the effective use of qualifiers, modifiers, and formulas.
- Targeting a specific item definition with the help of the pricing attribute.
- Making our own rules using qualifiers; for example, if today is Saturday, then there will be a 15 percent discount on the product.
- Multiple levels of responsibility, such as pricing administrator, manager, and pricing user.

Oracle Advanced Pricing process

The Oracle Advanced Pricing process normally initiates when a price for an item is created in the price list and the price for the item is called by the application. The qualifier and pricing attribute are used to select the eligible price or modifier. The price, or the modified price adjustment in the form of a discount or surcharge, is applied and the final price is obtained. This final price is then applied to the item in the requesting application.

Price list

The price list is the list of prices for different items and products. Each price list can have one or more price lines for an item. It contains the qualifier and pricing attributes. The prices of items in a price list can be constant values that are picked up at the time of ordering. These prices can also be derived using formulas and percentages.

Qualifier

Qualifiers are rules that control who will be priced. Qualifiers contain the qualifier context and qualifier attribute, which create a logical grouping and explain who is eligible for these prices. Qualifier attributes can be order type, source type, order category, customer PO, and so on. In qualifiers, we have operators that can create a condition, such as equal to, between, not equal to, and so on.

Modifiers

Modifiers allow us to adjust prices.
Using a modifier, we can either increase or decrease the prices on the current price list through price adjustments, surcharges, promotions, and discounts; the values available to us come from a list type code with a system access level.

Formulas

In Oracle Advanced Pricing, formulas are used to price items. These formulas contain the arithmetic and mathematical expressions used by the pricing process; using them, arithmetic equations provide the final price of items. If a formula is associated with any price list, then we cannot use constant, absolute values for that particular item.

Integration of Oracle Advanced Pricing with other modules

Oracle Advanced Pricing is fully integrated with other Oracle E-Business Suite modules. The following modules are integrated with Oracle Advanced Pricing:

- Oracle Purchasing
- Oracle Order Management
- Oracle Service Contracts
- Oracle Sales Contracts
- Oracle iStore
- Oracle Transportation

Documenting Your Python Project-part2

Packt
12 Oct 2009
11 min read
Building the Documentation

An easier way to guide your readers and your writers is to provide each of them with helpers and guidelines, as we have learned in the previous section of this article. From a writer's point of view, this is done by having a set of reusable templates together with a guide that describes how and when to use them in a project. It is called a documentation portfolio. From a reader's point of view, being able to browse the documentation with no pain, and getting used to finding the information efficiently, is done by building a document landscape.

Building the Portfolio

There are many kinds of documents a software project can have, from low-level documents that refer directly to the code, to design papers that provide a high-level overview of the application. For instance, Scott Ambler defines an extensive list of document types in his book Agile Modeling (http://www.agilemodeling.com/essays/agileArchitecture.htm). He builds a portfolio from early specifications to operations documents. Even the project management documents are covered, so the whole set of documentation needs is built with a standardized set of templates.

Since a complete portfolio is tightly related to the methodologies used to build the software, this article will only focus on a common subset that you can complete with your specific needs. Building an efficient portfolio takes a long time, as it captures your working habits. A common set of documents in software projects can be classified in three categories:

- Design: All documents that provide architectural information and low-level design information, such as class diagrams or database diagrams
- Usage: Documents on how to use the software; this can be in the shape of a cookbook and tutorials, or module-level help
- Operations: Documents that provide guidelines on how to deploy, upgrade, or operate the software

Design

The purpose of design documentation is to describe how the software works and how the code is organized. It is used by developers to understand the system, but is also a good entry point for people who are trying to understand how the application works. The different kinds of design documents a piece of software can have are:

- Architecture overview
- Database models
- Class diagrams with dependencies and hierarchy relations
- User interface wireframes
- Infrastructure description

Mostly, these documents are composed of some diagrams and a minimum amount of text. The conventions used for the diagrams are very specific to the team and the project, and this is perfectly fine as long as they are consistent. UML provides thirteen diagrams that cover most aspects of a software design. The class diagram is probably the most used one, but it is possible to describe every aspect of software with it. See http://en.wikipedia.org/wiki/Unified_Modeling_Language#Diagrams.

Following a specific modeling language such as UML is not often fully done, and teams just make up their own way through their common experience. They pick up good practices from UML or other modeling languages, and create their own recipes. For instance, for architecture overview diagrams, some designers just draw boxes and arrows on a whiteboard without following any particular design rules and take a picture of it. Others work with simple drawing programs such as Dia (http://www.gnome.org/projects/dia) or Microsoft Visio (not open source, so not free), since it is enough to understand the design. Database model diagrams depend on the kind of database you are using.
There are complete data modeling software applications that provide drawing tools to automatically generate tables and their relations. But this is overkill in Python most of the time. If you are using an ORM such as SQLAlchemy (for instance), simple boxes with lists of fields, together with table relations, are enough to describe your mappings before you start to write them.

Class diagrams are often simplified UML class diagrams: there is no need in Python to specify the protected members of a class, for instance. So the tools used for an architectural overview diagram fit this need too. User interface diagrams depend on whether you are writing a web or a desktop application. Web applications often describe the center of the screen, since the header, footer, left, and right panels are common. Many web developers just handwrite those screens and capture them with a camera or a scanner. Others create prototypes in HTML and make screen snapshots. For desktop applications, snapshots of prototype screens, or annotated mock-ups made with tools such as Gimp or Photoshop, are the most common way. Infrastructure overview diagrams are like architecture diagrams, but they focus on how the software interacts with third-party elements, such as mail servers, databases, or any kind of data streams.

Common Template

The important point when creating such documents is to make sure the target readership is perfectly known, and the content scope is limited. So a generic template for design documents can provide a light structure with a little advice for the writer. Such a structure can include:

- Title
- Author
- Tags (keywords)
- Description (abstract)
- Target (Who should read this?)
- Content (with diagrams)
- References to other documents

The content should be three or four screens (a 1024x768 average screen) at the most, to be sure to limit the scope. If it gets bigger, it should be split into several documents or summarized. The template also provides the author's name and a list of tags to manage its evolution and ease its classification. This will be covered later in the article.

Paster is the right tool to use to provide templates for documentation. pbp.skels implements the design template described, and can be used exactly like code generation. A target folder is provided and a few questions are answered:

$ paster create -t pbp_design_doc design
Selected and implied templates:
  pbp.skels#pbp_design_doc  A Design document
Variables:
  egg:      design
  package:  design
  project:  design
Enter title ['Title']: Database specifications for atomisator.db
Enter short_name ['recipe']: mappers
Enter author (Author name) ['John Doe']: Tarek
Enter keywords ['tag1 tag2']: database mapping sql
Creating template pbp_design_doc
Creating directory ./design
Copying +short_name+.txt_tmpl to ./design/mappers.txt

The result can then be completed:

=========================================
Database specifications for atomisator.db
=========================================

:Author: Tarek

:Tags: database mapping sql

:abstract:

    Write here a small abstract about your design document.

.. contents ::

Who should read this ?
::::::::::::::::::::::

Explain here who is the target readership.

Content
:::::::

Write your document here. Do not hesitate to split it in several
sections.

References
::::::::::

Put here references, and links to other documents.

Usage

Usage documentation describes how a particular part of the software works. This documentation can describe low-level parts such as how a function works, but also high-level parts such as the command-line arguments for calling the program.
This is the most important part of the documentation in framework applications, since the target readership is mainly the developers who are going to reuse the code. The three main kinds of documents are:

- Recipe: A short document that explains how to do something. This kind of document targets one readership and focuses on one specific topic.
- Tutorial: A step-by-step document that explains how to use a feature of the software. This document can refer to recipes, and each instance is intended for one readership.
- Module helper: A low-level document that explains what a module contains. This document could be shown (for instance) when you call the help built-in over a module.

Recipe

A recipe answers a very specific problem and provides a solution to resolve it. For example, ActiveState provides a Python Cookbook online (a cookbook is a collection of recipes), where developers can describe how to do something in Python (http://aspn.activestate.com/ASPN/Python/Cookbook). These recipes must be short and are structured like this:

- Title
- Submitter
- Last updated
- Version
- Category
- Description
- Source (the source code)
- Discussion (the text explaining the code)
- Comments (from the web)

Often, they are one screen long and do not go into great detail. This structure perfectly fits a software's needs and can be adapted into a generic structure, where the target readership is added and the category is replaced by tags:

- Title (short sentence)
- Author
- Tags (keywords)
- Who should read this?
- Prerequisites (other documents to read, for example)
- Problem (a short description)
- Solution (the main text, one or two screens)
- References (links to other documents)

The date and version are not useful here, since we will see later that the documentation is managed like source code in the project. Like the design template, pbp.skels provides a pbp_recipe_doc template that can be used to generate this structure:

$ paster create -t pbp_recipe_doc recipes
Selected and implied templates:
  pbp.skels#pbp_recipe_doc  A recipe
Variables:
  egg:     recipes
  package: recipes
  project: recipes
Enter title (use a short question): How to use atomisator.db
Enter short_name ['recipe']: atomisator-db
Enter author (Author name) ['John Doe']: Tarek
Enter keywords ['tag1 tag2']: atomisator db
Creating template pbp_recipe_doc
Creating directory ./recipes
  Copying +short_name+.txt_tmpl to ./recipes/atomisator-db.txt

The result can then be completed by the writer:

========================
How to use atomisator.db
========================

:Author: Tarek

:Tags: atomisator db

.. contents ::

Who should read this ?
::::::::::::::::::::::

Explain here who is the target readership.

Prerequisites
:::::::::::::

Put here the prerequisites for people to follow this recipe.

Problem
:::::::

Explain here the problem resolved in a few sentences.

Solution
::::::::

Put here the solution.

References
::::::::::

Put here references, and links to other recipes.

Tutorial

A tutorial differs from a recipe in its purpose. It is not intended to resolve an isolated problem, but rather describes how to use a feature of the application step by step. This can be longer than a recipe and can concern many parts of the application. For example, Django provides a list of tutorials on its website. Writing your first Django App, part 1 (http://www.djangoproject.com/documentation/tutorial01) explains in ten screens how to build an application with Django. A structure for such a document can be:

- Title (short sentence)
- Author
- Tags (words)
- Description (abstract)
- Who should read this?
- Prerequisites (other documents to read, for example)
- Tutorial (the main text)
- References (links to other documents)

The pbp_tutorial_doc template is provided in pbp.skels as well, with this structure, which is similar to the design template.

Module Helper

The last template that can be added to our collection is the module helper template. A module helper refers to a single module and provides a description of its contents, together with usage examples. Some tools, like Epydoc (http://epydoc.sourceforge.net), can automatically build such documents by extracting the docstrings and computing module help using pydoc. So it is possible to generate extensive documentation based on API introspection. This kind of documentation is often provided in Python frameworks. For instance, Plone provides an http://api.plone.org server that keeps an up-to-date collection of module helpers. The main problems with this approach are:

- There is no smart selection performed over the modules that are really interesting to document.
- The code can be obfuscated by the documentation.

Furthermore, module documentation provides examples that sometimes refer to several parts of the module, and are hard to split between the functions' and classes' docstrings. The module docstring could be used for that purpose by writing a text at the top of the module. But this ends in having a hybrid file composed of a block of text, then a block of code. This is rather obfuscating when the code represents less than 50% of the total length. If you are the author, this is perfectly fine. But when people try to read the code (not the documentation), they will have to skip over the docstring part. Another approach is to separate the text into its own file. A manual selection can then be made to decide which Python module will have its module helper file. The documents can then be separated from the code base and allowed to live their own life, as we will see in the next part. This is how Python is documented. Many developers will disagree with the idea that doc and code separation is better than docstrings. This approach means that the documentation process is fully integrated in the development cycle; otherwise, it will quickly become obsolete. The docstrings approach solves this problem by providing proximity between the code and its usage example, but doesn't bring it to a higher level: a document that can be used as part of plain documentation. The template for Module Helper is really simple, as it contains just a little metadata before the content is written. The target is not defined, since it is the developers who wish to use the module:

- Title (module name)
- Author
- Tags (words)
- Content
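To make the docstring-versus-separate-file trade-off concrete, here is a small, entirely hypothetical module (the package and function names are invented for illustration) whose top-level docstring doubles as its module helper; pydoc or the help() built-in will render it as-is:

    """atomisator.sample -- hypothetical helpers used to illustrate module helpers.

    Usage::

        >>> from atomisator.sample import normalize
        >>> normalize('  Some Title  ')
        'some title'

    """


    def normalize(text):
        """Return `text` stripped of surrounding whitespace and lower-cased."""
        return text.strip().lower()

The alternative described above keeps that same explanatory text in a separate reStructuredText file next to the code, leaving only a short one-line docstring in the module itself.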
Tiered Application Architecture with Docker Compose, Part 3

Darwin Corn
08 Aug 2016
This is the third part in a series that introduces you to basic web application containerization and deployment principles. If you're new to the topic, I suggest reading Part 1 and Part 2 . In this post, I attempt to take the training wheels off, and focus on using Docker Compose. Speaking of training wheels, I rode my bike with training wheels until I was six or seven. So in the interest of full disclosure, I have to admit that to a certain degree I'm still riding the containerization wave with my training wheels on. That's not to say I’m not fully using container technology. Before transitioning to the cloud, I had a private registry running on a Git server that my build scripts pushed to and pulled from to automate deployments. Now, we deploy and maintain containers in much the same way as I've detailed in the first two Parts in this series, and I take advantage of the built-in registry covered in Part 2 of this series. Either way, our use case multi-tiered application architecture was just overkill. Adding to that, when we were still doing contract work, Docker was just getting 1.6 off the ground. Now that I'm working on a couple of projects where this will be a necessity, I'm thankful that Docker has expanded their offerings to include tools like Compose, Machine and Swarm. This post will provide a brief overview of a multi-tiered application setup with Docker Compose, so look for future posts to deal with the latter two. Of course, you can just hold out for a mature Kitematic and do it all in a GUI, but you probably won't be reading this post if that applies to you. All three of these Docker extensions are relatively new, and so the entirety of this post is subject to a huge disclaimer that even Docker hasn't fully developed these extensions to be production-ready for large or intricate deployments. If you're looking to do that, you're best off holding out for my post on alternative deployment options like CoreOS and Kubernetes. But that's beyond the scope of what we're looking at here, so let's get started. First, you need to install the binary. Since this is part 3, I'm going to assume that you have the Docker Engine already installed somewhere. If you're on Mac or Windows, the Docker Toolbox you used to install it also contained an option to install Compose. I'm going to assume your daily driver is a Linux box, so these instructions are for Linux. Fortunately, the installation should just be a couple of commands--curling it from the web and making it executable: # curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose # chmod +x /usr/local/bin/docker-compose # docker-compose -v That last command should output version info if you've installed it correctly. For some reason, the linked installation doc thinks you can run that chmod as a regular user. I'm not sure of any distro that lets regular users write to /usr/local/bin, so I ran them both as root. Docker has its own security issues that are beyond the scope of this series, but I suggest reading about them if you're using this in production. My lazy way around it is to run every Docker-related command elevated, and I'm sure someone will let me have it for that in the comments. Seems like a better policy than making /usr/local/bin writeable by anyone other than root. Now that you have Compose installed, let's look at how to use it to coordinate and deploy a layered application. 
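To give a rough idea of what we are heading toward before the Taiga walkthrough below, here is a stripped-down, hypothetical docker-compose.yml for a three-tier (database, backend, frontend) stack, written in the version 1 syntax that Compose 1.6 still accepts. The image names, build directories, and variables are placeholders for illustration, not the contents of the actual docker-taiga repo discussed next:

    db:
      image: postgres:9.5
      env_file: .env

    backend:
      build: ./backend        # Dockerfile for the application/websocket layer
      env_file: .env
      links:
        - db                  # db's exposed ports are reachable here without publishing them to the host

    frontend:
      build: ./frontend
      links:
        - backend
      ports:
        - "8080:80"           # only the frontend is published to the host machine

The links give Compose an implicit creation order (db, then backend, then frontend) and keep inter-container traffic off the host's published ports, which is exactly the pattern described below.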
I'm abandoning my sample music player of the previous two posts in favor of something that's already separated its functionality, namely the Taiga project. If you're not familiar, it's a slick flat JIRA-killer, and the best part is that it's open source with a thorough installation guide. I've done the heavy lifting, so all you have to do is clone the docker-taiga repo into wherever you keep your source code and get to Composin'. $ git clone https://github.com/ndarwincorn/docker-taiga.git $ cd docker-taiga You'll notice a few things. In the root of the app, there's an .envfile where you can set all the environmental variables in one place. Next, there are two folders with taiga- prefixes. They correspond to the layers of the application, from the Angular frontend to the websocket and backend Django server. Each contains a Dockerfile for building the container, as well as relevant configuration files. There's also a docker-entrypoint-initdb.d folder that contains a shell script that creates the Taiga database when the postgres container is built. Having covered container creation in part 1, I'm more concerned with the YAML file in the root of the application, docker-compose.yml. This file coordinates the container/image creation for the application, and full reference can be found on Docker's website. Long story short, the compose YAML file gives the containers a creation order (databases, backend/websocket, frontend) and links them together, so that ports exposed in each container don't need to be published to the host machine. So, from the root of the application, let's run a # docker-compose up and see what happens. Provided there are no errors, you should be able to navigate to localhost:8080 and see your new Taiga deployment! You should be able to log in with the admin user and password 123123. Of course, there's much more to do--configure automated e-mails, link it to your Github organization, configure TLS. I'll leave that as an exercise for you. For now, enjoy your brand-new layered project management application. Of course, if you're deploying such an application for an organization, you don't want all your eggs in one basket. The next two parts in the series will deal with leveraging Docker tools and alternatives to deploy the application in a clustered, high-availability setup. About the Author Darwin Corn is a systems analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the information technology world.
Validating and Using the Model Data

Packt
07 May 2013
(For more resources related to this topic, see here.) Declarative validation It's easy to set up declarative validation for an entity object to validate the data that is passed through the metadata file. Declarative validation is the validation added for an attribute or an entity object to fulfill a particular business validation. It is called declarative validation because we don't write any code to achieve the validation as all the business validations are achieved declaratively. The entity object holds the business rules that are defined to fulfill specific business needs such as a range check for an attribute value or to check if the attribute value provided by the user is a valid value from the list defined. The rules are incorporated to maintain a standard way to validate the data. Knowing the lifecycle of an entity object It is important to know the lifecycle of an entity object before knowing the validation that is applied to an entity object. The following diagram depicts the lifecycle of an entity: When a new row is created using an entity object, the status of the entity is set to NEW. When an entity is initialized with some values, the status is changed from NEW to INITIALIZED. At this time, the entity is marked invalid or dirty; this means that the state of the entity is changed from the value that was previously checked with the database value. The status of an entity is changed to UNMODIFIED, and the entity is marked valid after applying validation rules and committing to the database. When the value of an unmodified entity is changed, the status is changed to MODIFIED and the entity is marked dirty again. The modified entity again goes to an UNMODIFIED state when it is saved to the database. When an entity is removed from the database, the status is changed to DELETED. When the value is committed, the status changes to DEAD. Types of validation Validation rules are applied to an entity to make sure that only valid values are committed to the database and to prevent any invalid data from getting saved to the database. In ADF, we use validation rules for the entity object to make sure the row is valid all the time. There are three types of validation rules that can be set for the entity objects; they are as follows: Entity-level validation Attribute-level validation Transaction-level validation Entity-level validation As we know, an entity represents a row in the database table. Entity-level validation is the business rule that is added to the database row. For example, the validation rule that has to be applied to a row is termed as entity-level validation. There are two unique declarative validators that will be available only for entity-level validation—Collection and UniqueKey. The following diagram explains that entity-level validations are applied on a single row in the EMP table. The validated row is highlighted in bold. Attribute-level validation Attribute-level validations are applied to attributes. Business logic mostly involves specific validations to compare different attribute values or to restrict the attributes to a specific range. These kinds of validations are done in attribute-level validation. Some of the declarative validators available in ADF are Compare, Length, and Range. The Precision and Mandatory attribute validations are added, by default, to the attributes from the column definition in the underlying database table. We can only set the display message for the validation. 
The following diagram explains that the validation is happening on the attributes in the second row: There can be any number of validations defined on a single attribute or on multiple attributes in an entity. In the diagram, Empno has a validation that is different from the validation defined for Ename. Validation for the Job attribute is different from that for the Sal attribute. Similarly, we can define validations for attributes in the entity object. Transaction-level validation Transaction-level validations are done after all entity-level validations are completed. If you want to add any kind of validation at the end of the process, you can defer the validation to the transaction level to ensure that the validation is performed only once. Built-in declarative validators ADF Business Components includes some built-in validators to support and apply validations for entity objects. The following screenshot explains how a declarative validation will show up in the Overview tab: The Business Rules section for the EmpEO.xml file will list all the validations for the EmpEO entity. In the previous screenshot, we will see that the there are no entity-level validators defined and some of the attribute-level validations are listed in the Attributes folder. Collection validator A Collection validator is available only for entity-level validation. To perform operations such as average, min, max, count, and sum for the collection of rows, we use the collection validator. Collection validators are compared to the GROUP BY operation in an SQL query with a validation. The aggregate functions, such as count, sum, min, and max are added to validate the entity row. The validator is operated against the literal value, expression, query result, and so on. You must have the association accessor to add a collection validation. Time for action – adding a collection validator for the DeptEO file Now, we will add a Collection validator to DeptEO.xml for adding a count validation rule. Imagine a business rule that says that the number of employees added to department number 10 should be more than five. In this case, you will have a count operation for the employees added to department number 10 and show a message if the count is less than 5 for a particular department. We will break this action into the following three parts: Adding a declarative validation: In this case, the number of employees added to the department should be greater than five Specifying the execution rule: In our case, the execution of this validation should be fired only for department number 10 Displaying the error message: We have to show an error message to the user stating that the number of employees added to the department is less than five Adding the validation Following are the steps to add the validation: Go to the Business Rules section of DeptEO.xml. You will find the Business Rules section in the Overview tab. Select Entity Validators and click on the + button. You may right-click on the Entity Validators folder and then select New Validator to add a validator. Select Collection as Rule Type and move on to the Rule Definition tab. In this section, select Count for the Operation field; Accessor is the association accessor that gets added through a composition association relationship. Only the composition association accessor will be listed in the Accessor drop-down menu. Select the accessor for EmpEO listed in the dropdown, with Empno as the value for Attribute. 
In order to create a composition association accessor, you will have to create an association between DeptEO.xml and EmpEO.xml based on the Deptno attribute with cardinality of 1 to *. The Composition Association option has to be selected to enable a composition relationship between the two entities. The value of the Operator option should be selected as Greater Than. Compare with will be a literal value, which is 5 that can be entered in the Enter Literal Value section below. Specifying the execution rule Following are the steps to specify the execution: Now to set the execution rule, we will move to the Validation Execution tab. In the Conditional Execution section, add Deptno = '10' as the value for Conditional Execution Expression. In the Triggering Attribute section, select the Execute only if one of the Selected Attributes has been changed checkbox. Move the Empno attribute to the Selected Attributes list. This will make sure that the validation is fired only if the Empno attribute is changed: Displaying the error message Following are the steps to display the error message: Go to the Failure Handling section and select the Error option for Validation Failure Severity. In the Failure Message section, enter the following text: Please enter more than 5 Employees You can add the message stored in a resource bundle to Failure Message by clicking on the magnifying glass icon. What just happened? We have added a collection validation for our EmpEO.xml object. Every time a new employee is added to the department, the validation rule fires as we have selected Empno as our triggering attribute. The rule is also validated against the condition that we have provided to check if the department number is 10. If the department number is 10, the count for that department is calculated. When the user is ready to commit the data to the database, the rule is validated to check if the count is greater than 5. If the number of employees added is less than 5, the error message is displayed to the user. When we add a collection validator, the EmpEO.xml file gets updated with appropriate entries. The following entries get added for the aforementioned validation in the EmpEO.xml file: <validation:CollectionValidationBean Name="EmpEO_Rule_0" ResId= "com.empdirectory.model.entity.EmpEO_Rule_0" OnAttribute="Empno" OperandType="LITERAL" Inverse="false" CompareType="GREATERTHAN" CompareValue="5" Operation="count"> <validation:OnCondition> <![CDATA[Deptno = '10']]> </validation:OnCondition> </validation:CollectionValidationBean> <ResourceBundle> <PropertiesBundle PropertiesFile= "com.empdirectory.model.ModelBundle"/> </ResourceBundle> The error message that is added in the Failure Handling section is automatically added to the resource bundle. The Compare validator The Compare validator is used to compare the current attribute value with other values. The attribute value can be compared against the literal value, query result, expression, view object attribute, and so on. The operators supported are equal, not-equal, less-than, greater-than, less-than or equal to, and greater-than or equal to. The Key Exists validator This validator is used to check if the key value exists for an entity object. The key value can be a primary key, foreign key, or an alternate key. The Key Exists validator is used to find the key from the entity cache, and if the key is not found, the value is determined from the database. Because of this reason, the Key Exists validator is considered to give better performance. 
For example, when an employee is assigned to a department deptNo 50 and you want to make sure that deptNo 50 already exists in the DEPT table. The Length validator This validator is used to check the string length of an attribute value. The comparison is based on the character or byte length. The List validator This validator is used to create a validation for the attribute in a list. The operators included in this validation are In and NotIn. These two operators help the validation rule check if an attribute value is in a list. The Method validator Sometimes, we would like to add our own validation with some extra logic coded in our Java class file. For this purpose, ADF provides a declarative validator to map the validation rule against a method in the entity-implementation class. The implementation class is generated in the Java section of the entity object. We need to create and select a method to handle method validation. The method is named as validateXXX(), and the returned value will be of the Boolean type. The Range validator This validator is used to add a rule to validate a range for the attribute value. The operators included are Between and NotBetween. The range will have a minimum and maximum value that can be entered for the attribute. The Regular Expression validator For example, let us consider that we have a validation rule to check if the e-mail ID provided by the user is in the correct format. For the e-mail validation, we have some common rules such as the following: The e-mail ID should start with a string and end with the @ character The e-mail ID's last character cannot be the dot (.) character Two @ characters are not allowed within an e-mail ID For this purpose, ADF provides a declarative Regular Expression validator. We can use the regex pattern to check the value of the attribute. The e-mail address and the US phone number pattern is provided by default: Email: [A-Z0-9._%+-]+@[A-Z0-,9.-]+.[A-Z]{2,4} Phone Number (US): [0-9]{3}-?[0-9]{3}-?[0-9]{4} You should select the required pattern and then click on the Use Pattern button to use it. Matches and NotMatches are the two operators that are included with this validator. The Script validator If we want to include an expression and validate the business rule, the Script validator is the best choice. ADF supports Groovy expressions to provide Script validation for an attribute. The UniqueKey validator This validator is available for use only for entity-level validation. To check for uniqueness in the record, we would be using this validator. If we have a primary key defined for the entity object, the Uniqueness Check Definition section will list the primary keys defined to check for uniqueness, as shown in the following screenshot: If we have to perform a uniqueness check against any attribute other than the primary key attributes, we will have to create an alternate key for the entity object. Time for action – creating an alternate key for DeptEO Currently, the DeptEO.xml file has Deptno as the primary key. We would add business validation that states that there should not be a way to create a duplicate of the department name that is already available. The following steps show how to create an alternate key: Go to the General section of the DeptEO.xml file and expand the Alternate Keys section. Alternate keys are keys that are not part of the primary key. Click on the little + icon to add a new alternate key. Move the Dname attribute from the Available list to the Selected list and click on the OK button. 
What just happened? We have created an alternate key against the Dname attribute to prepare for a unique check validation for the department name. When the alternate key is added to an entity object, we will see the AltKey attribute listed in the Alternate Key section of the General tab. In the DeptEO.xml file, you will find the following code that gets added for the alternate key definition: <Key Name="AltKey" AltKey="true"> <DesignTime> <Attr Name="_isUnique" Value="true"/> <Attr Name="_DBObjectName" Value="HR.DEPT"/> </DesignTime> <AttrArray Name="Attributes"> <Item Value= "com.empdirectory.model.entity.DeptEO.Dname"/> </AttrArray> </Key> Have a go hero – compare the attributes For the first time, we have learned about the validations in ADF. So it's time for you to create your own validation for the EmpEO and DeptEO entity objects. Add validations for the following business scenarios: Continue with the creation of the uniqueness check for the department name in the DeptEO.xml file. The salary of the employees should not be greater than 1000. Display the following message if otherwise: Please enter Salary less than 1000. Display the message invalid date if the employee's hire date is after 10-10-2001. The length of the characters entered for Dname of DeptEO.xml should not be greater than 10. The location of a department can only be NEWYORK, CALIFORNIA, or CHICAGO. The department name should always be entered in uppercase. If the user enters a value in lowercase, display a message. The salary of an employee with the MANAGER job role should be between 800 and 1000. Display an error message if the value is not in this range. The employee name should always start with an uppercase letter and should end with any character other than special characters such as :, ;, and _. After creating all the validations, check the code and tags generated in the entity's XML file for each of the aforementioned validations.
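As a complement to the declarative rules above, here is a rough, hypothetical sketch of what a Method validator could look like in the generated entity implementation class, for example to back the salary rule in the exercises. The class, method, and attribute names follow the EmpEO example used in this article but are illustrative only; the exact method is whatever you register in the validator wizard, and attribute-level method validators receive the candidate value and return true when it is valid:

    // Hypothetical method validator inside the generated EmpEOImpl entity implementation class.
    public boolean validateSalaryBelowLimit(oracle.jbo.domain.Number salary) {
        if (salary == null) {
            return false;   // reject empty values
        }
        // Business rule from the exercise: salary must not be greater than 1000
        return salary.doubleValue() <= 1000d;
    }

If validation fails, the failure message configured for the Method validator in the Failure Handling section is shown to the user, just as with the built-in declarative validators.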
Working with WebStart and the Browser Plugin

Packt
06 Feb 2015
 In this article by Alex Kasko, Stanislav Kobyl yanskiy, and Alexey Mironchenko, authors of the book OpenJDK Cookbook, we will cover the following topics: Building the IcedTea browser plugin on Linux Using the IcedTea Java WebStart implementation on Linux Preparing the IcedTea Java WebStart implementation for Mac OS X Preparing the IcedTea Java WebStart implementation for Windows Introduction For a long time, for end users, the Java applets technology was the face of the whole Java world. For a lot of non-developers, the word Java itself is a synonym for the Java browser plugin that allows running Java applets inside web browsers. The Java WebStart technology is similar to the Java browser plugin but runs remotely on loaded Java applications as separate applications outside of web browsers. The OpenJDK open source project does not contain the implementations for the browser plugin nor for the WebStart technologies. The Oracle Java distribution, otherwise matching closely to OpenJDK codebases, provided its own closed source implementation for these technologies. The IcedTea-Web project contains free and open source implementations of the browser plugin and WebStart technologies. The IcedTea-Web browser plugin supports only GNU/Linux operating systems and the WebStart implementation is cross-platform. While the IcedTea implementation of WebStart is well-tested and production-ready, it has numerous incompatibilities with the Oracle WebStart implementation. These differences can be seen as corner cases; some of them are: Different behavior when parsing not well-formed JNLP descriptor files: The Oracle implementation is generally more lenient for malformed descriptors. Differences in JAR (re)downloading and caching behavior: The Oracle implementation uses caching more aggressively. Differences in sound support: This is due to differences in sound support between Oracle Java and IcedTea on Linux. Linux historically has multiple different sound providers (ALSA, PulseAudio, and so on) and IcedTea has more wide support for different providers, which can lead to sound misconfiguration. The IcedTea-Web browser plugin (as it is built on WebStart) has these incompatibilities too. On top of them, it can have more incompatibilities in relation to browser integration. User interface forms and general browser-related operations such as access from/to JavaScript code should work fine with both implementations. But historically, the browser plugin was widely used for security-critical applications like online bank clients. Such applications usually require security facilities from browsers, such as access to certificate stores or hardware crypto-devices that can differ from browser to browser, depending on the OS (for example, supports only Windows), browser version, Java version, and so on. Because of that, many real-world applications can have problems running the IcedTea-Web browser plugin on Linux. Both WebStart and the browser plugin are built on the idea of downloading (possibly untrusted) code from remote locations, and proper privilege checking and sandboxed execution of that code is a notoriously complex task. Usually reported security issues in the Oracle browser plugin (most widely known are issues during the year 2012) are also fixed separately in IcedTea-Web. Building the IcedTea browser plugin on Linux The IcedTea-Web project is not inherently cross-platform; it is developed on Linux and for Linux, and so it can be built quite easily on popular Linux distributions. 
The two main parts of it (stored in corresponding directories in the source code repository) are netx and plugin. NetX is a pure Java implementation of the WebStart technology. We will look at it more thoroughly in the following recipes of this article. Plugin is an implementation of the browser plugin using the NPAPI plugin architecture that is supported by multiple browsers. Plugin is written partly in Java and partly in native code (C++), and it officially supports only Linux-based operating systems. There exists an opinion about NPAPI that this architecture is dated, overcomplicated, and insecure, and that modern web browsers have enough built-in capabilities to not require external plugins. And browsers have gradually reduced support for NPAPI. Despite that, at the time of writing this book, the IcedTea-Web browser plugin worked on all major Linux browsers (Firefox and derivatives, Chromium and derivatives, and Konqueror). We will build the IcedTea-Web browser plugin from sources using Ubuntu 12.04 LTS amd64. Getting ready For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed. How to do it... The following procedure will help you to build the IcedTea-Web browser plugin: Install prepackaged binaries of OpenJDK 7: sudo apt-get install openjdk-7-jdk Install the GCC toolchain and build dependencies: sudo apt-get build-dep openjdk-7 Install the specific dependency for the browser plugin: sudo apt-get install firefox-dev Download and decompress the IcedTea-Web source code tarball: wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz tar xzvf icedtea-web-1.4.2.tar.gz Run the configure script to set up the build environment: ./configure Run the build process: make Install the newly built plugin into the /usr/local directory: sudo make install Configure the Firefox web browser to use the newly built plugin library: mkdir ~/.mozilla/plugins cd ~/.mozilla/plugins ln -s /usr/local/IcedTeaPlugin.so libjavaplugin.so Check whether the IcedTea-Web plugin has appeared under Tools | Add-ons | Plugins. Open the http://java.com/en/download/installed.jsp web page to verify that the browser plugin works. How it works... The IcedTea browser plugin requires the IcedTea Java implementation to be compiled successfully. The prepackaged OpenJDK 7 binaries in Ubuntu 12.04 are based on IcedTea, so we installed them first. The plugin uses the GNU Autconf build system that is common between free software tools. The xulrunner-dev package is required to access the NPAPI headers. The built plugin may be installed into Firefox for the current user only without requiring administrator privileges. For that, we created a symbolic link to our plugin in the place where Firefox expects to find the libjavaplugin.so plugin library. There's more... The plugin can also be installed into other browsers with NPAPI support, but installation instructions can be different for different browsers and different Linux distributions. As the NPAPI architecture does not depend on the operating system, in theory, a plugin can be built for non-Linux operating systems. But currently, no such ports are planned. Using the IcedTea Java WebStart implementation on Linux On the Java platform, the JVM needs to perform the class load process for each class it wants to use. This process is opaque for the JVM and actual bytecode for loaded classes may come from one of many sources. 
For example, this method allows the Java Applet classes to be loaded from a remote server to the Java process inside the web browser. Remote class loading also may be used to run remotely loaded Java applications in standalone mode without integration with the web browser. This technique is called Java WebStart and was developed under Java Specification Request (JSR) number 56. To run the Java application remotely, WebStart requires an application descriptor file that should be written using the Java Network Launching Protocol (JNLP) syntax. This file is used to define the remote server to load the application form along with some metainformation. The WebStart application may be launched from the web page by clicking on the JNLP link, or without the web browser using the JNLP file obtained beforehand. In either case, running the application is completely separate from the web browser, but uses a sandboxed security model similar to Java Applets. The OpenJDK project does not contain the WebStart implementation; the Oracle Java distribution provides its own closed-source WebStart implementation. The open source WebStart implementation exists as part of the IcedTea-Web project. It was initially based on the NETwork eXecute (NetX) project. Contrary to the Applet technology, WebStart does not require any web browser integration. This allowed developers to implement the NetX module using pure Java without native code. For integration with Linux-based operating systems, IcedTea-Web implements the javaws command as shell script that launches the netx.jar file with proper arguments. In this recipe, we will build the NetX module from the official IcedTea-Web source tarball. Getting ready For this recipe, we will need a clean Ubuntu 12.04 running with the Firefox web browser installed. How to do it... The following procedure will help you to build a NetX module: Install prepackaged binaries of OpenJDK 7: sudo apt-get install openjdk-7-jdk Install the GCC toolchain and build dependencies: sudo apt-get build-dep openjdk-7 Download and decompress the IcedTea-Web source code tarball: wget http://icedtea.wildebeest.org/download/source/icedtea-web-1.4.2.tar.gz tar xzvf icedtea-web-1.4.2.tar.gz Run the configure script to set up a build environment excluding the browser plugin from the build: ./configure –disable-plugin Run the build process: make Install the newly-built plugin into the /usr/local directory: sudo make install Run the WebStart application example from the Java tutorial: javaws http://docs.oracle.com/javase/tutorialJWS/samples/ deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp How it works... The javaws shell script is installed into the /usr/local/* directory. When launched with a path or a link to the JNLP file, javaws launches the netx.jar file, adding it to the boot classpath (for security reasons) and providing the JNLP link as an argument. Preparing the IcedTea Java WebStart implementation for Mac OS X The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Mac OS X. IcedTea-Web provides the javaws launcher implementation only for Linux-based operating systems. In this recipe, we will create a simple implementation of the WebStart launcher script for Mac OS X. Getting ready For this recipe, we will need Mac OS X Lion with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe. 
How to do it... The following procedure will help you to run WebStart applications on Mac OS X: Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp. Test that this application can be run from the terminal using netx.jar: java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp Create the wslauncher.sh bash script with the following contents: #!/bin/bash if [ "x$JAVA_HOME" = "x" ] ; then JAVA="$( which java 2>/dev/null )" else JAVA="$JAVA_HOME"/bin/java fi if [ "x$JAVA" = "x" ] ; then echo "Java executable not found" exit 1 fi if [ "x$1" = "x" ] ; then echo "Please provide JNLP file as first argument" exit 1 fi $JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1 Mark the launcher script as executable: chmod 755 wslauncher.sh Run the application using the launcher script: ./wslauncher.sh dynamictree_webstart.jnlp How it works... The next.jar file contains a Java application that can read JNLP files and download and run classes described in JNLP. But for security reasons, next.jar cannot be launched directly as an application (using the java -jar netx.jar syntax). Instead, netx.jar is added to the privileged boot classpath and is run specifying the main class directly. This allows us to download applications in sandbox mode. The wslauncher.sh script tries to find the Java executable file using the PATH and JAVA_HOME environment variables and then launches specified JNLP through netx.jar. There's more... The wslauncher.sh script provides a basic solution to run WebStart applications from the terminal. To integrate netx.jar into your operating system environment properly (to be able to launch WebStart apps using JNLP links from the web browser), a native launcher or custom platform scripting solution may be used. Such solutions lay down the scope of this book. Preparing the IcedTea Java WebStart implementation for Windows The NetX WebStart implementation from the IcedTea-Web project is written in pure Java, so it can also be used on Windows; we also used it on Linux and Mac OS X in previous recipes in this article. In this recipe, we will create a simple implementation of the WebStart launcher script for Windows. Getting ready For this recipe, we will need a version of Windows running with Java 7 (the prebuilt OpenJDK or Oracle one) installed. We will also need the netx.jar module from the IcedTea-Web project, which can be built using instructions from the previous recipe in this article. How to do it... The following procedure will help you to run WebStart applications on Windows: Download the JNLP descriptor example from the Java tutorials at http://docs.oracle.com/javase/tutorialJWS/samples/deployment/dynamictree_webstartJWSProject/dynamictree_webstart.jnlp. 
Test that this application can be run from the terminal using netx.jar:

java -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot dynamictree_webstart.jnlp

Create the wslauncher.sh bash script with the following contents:

#!/bin/bash
if [ "x$JAVA_HOME" = "x" ] ; then
    JAVA="$( which java 2>/dev/null )"
else
    JAVA="$JAVA_HOME"/bin/java
fi
if [ "x$JAVA" = "x" ] ; then
    echo "Java executable not found"
    exit 1
fi
if [ "x$1" = "x" ] ; then
    echo "Please provide JNLP file as first argument"
    exit 1
fi
$JAVA -Xbootclasspath/a:netx.jar net.sourceforge.jnlp.runtime.Boot $1

Mark the launcher script as executable:

chmod 755 wslauncher.sh

Run the application using the launcher script:

./wslauncher.sh dynamictree_webstart.jnlp

How it works... The netx.jar module must be added to the boot classpath, as it cannot be run directly for security reasons. The wslauncher.bat script tries to find the Java executable using the JAVA_HOME environment variable and then launches the specified JNLP through netx.jar. There's more... The wslauncher.bat script may be registered as the default application for JNLP files. This will allow you to run WebStart applications from the web browser. But the current script will show the batch window for a short period of time before launching the application. It also does not support looking for Java executables in the Windows Registry. A more advanced script without those problems may be written using Visual Basic script (or any other native scripting solution) or as a native executable launcher. Such solutions lie beyond the scope of this book.
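Since the How it works... section refers to a wslauncher.bat launcher, here is a minimal, untested sketch of what a native batch counterpart of the bash script above might look like. It assumes netx.jar sits next to the script and that java is either on PATH or under JAVA_HOME:

    @echo off
    rem Minimal WebStart launcher sketch: wslauncher.bat <path-or-URL-to-JNLP>
    setlocal
    set JAVA=java
    if not "%JAVA_HOME%"=="" set "JAVA=%JAVA_HOME%\bin\java"
    if "%~1"=="" (
      echo Please provide JNLP file as first argument
      exit /b 1
    )
    "%JAVA%" -Xbootclasspath/a:%~dp0netx.jar net.sourceforge.jnlp.runtime.Boot %1
    endlocal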
SciPy for Computational Geometry

Packt
11 Apr 2013
(For more resources related to this topic, see here.)

>>> data = scipy.stats.randint.rvs(0.4,10,size=(10,2))
>>> triangulation = scipy.spatial.Delaunay(data)

Any Delaunay class has the basic search attributes such as points (to obtain the set of points in the triangulation), vertices (which offers the indices of vertices forming simplices in the triangulation), and neighbors (for the indices of neighbor simplices for each simplex, with the convention that "-1" indicates no neighbor for simplices at the boundary). More advanced attributes, for example convex_hull, indicate the indices of the vertices that form the convex hull of the given points. If we desire to search for the simplices that share a given vertex, we may do so with the vertex_to_simplex method. If, instead, we desire to locate the simplices that contain any given point in the space, we do so with the find_simplex method.

At this stage we would like to point out the intimate relationship between triangulations and Voronoi diagrams, and offer a simple coding exercise. Let us start by first choosing a random set of points, and obtaining the corresponding triangulation.

>>> locations=scipy.stats.randint.rvs(0,511,size=(2,8))
>>> triangulation=scipy.spatial.Delaunay(locations.T)

We may use the matplotlib.pyplot routine triplot to obtain a graphical representation of this triangulation. We first need to obtain the set of computed simplices. Delaunay offers us this set, but by means of the indices of the vertices instead of their coordinates. We thus need to map these indices to actual points before feeding the set of simplices to the triplot routine:

>>> assign_vertex = lambda index: triangulation.points[index]
>>> triangle_set = map(assign_vertex, triangulation.vertices)
>>> matplotlib.pyplot.triplot(locations[1], locations[0],
...                           triangles=triangle_set, color='r')

We will now obtain the edge map of the Voronoi diagram in a similar fashion as we did before, and plot it below the triangulation (since the former needs to be plotted with either a pcolormesh or imshow command). Note how the triangulation and the corresponding Voronoi diagram are dual to each other; each edge in the triangulation (red) is perpendicular to an edge in the Voronoi diagram (white). How should we use this observation to code an actual Voronoi diagram for a cloud of points? The actual Voronoi diagram is the set of vertices and edges that composes it, rather than a binary image containing an approximation to the edges as we have computed. Let us finish this article with two applications to scientific computing that use these techniques extensively, in combination with routines from other SciPy modules.

Structural model of oxides

In this example we will cover the extraction of the structural model of a molecule of a bronze-type Niobium oxide from HAADF-STEM micrographs. The following diagram shows an HAADF-STEM micrograph of a bronze-type Niobium oxide (taken from http://www.microscopy.ethz.ch/BFDF-STEM.htm, courtesy of ETH Zurich).

For pedagogical purposes, we took the following approach to solving this problem:

- Segmentation of the atoms by thresholding and morphological operations.
- Connected component labeling to extract each single atom for posterior examination.
- Computation of the centers of mass of each label identified as an atom. This presents us with a lattice of points in the plane that gives a first insight into the structural model of the oxide.
- Computation of the Voronoi diagram of the previous lattice of points.
The combination of information with the output of the previous step will lead us to a decent (approximation of the actual) structural model of our sample. Let us proceed in this direction. Once retrieved, our HAADF-STEM images will be stored as big matrices with float32 precision. For this project, it is enough to retrieve some tools from the scipy.ndimage module, and some procedures from the matplotlib library. The preamble then looks like the following code: import numpyimport scipyfrom scipy.ndimage import *from scipy.misc import imfilterimport matplotlib.pyplot as plt The image is loaded with the imread(filename) command. This stores the image as a numpy.array with dtype = float32. Notice that the maxima and minima are 1.0 and 0.0, respectively. Other interesting information about the image can be retrieved: img=imread('/Users/blanco/Desktop/NbW-STEM.png')print "Image dtype: %s"%(img.dtype)print "Image size: %6d"%(img.size)print "Image shape: %3dx%3d"%(img.shape[0],img.shape[1])print "Max value %1.2f at pixel %6d"%(img.max(),img.argmax())print "Min value %1.2f at pixel %6d"%(img.min(),img.argmin())print "Variance: %1.5fnStandard deviation:%1.5f"%(img.var(),img.std()) This outputs the following information: Image dtype: float32Image size: 87025Image shape: 295x295Max value 1.00 at pixel 75440Min value 0.00 at pixel 5703Variance: 0.02580Standard deviation: 0.16062 We perform thresholding by imposing an inequality in the array holding the data. The output is a Boolean array where True (white) indicates that the inequality is fulfilled, and False (black) otherwise. We may perform at this point several thresholding operations and visualize them to obtain the best threshold for segmentation purposes. The following images show several examples (different thresholdings applied to the oxide image): By visual inspection of several different thresholds, we choose 0.62 as one that gives us a good map showing what we need for segmentation. We need to get rid of "outliers", though; small particles that might fulfill the given threshold but are small enough not to be considered as an actual atom. Therefore, in the next step we perform a morphological operation of opening to get rid of those small particles. We decided that anything smaller than a square of size 2 x 2 is to be eliminated from the output of thresholding: BWatoms = (img> 0.62)BWatoms = binary_opening(BWatoms,structure=numpy.ones((2,2))) We are ready for segmentation, which will be performed with the label routine from the scipy.ndimage module. It collects one slice per segmented atom, and offers the number of slices computed. We need to indicate the connectivity type. For example, in the following toy example, do we want to consider that situation as two atoms or one atom? It depends; we would rather have it now as two different connected components, but for some other applications we might consider that they are one. The way we indicate the connectivity to the label routine is by means of a structuring element that defines feature connections. For example, if our criterion for connectivity between two pixels is that they are in adjacent edges, and then the structuring element looks like the image shown on the left-hand side from the images shown next. If our criterion for connectivity between two pixels is that they are also allowed to share a corner, then the structuring element looks like the image on the right-hand side. 
For each pixel we impose the chosen structuring element and count the intersections; if there are no intersections, then the two pixels are not connected. Otherwise, they belong to the same connected component. We need to make sure that atoms that are too close in a diagonal direction are counted as two, rather than one, so we chose the structuring element on the left. The script then reads as follows:

structuring_element = [[0,1,0],[1,1,1],[0,1,0]]
segmentation,segments = label(BWatoms,structuring_element)

The segmentation object contains a list of slices, each of them with a Boolean matrix containing each of the found atoms of the oxide. We may obtain a great deal of useful information for each slice. For example, the coordinates of the centers of mass of each atom can be retrieved with the following commands:

coords = center_of_mass(img, segmentation, range(1,segments+1))
xcoords = numpy.array([x[1] for x in coords])
ycoords = numpy.array([x[0] for x in coords])

Note that, because of the way matrices are stored in memory, there is a transposition of the x and y coordinates of the locations of the pixels. We need to take it into account. Notice the overlap of the computed lattice of points over the original image (the left-hand side image from the two images shown next). We may obtain it with the following commands:

>>> plt.imshow(img); plt.gray(); plt.axis('off')
>>> plt.plot(xcoords,ycoords,'b.')

We have successfully found the centers of mass for most atoms, although there are still about a dozen regions where we are not too satisfied with the result. It is time to fine-tune by the simple method of changing the values of some variables: play with the threshold, with the structuring element, with different morphological operations, and so on. We can even add all the obtained information for a wide range of those variables, and filter out outliers. An example with optimized segmentation is shown as follows (look at the right-hand side image). For the purposes of this exposition, we are happy to keep it simple and continue working with the set of coordinates that we have already computed. We will now offer an approximation to the lattice of the oxide, computed as the edge map of the Voronoi diagram of the lattice.

L1,L2 = distance_transform_edt(segmentation==0,
                               return_distances=False,
                               return_indices=True)
Voronoi = segmentation[L1,L2]
Voronoi_edges = imfilter(Voronoi,'find_edges')
Voronoi_edges = (Voronoi_edges>0)

Let us overlay the result of Voronoi_edges with the locations of the found atoms:

>>> plt.imshow(Voronoi_edges); plt.axis('off'); plt.gray()
>>> plt.plot(xcoords,ycoords,'r.',markersize=2.0)

This gives the following output, which represents the structural model we were searching for:
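As a closing note on the question raised earlier about computing an actual Voronoi diagram (vertices and edges rather than an edge-map image): later SciPy releases (0.12 and newer) expose Qhull's Voronoi computation directly in scipy.spatial. A short sketch, assuming such a version is installed; the random points stand in for the atom centers computed above:

    import numpy as np
    from scipy.spatial import Voronoi

    # Stand-in for the atom centers, e.g. points = np.column_stack((xcoords, ycoords))
    points = np.random.randint(0, 295, size=(40, 2))

    vor = Voronoi(points)
    print(vor.vertices)        # coordinates of the Voronoi vertices
    print(vor.ridge_vertices)  # index pairs describing each edge; -1 marks an edge going to infinity

    # scipy.spatial.voronoi_plot_2d(vor) gives a quick visual check of the finite part of the diagram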
Working with a Liferay User / User Group / Organization

Packt
04 Jun 2015
In this article by Piotr Filipowicz and Katarzyna Ziółkowska, authors of the book Liferay 6.x Portal Enterprise Intranets Cookbook, we will cover the basic functionalities that will allow us to manage the structure and users of the intranet. In this article, we will cover the following topics: Managing an organization structure Creating a new user group Adding a new user Assigning users to organizations Assigning users to user groups Exporting users (For more resources related to this topic, see here.) The first step in creating an intranet, beyond answering the question of who the users will be, is to determine its structure. The structure of the intranet is often a derivative of the organizational structure of the company or institution. Liferay Portal CMS provides several tools that allow mapping of a company's structure in the system. The hierarchy is built by organizations that match functional or localization departments of the company. Each organization represents one department or localization and assembles users who represent employees of these departments. However, sometimes, there are other groups of employees in the company. These groups exist beyond the company's organizational structure, and can be reflected in the system by the User Groups functionality. Managing an organization structure Building an organizational structure in Liferay resembles the process of managing folders on a computer drive. An organization may have its suborganizations and—except the first level organization—at the same time, it can be a suborganization of another one. This folder-similar mechanism allows you to create a tree structure of organizations. Let's imagine that we are obliged to create an intranet for a software development company. The company's headquarter is located in London. There are also two other offices in Liverpool and Glasgow. The company is divided into finance, marketing, sales, IT, human resources, and legal departments. Employees from Glasgow and Liverpool belong to the IT department. How to do it… In order to create a structure described previously, these are the steps: Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations. Click on the Add button. Choose the type of organization you want to create (in our example, it will be a regular organization called software development company, but it is also possible to choose a location). Provide a name for the top-level organization. Choose the parent organization (if a top-level organization is created, this must be skipped). Click on the Save button: Click on the Change button and upload a file, with a graphic representation of your company (for example, logo). Use the right column menu to navigate to data sections you want to fill in with the information. Click on the Save button. Go back to the Users and Organizations list by clicking on the back icon (the left-arrow icon next to the Edit Software Development Company header). Click on the Actions button, located near the name of the newly created organization. Choose the Add Regular Organization option. Provide a name for the organization (in our example, it is IT). Click on the Save button. Go back to the Users and Organizations list by clicking on the back icon (left-arrow icon next to Edit IT header). Click on the Actions button, located near the name of the newly created organization (in our case, it is IT). Choose the Add Location option. Provide a name for the organization (for instance, IT Liverpool). Provide a country. 
20. Provide a region (if available).
21. Click on the Save button.

How it works…

Let's take a look at what we did throughout the previous recipe. In steps 1 through 6, we created a new top-level organization called software development company. With steps 7 through 9, we defined a set of attributes of the newly created organization. Starting from step 11, we created suborganizations: a standard organization (IT) and its location (IT Liverpool).

Creating an organization

There are two types of organizations: regular organizations and locations. The regular organization provides the possibility to create a multilevel structure, each unit of which can have parent organizations and suborganizations (there is one exception: the top-level organization cannot have any parent organizations). The location is a special kind of organization that allows us to provide some additional data, such as country and region. However, it does not enable us to create suborganizations. When creating the tree of organizations, it is possible to combine regular organizations and locations, where, for instance, the top-level organization is a regular organization and both locations and regular organizations are used as child organizations. When creating a new organization, it is very important to choose the organization type wisely, because it is the only organization parameter that cannot be modified later.

As was described previously, organizations can be arranged in a tree structure. The position of the organization in a tree is determined by the parent organization parameter, which is set by creating a new organization or by editing an existing one. If the parent organization is not set, a top-level organization is always created. There are two ways of creating a suborganization. It is possible to add a new organization by using the Add button and choosing a parent organization manually. The other way is to go to a specific organization's action menu and choose the Add Regular Organization action. While creating a new organization using this option, the parent organization parameter will be set automatically.

Setting attributes

Just like its counterpart in reality, every organization in Liferay has a set of attributes that are grouped and can be modified through the organization profile form. This form is available after clicking on the Edit button from the organization's action list (see the There's more… section). All the available attributes are divided into the following groups:

The ORGANIZATION INFORMATION group, which contains the following sections:
The Details section, which allows us to change the organization name, parent organization, country, or region (available for locations only). The name of the organization is the only required organization parameter. It is used by the search mechanism to search for organizations. It is also a part of the URL address of the organization's sites.
The Organization Sites section, which allows us to enable the private and public pages of the organization's website.
The Categorization section, which provides tags and categories that can be assigned to an organization.
IDENTIFICATION, which groups the Addresses, Phone Numbers, Additional Email Addresses, Websites, and Services sections.
MISCELLANEOUS, which consists of:
The Comments section, which allows us to manage an organization's comments.
The Reminder Queries section, in which reminder queries for different languages can be set.
The Custom Fields section, which provides a tool to manage values of custom attributes defined for the organization.

Customizing an organization's functionality

Liferay provides the possibility to customize an organization's functionality. In the portal.properties file located in the portal-impl/src folder, there is a section called Organizations. All these settings can be overridden in the portal-ext.properties file. We mentioned that a top-level organization cannot have any parent organizations. If we look deeper into the portal settings, we can dig out the following properties:

organizations.rootable[regular-organization]=true
organizations.rootable[location]=false

These properties determine which type of organization can be created as a root organization. In many cases, users want to add a new organization type. To achieve this goal, it is necessary to set a few properties that describe the new type:

organizations.types=regular-organization,location,my-organization
organizations.rootable[my-organization]=false
organizations.children.types[my-organization]=location
organizations.country.enabled[my-organization]=false
organizations.country.required[my-organization]=false

The first property defines the list of available types. The second one disallows creating an organization of the new type as a root. The next one specifies the list of types that we can create as children; in our case, this is only the location type. The last two properties turn off the country list in the creation process. This option is useful when the location is not important.

Another interesting feature is the ability to customize an organization's profile form. It is possible to indicate which sections are available on the creation form and which are available on the modification form. The following properties aggregate this feature:

organizations.form.add.main=details,organization-site
organizations.form.add.identification=
organizations.form.add.miscellaneous=

organizations.form.update.main=details,organization-site,categorization
organizations.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,services
organizations.form.update.miscellaneous=comments,reminder-queries,custom-fields

There's more…

It is also possible to modify an existing organization and its attributes and to manage its members using the actions available in the organization's Actions menu. There are several possible actions that can be performed on an organization:

The Edit action allows us to modify the attributes of an organization.
The Manage Site action redirects the user to the Site Settings section in Control Panel and allows us to manage the organization's public and private sites (if the organization site has already been created).
The Assign Organization Roles action allows us to assign organization roles to members of an organization.
The Assign Users action allows us to assign users already existing in the Liferay database to the specific organization.
The Add User action allows us to create a new user, who will be automatically assigned to this specific organization.
The Add Regular Organization action enables us to create a new child regular organization (the current organization will be automatically set as the parent organization of the new one).
The Add Location action enables us to create a new location (the current organization will be automatically set as the parent organization of the new one).
The Delete action allows us to remove an organization. While removing an organization, all pages with portlets and content are also removed. An organization cannot be removed if there are suborganizations or users assigned to it.

In order to edit an organization, assign or add users, create a new suborganization (a regular organization or location), or delete an organization, perform the following steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Actions button, located near the name of the organization you want to modify.
3. Click on the name of the chosen action.

Creating a new user group

Sometimes, in addition to the hierarchy within the company, there are other groups of people linked by common interests or occupations, such as people working on a specific project, people occupying the same post, and so on. Such groups in Liferay are represented by user groups. This functionality is similar to LDAP user groups, where it is possible to set group permissions. One user can be assigned to many user groups.

How to do it…

In order to create a new user group, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | User Groups.
2. Click on the Add button.
3. Provide Name (required) and Description of the user group.
4. Leave the default values in the User Group Site section.
5. Click on the Save button.

How it works…

The user groups functionality allows us to create a collection of users and provide them with a public and/or private site, which contains a bunch of tools for collaboration. Unlike the organization, the user group cannot be used to produce a multilevel structure. It enables us to create non-hierarchical groups of users, which can be used by other functionalities. For example, a user group can be used as an additional information targeting tool for the announcements portlet, which presents short messages sent by authorized users (the announcements portlet allows us to direct a message to all users from a specific organization or user group). It is also possible to set permissions on a user group and decide which actions can be performed by which roles within this particular user group. It is worth noting that user groups can assemble users who are already members of organizations. This mechanism is often used when, aside from the company's organizational structure, there exist other groups of people who need a common place to store data or to exchange information.

There's more…

It is also possible to modify an existing user group and its attributes and to manage its members using the actions available in the user group's Actions menu. There are several possible actions that can be performed on a user group.
They are as follows:

The Edit action allows us to modify the attributes of a user group.
The Permissions action allows us to decide which roles can assign members to this user group, delete the user group, manage announcements, set permissions, and update or view the user group.
The Manage Site Pages action redirects the user to the Site Settings section in Control Panel and allows us to manage the user group's public and private sites.
The Go to the Site's Public Pages action opens the user group's public pages in a new window (if any public pages of the User Group Site have been created).
The Go to the Site's Private Pages action opens the user group's private pages in a new window (if any private pages of the User Group Site have been created).
The Assign Members action allows us to assign users already existing in the Liferay database to this specific user group.
The Delete action allows us to delete a user group. A user group cannot be removed if there are users assigned to it.

In order to edit a user group, set permissions, assign members, manage site pages, or delete a user group, perform these steps:

1. Go to Admin | Control Panel | Users | User Groups.
2. Click on the Actions button, located near the name of the user group you want to modify.
3. Click on the name of the chosen action.

Adding a new user

Each system is created for users. Liferay Portal CMS provides a few different ways of adding users to the system, which can be enabled or disabled depending on the requirements. The first way is to let users create their own accounts via the Create Account form. This functionality allows all users who can enter the site containing the form to register and gain access to the designated content of the website. In this case, the system automatically assigns the default user account parameters, which indicate the range of activities that may be carried out by them in the system. The second solution (which we presented in this recipe) is to reserve user account creation for the administrators, who will decide what parameters should be assigned to each account.

How to do it…

To add a new user, you need to follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Add button.
3. Choose the User option.
4. Fill in the form by providing the user's details in the Email Address (Required), Title, First Name (Required), Middle Name, Last Name, Suffix, Birthday, and Job Title fields (if the Autogenerated User Screen Names option in the Portal Settings | Users section is disabled, the screen name field will be available).
5. Click on the Save button.
6. Using the right column menu, navigate to the data sections you want to fill in with the information.
7. Click on the Save button.

How it works…

In steps 1 through 5, we created a new user. With steps 6 and 7, we defined a set of attributes of the newly created user. This user is active and can already perform activities according to their memberships and roles. To understand all the mechanisms that influence the user's possible behavior in the system, we have to take a deeper look at these attributes.

User as a member of organizations, user groups, and sites

The first and most important thing to know about users is that they can be members of organizations, user groups, and sites. The range of activities performed by users within each organization, user group, or site they belong to is determined by the roles assigned to them. Roles must be assigned to each user for each organization and site individually.
This means it is possible, for instance, to make a user the administrator of one organization and only a power user of another.

User attributes

Each user in Liferay has a set of attributes that are grouped and can be modified through the user profile form. This form is available after clicking on the Edit button from the user's actions list (see the There's more… section). All the available attributes are divided into the following groups:

USER INFORMATION, which contains the following sections:
The Details section enables us to provide basic user information, such as Screen Name, Email Address, Title, First Name, Middle Name, Last Name, Suffix, Birthday, Job Title, and Avatar.
The Password section allows us to set a new password or force a user to change their current password.
The Organizations section enables us to choose the organizations of which the user is a member.
The Sites section enables us to choose the sites of which the user is a member.
The User Groups section enables us to choose the user groups of which the user is a member.
The Roles section allows us to assign user roles.
The Personal Site section gives access to the user's personal public and private sites.
The Categorization section provides tags and categories, which can be assigned to a user.
IDENTIFICATION allows us to set additional user information, such as Addresses, Phone Numbers, Additional Email Addresses, Websites, Instant Messenger, Social Network, SMS, and OpenID.
MISCELLANEOUS, which contains the following sections:
The Announcements section allows us to set the delivery options for alerts and announcements.
The Display Settings section covers the Language, Time Zone, and Greeting text options.
The Comments section allows us to manage the user's comments.
The Custom Fields section provides a tool to manage values of custom attributes defined for the user.

User site

As mentioned earlier, each user in Liferay may have access to different kinds of sites: organization sites, user group sites, and standalone sites. In addition to these, however, users may also have their own public and private sites, which can be managed by them. The user's public and private sites can be reached from the user's menu located on the dockbar (the My Profile and My Dashboard links). It is also possible to enter these sites using their addresses, which are /web/username/home and /user/username/home, respectively.

Customizing users

Liferay gives us a whole bunch of settings in portal.properties under the Users section. If you want to override some of the properties, put them into the portal-ext.properties file. It is possible to prevent users from being deleted by setting the following property:

users.delete=false

As in the case of organizations, there is a functionality that lets us customize the sections on the creation and modification forms:

users.form.add.main=details,organizations,personal-site
users.form.add.identification=
users.form.add.miscellaneous=

users.form.update.main=details,password,organizations,sites,user-groups,roles,personal-site,categorization
users.form.update.identification=addresses,phone-numbers,additional-email-addresses,websites,instant-messenger,social-network,sms,open-id
users.form.update.miscellaneous=announcements,display-settings,comments,custom-fields

There are many other properties, but we will not discuss all of them. In portal.properties, located in the portal-impl/src folder, under the Users section, it is possible to find all the settings, and every line is documented by a comment.
There's more…

Each user in the system can be active or inactive. An active user can log into their user account and use all the resources available to them within their roles and memberships. An inactive user cannot enter their account or access the places and perform the activities that are reserved for authorized and authenticated users only. It is worth noticing that active users cannot be deleted. In order to remove a user from Liferay, you need to deactivate them first.

To deactivate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the active user you want to deactivate.
4. Click on the Actions button located near the name of the user.
5. Click on the Deactivate button.
6. Confirm this action by clicking on the Ok button.

To activate a user, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Go to the All Users tab.
3. Find the inactive user you want to activate.
4. Click on the Actions button located near the name of the user.
5. Click on the Activate button.

Sometimes, when using the system, users report some irregularities or get a little confused and require assistance, and you need to look at the page through the user's eyes. Liferay provides a very useful functionality that allows authorized users to impersonate another user. In order to use this functionality, perform these steps:

1. Log in as an administrator and go to Control Panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the user.
3. Click on the Impersonate user button.

See also

For more information on managing users, refer to the Exporting users recipe from this article

Assigning users to organizations

There are several ways a user can be assigned to an organization. It can be done by editing the user account that has already been created (see the User attributes section in the Adding a new user recipe) or using the Assign Users action from the organization's actions menu. In this recipe, we will show you how to assign a user to an organization using the option available in the organization's actions menu.

Getting ready

To go through this recipe, you will need an organization and a user (refer to the Managing an organization structure and Adding a new user recipes from this article).

How to do it…

In order to assign a user to an organization from the organization menu, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Actions button located near the name of the organization to which you want to assign the user.
3. Choose the Assign Users option.
4. Click on the Available tab.
5. Mark a user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

Each user in Liferay can be assigned to as many regular organizations as required and to exactly one location. When a user is assigned to the organization, they appear on the list of users of the organization. They become members of the organization and gain access to the organization's public and private pages according to the assigned roles and permissions. As was shown in this recipe, while editing the list of assigned users in the organization menu, it is possible to assign multiple users. It is worth noting that, by default, an administrator can only assign users from the organizations and suborganizations that she or he can manage.
To allow any administrator of an organization to be able to assign any user to that organization, set the following property in the portal-ext.properties file:

organizations.assignment.strict=false

In many cases, when our organizations have a tree structure, it is not necessary that a member of a child organization has access to the ancestral ones. To disable this behavior, set the following property:

organizations.membership.strict=true

See also

For information on how to create user accounts, refer to the Adding a new user recipe from this article
For information on assigning users to user groups, refer to the Assigning users to a user group recipe from this article

Assigning users to a user group

In addition to being a member of an organization, each user can be a member of one or more user groups. As a member of a user group, a user can benefit from access to the user group's sites or other information directed exclusively to its members, for instance, messages sent by the Announcements portlet. A user becomes a member of the group when they are assigned to it. This assignment can be done by editing the user account that has already been created (see the User attributes description in the Adding a new user recipe) or using the Assign Members action from the User Groups actions menu. In this recipe, we will show you how to assign a user to a user group using the option available in the User Groups actions menu.

Getting ready

To step through this recipe, first, you have to create a user group and a user (see the Creating a new user group and Adding a new user recipes).

How to do it…

In order to assign a user to a user group from the User Groups menu, perform these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | User Groups.
2. Click on the Actions button located near the name of the user group to which you want to assign the user.
3. Click on the Assign Members button.
4. Click on the Available tab.
5. Mark a user or group of users you want to assign.
6. Click on the Update Associations button.

How it works…

As was shown in this recipe, one or more users can be assigned to a user group by editing the list of assigned users in the user group menu. Each user assigned to a user group becomes a member of this group and gains access to the user group's public and private pages according to the assigned roles and permissions.

See also

For information on how to create user accounts, refer to the Adding a new user recipe from this article
For information about assigning users to organizations, refer to the Assigning users to organizations recipe from this article

Exporting users

Liferay Portal CMS provides a simple export mechanism, which allows us to export a list of all the users stored in the database or a list of all the users from a specific organization to a file.

How to do it…

In order to export the list of all users from the database to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the Export Users button.

In order to export the list of all users from a specific organization to a file, follow these steps:

1. Log in as an administrator and go to Admin | Control Panel | Users | Users and Organizations.
2. Click on the All Organizations tab.
3. Click on the name of the organization from which the users are supposed to be exported.
4. Click on the Export Users button.

How it works…

As mentioned previously, Liferay allows us to export users from a particular organization to a .csv file.
The .csv file contains a list of user names and corresponding e-mail addresses. It is also possible to export all the users by clicking on the Export Users button located on the All Users tab. You will find this tab by going to Admin | Control Panel | Users | Users and Organizations.

See also

For information on how to create user accounts, refer to the Adding a new user recipe from this article
For information on how to assign users to organizations, refer to the Assigning users to organizations recipe from this article

Summary

In this article, you have learnt how to manage an organization structure by creating users and assigning them to organizations and user groups. You have also learnt how to export users using Liferay's export mechanism.

Resources for Article:

Further resources on this subject:
Cache replication [article]
Portlet [article]
Liferay, its Installation and setup [article]
Automation with Python and STAF/STAX

Packt
23 Oct 2009
13 min read
The reader should note that the solution is only intended to explain how Python and STAF may be used. No claim is made that the solution presented here is the best one in any way, just that it is one more option that the reader may consider in future developments.

The Problem

Let's imagine that we have a computer network in which a machine periodically generates some kind of file with information that is of interest to other machines in that network. For example, let's say that this file is a new software build of a product that must be transferred to a group of remote machines, on which its functionality has to be tested to make sure it can be delivered to the client.

The Python-only solution

Sequential

A simple solution to make the software build available to all the testing machines could be to copy it to a specific directory whenever a new file is available. For additional security, let's suppose that we're required to verify that the md5 sum of both the original and destination files is equal, to ensure that the build file was copied correctly. If we consider /tmp to be a good destination directory, then the following script will do the job:

#!/usr/bin/python
"""
Copy a given file to a list of destination machines sequentially
"""

import os, argparse
import subprocess
import logging

def main(args):
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    # Calculate md5 sum before copying the file
    orig_md5 = run_command("md5sum %s" % args.file).split()[0]

    # Copy the file to every requested machine and verify
    # that the md5 sum of the destination file is equal
    # to the md5 sum of the original file
    for machine in args.machines:
        run_command("scp %s %s:/tmp/" % (args.file, machine))
        dest_md5 = run_command("ssh %s md5sum /tmp/%s"
                               % (machine, os.path.basename(args.file))).split()[0]
        assert orig_md5 == dest_md5

def run_command(command_str):
    """
    Run a given command in another process and return stdout
    """
    logging.info(command_str)
    return subprocess.Popen(command_str, stdout=subprocess.PIPE,
                            shell=True).communicate()[0]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("file",
                        help="File to copy")
    parser.add_argument(metavar="machine", dest="machines", nargs="+",
                        help="List of machines to which file must be copied")

    args = parser.parse_args()
    args.file = os.path.realpath(args.file)
    main(args)

Here it is assumed that ssh keys have been exchanged between the origin and destination machines for automatic authentication without human intervention. The script makes use of the Popen class from the subprocess module of the Python standard library. This powerful module provides the capability to launch new operating system processes and capture not only the result code, but also the standard output and error streams. It should be taken into account that the Popen class cannot be used to invoke commands on a remote machine by itself. However, as can be seen in the code, ssh and related commands may be used to launch processes on remote machines when configured properly.
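One detail worth noting about run_command is that it never inspects the exit status of the command, so a failing scp or ssh call would only surface indirectly, when the md5 sums do not match. As a hedged variation of my own (the helper name and behavior are not part of the original script), the command runner could check the return code and capture stderr, roughly like this:

import logging
import subprocess

def run_command_checked(command_str):
    """Run a shell command, log it, and fail loudly on a non-zero exit status."""
    logging.info(command_str)
    process = subprocess.Popen(command_str,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               shell=True)
    stdout, stderr = process.communicate()
    if process.returncode != 0:
        # Surface remote ssh/scp failures immediately instead of waiting
        # for the md5 comparison to fail later on
        raise RuntimeError("command failed with code %s: %s (stderr: %r)"
                           % (process.returncode, command_str, stderr))
    return stdout

Swapping this helper in for run_command would keep the rest of the script unchanged while making copy failures visible as soon as they happen.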
For example, if the file of interest was STAF325-src.tar.gz (the STAF 3.2.5 source) and the remote machines were 192.168.1.1 and 192.168.1.2, then the file would be copied using the copy.py script in the following way:

$ ./copy.py STAF325-src.tar.gz 192.168.1.{1,2}
md5sum STAF325-src.tar.gz
scp STAF325-src.tar.gz 192.168.1.1:/tmp/
ssh 192.168.1.1 md5sum /tmp/STAF325-src.tar.gz
scp STAF325-src.tar.gz 192.168.1.2:/tmp/
ssh 192.168.1.2 md5sum /tmp/STAF325-src.tar.gz

Parallel

What would happen if the files were copied in parallel? For this example, it might not make much sense, given that the network is probably the bottleneck and there isn't any increase in performance. However, in the case of the md5sum operation, it's a waste of time waiting for the operation to complete on one machine while the other is essentially idle waiting for the next command. Clearly, it would be more interesting to make both machines do the job in parallel to take advantage of CPU cycles. A parallel implementation similar to the sequential one is displayed below:

#!/usr/bin/python
"""
Copy a given file to a list of destination machines in parallel
"""

import os, argparse
import subprocess
import logging
import threading

def main(args):
    logging.basicConfig(level=logging.INFO, format="%(threadName)s: %(message)s")
    orig_md5 = run_command("md5sum %s" % args.file).split()[0]

    # Create one thread per machine
    threads = [ WorkingThread(machine, args.file, orig_md5)
                for machine in args.machines]

    # Run all threads
    for thread in threads:
        thread.start()

    # Wait for all threads to finish
    for thread in threads:
        thread.join()

class WorkingThread(threading.Thread):
    """
    Thread that performs the copy operation for one machine
    """
    def __init__(self, machine, orig_file, orig_md5):
        threading.Thread.__init__(self)

        self.machine = machine
        self.file = orig_file
        self.orig_md5 = orig_md5

    def run(self):
        # Copy file to remote machine
        run_command("scp %s %s:/tmp/" % (self.file, self.machine))

        # Calculate md5 sum of the file copied at the remote machine
        dest_md5 = run_command("ssh %s md5sum /tmp/%s"
                               % (self.machine, os.path.basename(self.file))).split()[0]
        assert self.orig_md5 == dest_md5

def run_command(command_str):
    """
    Run a given command in another process and return stdout
    """
    logging.info(command_str)
    return subprocess.Popen(command_str, stdout=subprocess.PIPE,
                            shell=True).communicate()[0]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("file",
                        help="File to copy")
    parser.add_argument(metavar="machine", dest="machines", nargs="+",
                        help="List of machines to which file must be copied")

    args = parser.parse_args()
    args.file = os.path.realpath(args.file)
    main(args)

Here the same assumptions as in the sequential case are made. In this solution, the work that was done inside the for loop is now implemented in the run method of a class that inherits from the threading.Thread class, which provides an easy way to create worker threads such as the ones in the example.
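The WorkingThread subclass mirrors the structure of many threading examples, but since Python 3.2 the standard library also offers concurrent.futures, which handles the thread bookkeeping for us. The following sketch is only an illustration of that alternative and is not part of the original article; it reuses the run_command helper defined in the script above, and the function names are my own:

import os
from concurrent.futures import ThreadPoolExecutor

def copy_and_verify(machine, orig_file, orig_md5):
    """Same job as WorkingThread.run(): copy the file and compare md5 sums."""
    run_command("scp %s %s:/tmp/" % (orig_file, machine))
    dest_md5 = run_command("ssh %s md5sum /tmp/%s"
                           % (machine, os.path.basename(orig_file))).split()[0]
    assert orig_md5 == dest_md5

def copy_parallel(orig_file, machines):
    orig_md5 = run_command("md5sum %s" % orig_file).split()[0]
    # One worker per machine; the with block joins all threads on exit
    with ThreadPoolExecutor(max_workers=len(machines)) as executor:
        futures = [executor.submit(copy_and_verify, machine, orig_file, orig_md5)
                   for machine in machines]
        # result() re-raises any exception (including a failed assert) from the worker
        for future in futures:
            future.result()

Whether this reads better than the explicit Thread subclass is largely a matter of taste; the point is simply that the same parallel pattern is available without managing start() and join() by hand.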
In this case, the output of copy_parallel.py, using the same arguments as in the previous example, is:

$ ./copy_parallel.py STAF325-src.tar.gz 192.168.1.{1,2}
MainThread: md5sum STAF325-src.tar.gz
Thread-1: scp STAF325-src.tar.gz 192.168.1.1:/tmp/
Thread-2: scp STAF325-src.tar.gz 192.168.1.2:/tmp/
Thread-2: ssh 192.168.1.2 md5sum /tmp/STAF325-src.tar.gz
Thread-1: ssh 192.168.1.1 md5sum /tmp/STAF325-src.tar.gz

As can be seen in the logs, the md5sum commands aren't necessarily executed in the same order in which the threads were created. This solution isn't much more complex than the sequential one, but it finishes earlier. Hence, in cases in which a CPU-intensive task must be performed on every machine, the parallel solution will be more convenient, since the small increment in coding complexity will pay off in execution performance.

The Python+STAF solution

Sequential

The solutions to the problem presented in the previous section are perfectly fine. However, some developers may find it cumbersome to write scripts from scratch using the Popen class and prefer to work with a platform in which features such as launching processes on remote machines are already implemented. That's where STAF (Software Testing Automation Framework) might be helpful. STAF is a framework that provides the ability to automate jobs especially, but not exclusively, for testing environments. STAF is implemented as a process that runs on every machine and provides services that may be used by clients to accomplish different tasks. For more information regarding STAF, please refer to the project homepage. The Python+STAF sequential version of the program that has been used as an example throughout this article is below:

1 #!/usr/bin/python
2 """
3 Copy a given file to a list of destination machines sequentially
4 """
5
6 import os, argparse
7 import subprocess
8 import logging
9 import PySTAF
10
11 def main(args):
12     logging.basicConfig(level=logging.INFO, format="%(message)s")
13     handle = PySTAF.STAFHandle(__file__)
14
15     # Calculate md5 sum before copying the file
16     orig_md5 = run_process_command(handle, "local", "md5sum %s" % args.file).split()[0]
17
18     # Copy the file to every requested machine and verify
19     # that the md5 sum of the destination file is equal
20     # to the md5 sum of the original file
21     for machine in args.machines:
22         copy_file(handle, args.file, machine)
23         dest_md5 = run_process_command(handle, machine, "md5sum /tmp/%s"
24                                        % os.path.basename(args.file)).split()[0]
25         assert orig_md5 == dest_md5
26
27     handle.unregister()
28
29 def run_process_command(handle, location, command_str):
30     """
31     Run a given command in another process and return stdout
32     """
33     logging.info(command_str)
34
35     result = handle.submit(location, "PROCESS", "START SHELL COMMAND %s WAIT RETURNSTDOUT"
36                            % PySTAF.STAFWrapData(command_str))
37     assert result.rc == PySTAF.STAFResult.Ok
38
39     mc = PySTAF.unmarshall(result.result)
40     return mc.getRootObject()['fileList'][0]['data']
41
42 def copy_file(handle, filename, destination):
43     """
44     Copy a given file to the /tmp directory of the destination machine
45     """
46     logging.info("copying %s to %s" % (filename, destination))
47
48     result = handle.submit("local", "FS", "COPY FILE %s TODIRECTORY /tmp TOMACHINE %s"
49                            % (PySTAF.STAFWrapData(filename),
50                               PySTAF.STAFWrapData(destination)))
51     assert result.rc == PySTAF.STAFResult.Ok
52
53 if __name__ == "__main__":
54     parser = argparse.ArgumentParser(description=__doc__)
55     parser.add_argument("file",
56                         help="File to copy")
57     parser.add_argument(metavar="machine", dest="machines", nargs="+",
58                         help="List of machines to which file must be copied")
59
60     args = parser.parse_args()
61     args.file = os.path.realpath(args.file)
62     main(args)

The code makes use of PySTAF, a Python library shipped with the STAF software, which provides the ability to interact with the framework as a client. The typical usage of the library may be summarized as follows:

Register a handle in STAF (line 13): The communication with the server process is managed using handles. A client must have a handle to be able to send requests to local and/or remote machines.
Submit requests (lines 35 and 48): Once the handle is available at the client, the client can use it to submit requests to any location and service. The two basic services that are used in this example are PROCESS, which is used to launch processes on a machine the same way ssh was used in the Python-only version of the example; and FS, which is used to copy files between different machines, as scp was used in the Python-only solution.
Check the result code (lines 37 and 51): After a request has been submitted, the result code should be checked to make sure that there wasn't any communication or syntax problem.
Unmarshall results (lines 39-40): When the standard output is captured, it must be unmarshalled before using it in Python, since responses are encoded in a language-independent format.
Unregister the handle (line 27): When STAF isn't needed anymore, it's advisable to unregister the handle to free the resources allocated to the client in the server.
Compared with the Python-only solution, the advantages of STAF aren't appreciable at first sight. The handle syntax isn't easier than creating Popen objects, and we have to deal with marshalling where we previously were just parsing text. However, as a framework, it has to be taken into account that it has a learning curve and much more functionality to offer than what is shown here, which makes it worthwhile. Please bear with me until section 5, in which the STAX solution will be shown, with an example that takes a completely different approach to the problem. Using the script in this section, the output would be pretty much the same as in the previous sequential example:

$ ./staf_copy.py STAF325-src.tar.gz 192.168.1.{1,2}
md5sum STAF325-src.tar.gz
copying STAF325-src.tar.gz to 192.168.1.1
md5sum /tmp/STAF325-src.tar.gz
copying STAF325-src.tar.gz to 192.168.1.2
md5sum /tmp/STAF325-src.tar.gz

As in the previous section, the sequential solution suffers from the same problems when CPU-intensive tasks are to be performed. Hence, the same comments apply.

Parallel

When using STAF, the parallel solution requires the same changes that were explained before. That is, create a new class that inherits from threading.Thread and implement the working threads. The code below shows how this might be implemented:

#!/usr/bin/python
"""
Copy a given file to a list of destination machines in parallel
"""

import os, argparse
import subprocess
import logging
import threading
import PySTAF

def main(args):
    logging.basicConfig(level=logging.INFO, format="%(threadName)s %(message)s")
    handle = PySTAF.STAFHandle(__file__)
    orig_md5 = run_process_command(handle, "local", "md5sum %s" % args.file).split()[0]

    # Create one thread per machine
    threads = [ WorkingThread(machine, args.file, orig_md5)
                for machine in args.machines]

    # Run all threads
    for thread in threads:
        thread.start()

    # Wait for all threads to finish
    for thread in threads:
        thread.join()

    handle.unregister()

class WorkingThread(threading.Thread):
    """
    Thread that performs the copy operation for one machine
    """
    def __init__(self, machine, orig_file, orig_md5):
        threading.Thread.__init__(self)

        self.machine = machine
        self.file = orig_file
        self.orig_md5 = orig_md5
        self.handle = PySTAF.STAFHandle("%s:%s" % (__file__, self.getName()))

    def run(self):
        # Copy file to remote machine
        copy_file(self.handle, self.file, self.machine)

        # Calculate md5 sum of the file copied at the remote machine
        dest_md5 = run_process_command(self.handle, self.machine, "md5sum /tmp/%s"
                                       % os.path.basename(self.file)).split()[0]
        assert self.orig_md5 == dest_md5
        self.handle.unregister()

def run_process_command(handle, location, command_str):
    """
    Run a given command in another process and return stdout
    """
    logging.info(command_str)

    result = handle.submit(location, "PROCESS", "START SHELL COMMAND %s WAIT RETURNSTDOUT"
                           % PySTAF.STAFWrapData(command_str))
    assert result.rc == PySTAF.STAFResult.Ok

    mc = PySTAF.unmarshall(result.result)
    return mc.getRootObject()['fileList'][0]['data']

def copy_file(handle, filename, destination):
    """
    Copy a given file to the /tmp directory of the destination machine
    """
    logging.info("copying %s to %s" % (filename, destination))

    result = handle.submit("local", "FS", "COPY FILE %s TODIRECTORY /tmp TOMACHINE %s"
                           % (PySTAF.STAFWrapData(filename),
                              PySTAF.STAFWrapData(destination)))
    assert result.rc == PySTAF.STAFResult.Ok

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("file",
                        help="File to copy")
    parser.add_argument(metavar="machine", dest="machines", nargs="+",
                        help="List of machines to which file must be copied")

    args = parser.parse_args()
    args.file = os.path.realpath(args.file)
    main(args)

As happened before, this solution is faster since it takes advantage of having multiple CPUs working on the md5sum calculation instead of just one at a time. The output we get when invoking the script could be:

$ ./staf_copy_parallel.py STAF325-src.tar.gz 192.168.1.{1,2}
MainThread md5sum STAF325-src.tar.gz
Thread-1 copying STAF325-src.tar.gz to 192.168.1.1
Thread-2 copying STAF325-src.tar.gz to 192.168.1.2
Thread-2 md5sum /tmp/STAF325-src.tar.gz
Thread-1 md5sum /tmp/STAF325-src.tar.gz

This time it can be seen that the md5sum calculation doesn't necessarily start in the same order as the file copy operations. Once again, this solution is slightly more complex, but the gain in performance makes it convenient when dealing with tasks with a high computational cost.
Filtering a sequence

Packt
02 Jun 2015
5 min read
In this article by Ivan Morgillo, the author of RxJava Essentials, we will approach Observable filtering with RxJava's filter(). We will manipulate a list of installed apps to show only a subset of this list, according to our criteria.

(For more resources related to this topic, see here.)

Filtering a sequence with RxJava

RxJava lets us use filter() to keep the values that we don't want out of the sequence that we are observing. In this example, we will use a list, but we will filter it, passing to the filter() function the proper predicate to include only the values we want. We are using loadList() to create an Observable sequence, filter it, and populate our adapter:

private void loadList(List<AppInfo> apps) {
    mRecyclerView.setVisibility(View.VISIBLE);

    Observable.from(apps)
            .filter((appInfo) ->
                    appInfo.getName().startsWith("C"))
            .subscribe(new Observer<AppInfo>() {
                @Override
                public void onCompleted() {
                    mSwipeRefreshLayout.setRefreshing(false);
                }

                @Override
                public void onError(Throwable e) {
                    Toast.makeText(getActivity(), "Something went south!",
                            Toast.LENGTH_SHORT).show();
                    mSwipeRefreshLayout.setRefreshing(false);
                }

                @Override
                public void onNext(AppInfo appInfo) {
                    mAddedApps.add(appInfo);
                    mAdapter.addApplication(mAddedApps.size() - 1, appInfo);
                }
            });
}

We have added the following line to the loadList() function:

.filter((appInfo) -> appInfo.getName().startsWith("C"))

After the creation of the Observable, we are filtering out every emitted element that has a name starting with a letter that is not C. Let's have it in Java 7 syntax too, to clarify the types here:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo.getName().startsWith("C");
    }
})

We are passing a new Func1 object to filter(), that is, a function having just one parameter. The Func1 object has an AppInfo object as its parameter type and it returns a Boolean object. The function returns true only if the condition is verified; at that point, the value is emitted and received by all the Observers. As you can imagine, filter() is critically useful for creating exactly the sequence that we need from the Observable sequence we get. We don't need to know the source of the Observable sequence or why it's emitting tons of different elements. We just want a useful subset of those elements to create a new sequence we can use in our app. This mindset reinforces the separation and abstraction skills of our day-to-day coding. One of the most common uses of filter() is filtering out null objects:

.filter(new Func1<AppInfo, Boolean>() {
    @Override
    public Boolean call(AppInfo appInfo) {
        return appInfo != null;
    }
})

This seems to be trivial, and there is a lot of boilerplate code for something this trivial, but it will save us from checking for null values in the onNext() call, letting us focus on the actual app logic. As a result of our filtering, the next figure shows the installed apps list, filtered by names starting with C.

Summary

In this article, we introduced the RxJava filter() function and we used it in a real-world example in an Android app.
RxJava offers a lot more functions that allow you to filter and manipulate Observable sequences. A comprehensive list of methods, scenarios, and examples is available in RxJava Essentials, which will take you on a step-by-step journey from the basics of the Observer pattern to composing Observables and querying REST APIs using RxJava.

Resources for Article:

Further resources on this subject:
Android Native Application API [article]
Android Virtual Device Manager [article]
Putting It All Together – Community Radio [article]
Reactive Python – Asynchronous programming to the rescue, Part 1

Xavier Bruhiere
05 Oct 2016
7 min read
On the Confluent website, you can find this title: Stream data changes everything. From the creators of Kafka, a real-time messaging system, this is not a surprising assertion. Yet, data streaming infrastructures have gained in popularity and many projects require the data to be processed as soon as it shows up. This contributed to the development of famous technologies like Spark Streaming, Apache Storm, and, more broadly, websockets. This last piece of software in particular brought real-time data feeds to web applications, trying to solve low-latency connections. Coupled with the asynchronous Node.js, you can build a powerful event-based reactive system. But what about Python? Given the popularity of the language in data science, would it be possible to bring the benefits of this kind of data ingestion? As this two-part post series will show, it turns out that modern Python (Python 3.4 or later) supports asynchronous data streaming apps.

Introducing asyncio

Python 3.4 introduced the asyncio module in the standard library to provision the language with:

Asynchronous I/O, event loop, coroutines and tasks

While Python treats functions as first-class objects (meaning you can assign them to variables and pass them as arguments), most developers follow an imperative programming style. It seems on purpose:

It requires super human discipline to write readable code in callbacks and if you don't believe me look at any piece of JavaScript code. - Guido van Rossum

So asyncio is the pythonic answer to asynchronous programming. This paradigm makes a lot of sense for otherwise costly I/O operations or when we need events to trigger code.

Scenario

For fun and profit, let's build such a project. We will simulate a dummy electrical circuit composed of three components:

A clock regularly ticking
A board I/O pin randomly choosing to toggle its binary state on clock events
A buzzer buzzing when the I/O pin flips to one

This sets us up with an interesting machine-to-machine communication problem to solve. Note that the code snippets in this post make use of features like async and await introduced in Python 3.5. While it would be possible to backport them to Python 3.4, I highly recommend that you follow along with the same version or newer. Anaconda or Pyenv can ease the installation process if necessary.

$ python --version
Python 3.5.1
$ pip --version
pip 8.1.2

Asynchronous websocket Client/Server

Our first step, the clock, will introduce both asyncio and websocket basics. We need a straightforward method that fires tick signals through a websocket and waits for acknowledgement.

# filename: sketch.py

async def clock(socket, port, tacks=3, delay=1)

The async keyword is syntactic sugar introduced in Python 3.5 to replace the previous @asyncio.coroutine decorator. The official PEP 492 explains it all, but the TL;DR is: API quality.
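If async and await are new to you, it may help to see the smallest possible asyncio program before adding websockets to the mix. The following sketch is mine, not part of the original post; it only uses the standard library and runs a single coroutine to completion on the event loop:

# filename: hello_asyncio.py (illustrative only)
import asyncio

async def greet(name, delay=1):
    # Suspends this coroutine without blocking the event loop
    await asyncio.sleep(delay)
    print('hello {}'.format(name))

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # Schedule the coroutine and block until it finishes
    loop.run_until_complete(greet('asyncio'))
    loop.close()

The same get_event_loop() and run_until_complete() calls reappear later in the glue script that starts the clock.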
To simplify the websocket connection plumbing, we can take advantage of the eponymous package: pip install websockets==3.5.1. It hides the protocol's complexity behind an elegant context manager.

# filename: sketch.py

# the path "datafeed" in this uri will be a parameter available on the other side
# but we won't use it for this example
uri = 'ws://{socket}:{port}/datafeed'.format(socket=socket, port=port)

# manage the connection asynchronously
async with websockets.connect(uri) as ws:
    for payload in range(tacks):
        print('[ clock ] > {}'.format(payload))
        # send payload and wait for acknowledgement
        await ws.send(str(payload))
        print('[ clock ] < {}'.format(await ws.recv()))
        time.sleep(delay)

The keyword await was introduced with async and replaces the old yield from to read values from asynchronous functions. Inside the context manager the connection stays open and we can stream data to the server we contacted.

The server: IOPin

At the core of our application are entities capable of speaking to each other directly. To make things fun, we will expose the same API as Arduino sketches, that is, a setup method that runs once at startup and a loop called when new data is available.

# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: factory.py

import abc
import asyncio
import websockets

class FactoryLoop(object):
    """ Glue components to manage the evented-loop model. """

    __metaclass__ = abc.ABCMeta

    def __init__(self, *args, **kwargs):
        # call user-defined initialization
        self.setup(*args, **kwargs)

    def out(self, text):
        print('[ {} ] {}'.format(type(self).__name__, text))

    @abc.abstractmethod
    def setup(self, *args, **kwargs):
        pass

    @abc.abstractmethod
    async def loop(self, channel, data):
        pass

    def run(self, host, port):
        try:
            server = websockets.serve(self.loop, host, port)
            self.out('serving on {}:{}'.format(host, port))
            asyncio.get_event_loop().run_until_complete(server)
            asyncio.get_event_loop().run_forever()
        except OSError:
            self.out('Cannot bind to this port! Is the server already running?')
        except KeyboardInterrupt:
            self.out('Keyboard interruption, aborting.')
            asyncio.get_event_loop().stop()
        finally:
            asyncio.get_event_loop().close()

The child objects will be required to implement setup and loop, while this class will take care of:

Initializing the sketch
Registering a websocket server based on an asynchronous callback (loop)
Telling the event loop to poll for... events

The websockets package states that the server callback is expected to have the signature on_connection(websocket, path). This is too low-level for our purpose. Instead, we can write a decorator to manage the asyncio details, message passing, and error handling. We will only call self.loop with application-level-relevant information: the actual message and the websocket path.

# filename: factory.py

import functools
import websockets

def reactive(fn):

    @functools.wraps(fn)
    async def on_connection(klass, websocket, path):
        """Dispatch events and wrap execution."""
        klass.out('** new client connected, path={}'.format(path))
        # process messages as long as the connection is opened or
        # an error is raised
        while True:
            try:
                message = await websocket.recv()
                acknowledgement = await fn(klass, path, message)
                await websocket.send(acknowledgement or 'n/a')
            except websockets.exceptions.ConnectionClosed as e:
                klass.out('done processing messages: {}\n'.format(e))
                break
    return on_connection

Now we can develop a readable IOPin object.
# filename: sketch.py

import random

import factory

class IOPin(factory.FactoryLoop):
    """Set an IO pin to 0 or 1 randomly."""

    def setup(self, chance=0.5, sequence=3):
        self.chance = chance
        self.sequence = sequence

    def state(self):
        """Toggle state, sometimes."""
        return 0 if random.random() < self.chance else 1

    @factory.reactive
    async def loop(self, channel, msg):
        """Callback on new data."""
        self.out('new tick triggered on {}: {}'.format(channel, msg))
        bits_stream = [self.state() for _ in range(self.sequence)]
        self.out('toggling pin state: {}'.format(bits_stream))
        # ...
        # ... toggle pin state here
        # ...
        return 'acknowledged'

We finally need some glue to run both the clock and IOPin and test whether the latter toggles its state when the former fires new ticks. The following snippet uses a convenient library, click 6.6, to parse command-line arguments.

#! /usr/bin/env python
# -*- coding: utf-8 -*-
# vim_fenc=utf-8
#
# filename: arduino.py

import sys
import asyncio

import click

import sketchs

@click.command()
@click.argument('sketch')
@click.option('-s', '--socket', default='localhost', help='Websocket to bind to')
@click.option('-p', '--port', default=8765, help='Websocket port to bind to')
@click.option('-t', '--tacks', default=5, help='Number of clock ticks')
@click.option('-d', '--delay', default=1, help='Clock intervals')
def main(sketch, **flags):
    if sketch == 'clock':
        # delegate the asynchronous execution to the event loop
        asyncio.get_event_loop().run_until_complete(sketchs.clock(**flags))
    elif sketch == 'iopin':
        # arguments in the constructor go as is to our `setup` method
        sketchs.IOPin(chance=0.6).run(flags['socket'], flags['port'])
    else:
        print('unknown sketch, please choose clock, iopin or buzzer')
        return 1
    return 0

if __name__ == '__main__':
    sys.exit(main())

Don't forget to chmod +x the script and start the server in a first terminal with ./arduino.py iopin. When it is listening for connections, start the clock with ./arduino.py clock and watch them communicate! Note that we used the same default host and port here so that they can find each other. We have a good start with our app, and now in Part 2 we will further explore peer-to-peer communication, service discovery, and the streaming machine-to-machine concept.

About the author

Xavier Bruhiere is a lead developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.