
How-To Tutorials - CMS and E-Commerce

830 Articles
Packt
18 Dec 2013
3 min read

Enabling your new theme in Magento

After your new theme is in place, you can enable it in Magento. Log in to your Magento store's administration panel and navigate to System | Configuration.

From there, select the global configuration scope (labeled Default Config) you want to apply your new theme to from the Current Configuration Scope dropdown at the top left of your screen.

Once this has loaded, navigate to the Design tab under GENERAL in the left-hand column and expand the Themes block in the right-hand column.

From here, you can tell Magento to use your new theme. The values given here correspond to the names you gave to the directories when creating your theme; the example uses responsive as the value. Click on the Save Config button at the top right of your screen to save the changes.

Next, check that your new theme has been activated. Remember the styles.css file you added in the skin/frontend/default/responsive/css directory? The presence of that file tells Magento to load your new theme's CSS file instead of the default styles.css from Magento's default package, so your store now has none of the original CSS styling. This is what you will see when you view the frontend of your Magento store.

Overwriting the default Magento templates

Noticed the name of your Magento theme appearing next to the logo in the header of your store? You can overwrite the default header.phtml that's causing it by copying the contents of app/design/frontend/base/default/template/page/html/header.phtml into app/design/frontend/default/responsive/template/page/html/header.phtml.
Open the file and find the following lines:

```php
<?php if ($this->getIsHomePage()):?>
<h1 class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a></h1>
<?php else:?>
<a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a>
<?php endif?>
```

Replace them with these lines:

```php
<a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><img src="<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a>
```

Now if you save that file (and upload it to your server, if needed), you can see that the logo looks tidier.

That's it! Your basic responsive Magento theme is up and running.

Summary

Hopefully, after reading this article you have a better understanding of how to enable your new theme in Magento.

Resources for Article:

Further resources on this subject:
  • Magento: Payment and shipping method [Article]
  • Categories and Attributes in Magento: Part 2 [Article]
  • Magento: Exploring Themes [Article]

Packt
21 Nov 2013
7 min read

Zurb Foundation – an Overview

Most importantly, you can apply your creativity to make the design your own. Foundation gives you the tools you need for this; then it gets out of the way, and your site becomes your own. Especially when you advance to using Foundation's Sass variables, functions, and mixins, you have the ability to make your site your own unique creation.

Foundation's grid system

The foundation (pun intended) of Zurb Foundation is its grid system of rows and columns, much like a spreadsheet, a blank sheet of graph paper, or the tables we used to use for HTML layout. Think of it as the canvas upon which you design your website. Each cell is a content area that can be merged with other cells, beside or below it, to make larger content areas. A default installation of Foundation is based on twelve cells in a row. A column is made up of one or more individual cells.

Lay out a website

Let's put Foundation's grid system to work in an example. We'll build a basic website with a two-part header, a two-part content area, a sidebar, and a three-part footer area. With the simple techniques we demonstrate here, you can craft almost any layout you want.

Here is the mobile view

Foundation works best when you design for small devices first, so here is what we want our small device (mobile) view to look like: this is the layout we want on mobile or small devices, but we've labeled the content areas with titles that describe where we want them on a regular desktop. By doing this, we are thinking ahead and creating a view ready for the desktop as well.

Here is the desktop view

Since a desktop display is typically wider than a mobile display, we have more horizontal space, and things that had to be presented vertically in the mobile view can be displayed horizontally in the desktop view. Here is how we want a regular desktop or laptop to display the same content areas. These are not necessarily drawn to scale.
It is the layout we are interested in. The two-part header went from being one above the other in the mobile view to being side by side in the desktop view: the header on the top went left and the bottom header went right. All this makes perfect sense. However, the sidebar, which sits above the content area in the mobile view, shifted to the right of it in the desktop view. That's not natural when rendering HTML; something must have happened! The content areas, left and right, stayed the same in both views, and that's exactly what we wanted. The three-part footer got rearranged: the center footer appears to have slid down between the left and right footers. That makes sense from a design perspective, but it isn't natural from an HTML rendering perspective. Foundation provides the classes to easily make all this magic happen.

Here is the code

Unlike the early days of mobile design, where a separate website was built for mobile devices, with Foundation you build your site once and use classes to specify how it should look on both mobile and regular displays. Here is the HTML code that generates the two layouts:

```html
<header class="row">
  <div class="large-6 column">Header Left</div>
  <div class="large-6 column">Header Right</div>
</header>
<main class="row">
  <aside class="large-3 push-9 column">Sidebar Right</aside>
  <section class="large-9 pull-3 columns">
    <article class="row">
      <div class="small-9 column">Content Left</div>
      <div class="small-3 column">Content Right</div>
    </article>
  </section>
</main>
<footer class="row">
  <div class="small-6 small-centered large-4 large-uncentered push-4 column">Footer Center</div>
  <div class="small-6 large-4 pull-4 column">Footer Left</div>
  <div class="small-6 large-4 column">Footer Right</div>
</footer>
```

That's all there is to it. Replace the text we used for labels with real content, and you have a design that displays on mobile and regular displays in the layouts we've shown in this article.
Toss in some widgets

What we've shown above is just the core of the Foundation framework. As a toolkit, it also includes numerous CSS components and JavaScript plugins.

Foundation includes styles for labels, lists, and data tables. It has several navigation components, including Breadcrumbs, Pagination, Side Nav, and Sub Nav. You can add regular buttons, drop-down buttons, and button groups. You can make unique content areas with Block Grids, a special variation of the underlying grid. You can add images as thumbnails, put content into panels, present your video feed using the Flex Video component, easily add pricing tables, and present progress bars. All these components only require CSS and are the easiest to integrate.

By tossing in Foundation's JavaScript plugins, you have even more capabilities. Plugins include things like Alerts, Tooltips, and Dropdowns, which can be used to pop up messages in various ways. The Section plugin is very powerful when you want to organize your content into horizontal or vertical tabs, or when you want horizontal or vertical navigation. Like most components and plugins, it understands the mobile and regular desktop views and adapts accordingly.

The Top Bar plugin is a favorite for many developers. It is a multi-level fly-out menu plugin: build your menu in HTML the way Top Bar expects, set it up with the appropriate classes, and it just works. Magellan and Joyride are two plugins that you can put to work to help show your viewers where they are on a page, or to help them navigate to various sections on a page. Orbit is Foundation's slide presentation plugin; you often see sliders on the home pages of websites these days. Clearing is similar to Orbit, except that it displays thumbnails of the images in a presentation below the main display window. A viewer clicks on a thumbnail to display the full image.
Reveal is a plugin that allows you to put a link anywhere on your page; when the viewer clicks on it, a box pops up and extra content, which could even be an Orbit slider, is revealed. Interchange is one of the most recent additions to Foundation's plugin factory. With it, you can selectively load images depending on the target environment, which lets you optimize bandwidth between your web server and your viewer's browser. Foundation also provides a great Forms plugin. On its own it is capable; with the additional Abide plugin, you have a great deal of control over form layout and validation.

Summary

As you can see, Foundation is very capable of laying out web pages for both mobile devices and regular displays: one set of code, two very different looks. And that's just the beginning. Foundation's CSS components and JavaScript plugins can be placed in almost any content area of a web page. With these widgets you can have much more interaction with your viewers than you otherwise would. Put Foundation to work in your website today!

Resources for Article:

Further resources on this subject:
  • Quick start – using Foundation 4 components for your first website [Article]
  • Introduction to RWD frameworks [Article]
  • Nesting, Extend, Placeholders, and Mixins [Article]

Packt
19 Mar 2014
9 min read

Getting Started with CMIS

What is CMIS?

The goal of CMIS is to provide a standard method for accessing content from different content repositories. Using CMIS service calls, it is possible to navigate through and create content in a repository. CMIS also includes a query language for searching both the metadata and the full-text content stored in a repository.

The CMIS standard defines the protocols and formats for the requests and responses of API service calls made to a repository. CMIS acts as a standard interface and protocol for accessing content repositories, similar to how ANSI SQL acts as a common-denominator language for interacting with different databases.

The use of the CMIS API for accessing repositories brings with it a number of benefits. Perhaps chief among these is the fact that access to CMIS is language neutral: any language that supports HTTP services can be used to access a CMIS-enabled repository. Client software can be written to use a single API and be deployed to run against multiple CMIS-compliant repositories.

Alfresco and CMIS

The original draft for CMIS 0.5 was written by EMC, IBM, and Microsoft. Shortly after that draft, Alfresco and other vendors joined the CMIS standards group. Alfresco was an early CMIS adopter and offered an implementation of CMIS version 0.5 in 2008. In 2009, Alfresco began hosting an online preview of the CMIS standard. The server, accessible via the http://cmis.alfresco.com URL, still exists and implements the latest CMIS standard; as of this writing, that URL hosts a preview of CMIS 1.1 features.

In mid-2010, just after the CMIS 1.0 standard was approved, Alfresco released CMIS in both the Alfresco Community and Enterprise editions. In 2012, with Alfresco version 4.0, Alfresco moved from a homegrown CMIS runtime implementation to one that uses the Apache Chemistry OpenCMIS Server Framework.
From that release, developers have been able to customize Alfresco using the OpenCMIS Java API.

Overview of the CMIS Standard

Next, we discuss the details of the CMIS specification, particularly the domain model, the different services that it provides, and the supported protocol bindings.

Domain model (Object model)

Every content repository vendor has their own definition of a content or object model. Alfresco, for example, has rich content modeling capabilities, such as types and aspects that can inherit from other types and aspects, and properties that can be assigned attributes like data type, multi-valued, and required. But there are wide differences in the ways in which different vendors have implemented content modeling. In the Documentum ECM system, for example, the generic content type is called dm_document, while in Alfresco it is called cm:content. Another example is the concept of an aspect as used in Alfresco, which many repositories do not support. The CMIS Domain Model is an attempt by the CMIS standardization group to define a framework generic enough to describe content models and map to concepts used by many different repository vendors.

The CMIS Domain Model defines a Repository as a container and an entry point to all content items, from now on called objects. All objects are classified by an Object Type, which describes a common set of Properties (like Type ID, Parent, and Display Name). There are five base types of objects: Document, Folder, Relationship, Policy, and Item (available from CMIS 1.1), and these all inherit from Object Type. In addition to the five base object types, there are also a number of property types that can be used when defining new properties for an Object Type. These are shown in the figure: String, Boolean, Decimal, Integer, and DateTime. Besides these property types, there are also the URI, Id, and HTML property types, not shown in the figure.
Taking a closer look at each one of the base types, we can see that:

  • Document almost always corresponds to a file, although it need not have any content (when you upload a file via, for example, the AtomPub binding, the metadata is created with the first request and the content for the file is posted with the second request).
  • Folder is a container for file-able objects such as folders and documents. Immediately after filing a folder or document into a folder, an implicit parent-child relationship is automatically created. The fileable property of the object type definition specifies whether an object is file-able or not.
  • Relationship objects define a relationship between a target and a source object. Objects can have multiple relationships with other objects. The support for relationship objects is optional.
  • Policy is a way of defining administrative policies to manage objects. An object to which a policy may be applied is called a controllable object (its controllablePolicy property has to be set to true). For example, a CMIS policy could be used to define a retention policy. A policy is opaque and has no meaning to the repository; it must be implemented and enforced in a repository-specific way. For example, rules might be used in Alfresco to enforce a policy. The support for policy objects is optional.
  • Item (CMIS 1.1) objects represent a generic type of CMIS information asset, for example, a user or group object. Item objects are not versionable and do not have content streams like documents, but they do have properties like all other CMIS objects. The support for item objects is optional.

Additional object types can be defined in a repository as custom subtypes of the base types. The Legal Case type shown in the figure above is an example. CMIS services are provided for the discovery of object types that are defined in a repository.
However, object type management services, such as the creation, modification, and deletion of an object type, are not covered by the CMIS standard.

An object has one primary base object type, such as Document or Folder, which cannot be changed. An object can also have secondary object types applied to it (CMIS 1.1). A secondary type is a named class that may add extra properties to an object in addition to the properties defined by the object's primary base object type (this is similar to the concept of aspects in Alfresco).

Every CMIS object has an opaque and immutable Object Identity (ID), which is assigned by the repository when the object is created. In the case of Alfresco, a Node Reference is created, which becomes the Object ID. The ID uniquely identifies an object within a repository regardless of the type of the object. All CMIS objects have a set of named, but not explicitly ordered, properties. Within an object, each property is uniquely identified by its Property ID.

In addition, a document object can have a Content Stream, which is then used to hold the actual byte content from a file. A document can also have one or more Renditions, like a thumbnail, a different-sized image, or an alternate representation of the content stream. Document or folder objects can have one Access Control List (ACL), which controls access to the document or folder. An ACL is made up of a list of Access Control Entries (ACEs). An ACE in turn represents one or more permissions being granted to a principal, such as a user, group, role, or something similar.

All objects and properties are defined in the cmis namespace. From now on, we will refer to the different objects and properties via their fully qualified names, for example cmis:document or cmis:name.
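To make the object model described above concrete, here is a small, toy Java sketch of how base type, object ID, properties, and content stream relate to one another. This is not the OpenCMIS API; the class names (CmisObjectSketch, DocumentSketch) and the example node reference are invented purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the CMIS concepts above; invented names, NOT the OpenCMIS API.
class CmisObjectSketch {
    final String objectId;   // opaque, immutable, assigned by the repository
    final String baseType;   // cmis:document, cmis:folder, cmis:relationship, ...
    final Map<String, Object> properties = new HashMap<>(); // keyed by Property ID

    CmisObjectSketch(String objectId, String baseType) {
        this.objectId = objectId;
        this.baseType = baseType;
    }
}

// A document is the only base type that may carry a content stream.
class DocumentSketch extends CmisObjectSketch {
    byte[] contentStream; // may be null: a document need not have content

    DocumentSketch(String objectId) {
        super(objectId, "cmis:document");
    }
}

public class DomainModelDemo {
    public static void main(String[] args) {
        // In Alfresco, the repository-assigned ID would be a Node Reference.
        DocumentSketch doc = new DocumentSketch("workspace://SpacesStore/123");
        doc.properties.put("cmis:name", "contract.pdf");
        System.out.println(doc.baseType + " " + doc.properties.get("cmis:name"));
        // prints: cmis:document contract.pdf
    }
}
```

The sketch mirrors the rules stated above: the ID and base type are fixed at creation time, properties are a named set keyed by Property ID, and only documents carry an (optional) content stream.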
Services

The following CMIS services can access and manage CMIS objects in the repository:

  • Repository Services: These are used to discover information about the repository, including repository IDs (more than one repository could be managed by the endpoint). Since many features are optional, this provides a way to find out which are supported. CMIS 1.1 compliant repositories also support the creation of new types dynamically. Methods: getRepositories, getRepositoryInfo, getTypeChildren, getTypeDescendants, getTypeDefinition, createType (CMIS 1.1), updateType (CMIS 1.1), deleteType (CMIS 1.1).
  • Navigation Services: These are used to navigate the folder hierarchy. Methods: getChildren, getDescendants, getFolderTree, getFolderParent, getObjectParents, getCheckedOutDocs.
  • Object Services: These provide ID-based CRUD (Create, Read, Update, Delete) operations. Methods: createDocument, createDocumentFromSource, createFolder, createRelationship, createPolicy, createItem (CMIS 1.1), getAllowableActions, getObject, getProperties, getObjectByPath, getContentStream, getRenditions, updateProperties, bulkUpdateProperties (CMIS 1.1), moveObject, deleteObject, deleteTree, setContentStream, appendContentStream (CMIS 1.1), deleteContentStream.
  • Multi-filing Services: These (optional) services make it possible to file an object in several folders (multi-filing) or outside the folder hierarchy (un-filing). This service is not used to create or delete objects. Methods: addObjectToFolder, removeObjectFromFolder.
  • Discovery Services: These are used to search for query-able objects within the repository (objects with the property queryable set to true). Methods: query, getContentChanges.
  • Versioning Services: These are used to manage the versioning of document objects; other objects are not versionable. Whether or not a document can be versioned is controlled by the versionable property in the object type. Methods: checkOut, cancelCheckOut, checkIn, getObjectOfLatestVersion, getPropertiesOfLatestVersion, getAllVersions.
  • Relationship Services: These (optional) are used to retrieve the relationships in which an object is participating. Methods: getObjectRelationships.
  • Policy Services: These (optional) are used to apply a policy object to, or remove one from, an object that has the property controllablePolicy set to true. Methods: applyPolicy, removePolicy, getAppliedPolicies.
  • ACL Services: This service is used to discover and manage the Access Control List (ACL) for an object, if the object has one. Methods: applyACL, getACL.

Summary

In this article, we introduced the CMIS standard and how it came about. Then we covered the CMIS domain model with its five base object types: document, folder, relationship, policy, and item (CMIS 1.1). We also learned that the CMIS standard defines a number of services, such as navigation and discovery, which make it possible to manipulate objects in a content management system repository.

Resources for Article:

Further resources on this subject:
  • Content Delivery in Alfresco 3 [Article]
  • Getting Started with the Alfresco Records Management Module [Article]
  • Managing Content in Alfresco [Article]

Packt
06 Nov 2014
15 min read

Alfresco Web Scripts

In this article by Ramesh Chauhan, the author of Learning Alfresco Web Scripts, we will cover the following topics:

  • Reasons to use web scripts
  • Executing a web script from a standalone Java program
  • Invoking a web script from Alfresco Share
  • DeclarativeWebScript versus AbstractWebScript

Reasons to use web scripts

It's now time to discover the answer to the next question: why web scripts? There are various alternative approaches available to interact with the Alfresco repository, such as CMIS, SOAP-based web services, and web scripts. Generally, web scripts are chosen as the preferred option among developers and architects when it comes to interacting with the Alfresco repository from an external application. Let's take a look at the various reasons behind choosing a web script instead of CMIS or SOAP-based web services.

In comparison with CMIS, web scripts can be explained as follows. In general, CMIS is a generic implementation that provides a common set of services to interact with any content repository. It does not attempt to incorporate services that expose all the features of each and every content repository; it covers a basic common set of functionalities for interacting with any content repository and provides the services to access those functionalities. Alfresco provides an implementation of CMIS for interacting with the Alfresco repository. With only a common set of repository functionalities exposed through the CMIS implementation, it is possible that CMIS will sometimes not do everything you are aiming to do when working with the Alfresco repository, while with web scripts it will be possible to do the things you are planning to implement and access the Alfresco repository as required. Hence, one of the best alternatives in this case is to use Alfresco web scripts and develop custom APIs as required.
Another important thing to note is that, with the transaction support of web scripts, it is possible to perform a set of operations together in a web script, whereas in CMIS there is a limitation on transaction usage: it is possible to execute each operation individually, but it is not possible to execute a set of operations together in a single transaction, as is possible in web scripts.

SOAP-based web services are not preferable for the following reasons:

  • They take a long time to develop
  • They depend on SOAP
  • They have heavier client-side requirements
  • They need to maintain the resource directory
  • Scalability is a challenge
  • They only support XML

In comparison, web scripts have the following properties:

  • There are no complex specifications
  • There is no dependency on SOAP
  • There is no need to maintain the resource directory
  • They are more scalable, as there is no need to maintain session state
  • They are a lightweight implementation
  • They are simple and easy to develop
  • They support multiple formats

In a developer's opinion:

  • They can be easily developed using any text editor
  • No compilation is required when using a scripting language
  • No server restarts are needed when using a scripting language
  • No complex installations are required

In essence:

  • Web scripts are a REST-based and powerful option for interacting with the Alfresco repository, in comparison to the traditional SOAP-based web services and CMIS alternatives
  • They provide RESTful access to the content residing in the Alfresco repository and uniform access to a wide range of client applications
  • They are easy to develop and provide some very useful features, such as no server restarts, no compilations, no complex installations, and no need for a specific tool to develop them

All these points make web scripts the most preferred choice among developers and architects when it comes to interacting with the Alfresco repository.

Executing a web script from a standalone Java program

There are different options to invoke a web script from a Java program. Here, we will take a detailed walkthrough of the Apache Commons HttpClient API with code snippets to understand how a web script can be executed from a Java program, and will briefly mention some other alternatives that can also be used to invoke web scripts from Java programs.

HttpClient

One way of executing a web script is to invoke it using the org.apache.commons.httpclient.HttpClient API. This class is available in commons-httpclient-3.1.jar. Executing a web script with the HttpClient API also requires commons-logging-*.jar and commons-codec-*.jar as supporting JARs. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. You will need to include them in the build path for your project.

We will try to execute the hello world web script using HttpClient from a standalone Java program. While using HttpClient, here are the steps you generally need to follow:

  1. Create a new instance of HttpClient.
  2. Create an instance of a method (we will use GetMethod); the URL needs to be passed in the constructor of the method.
  3. Set any arguments if required.
  4. Provide the authentication details if required.
  5. Ask HttpClient to execute the method.
  6. Read the response status code and response.
  7. Finally, release the connection.

Understanding how to invoke a web script using HttpClient

Let's take a look at the following code snippet, which follows the previously mentioned steps. In order to test this, you can create a standalone Java program with a main method, put the following code snippet in it, and then modify the web script URL and credentials as required. Comments are provided in the code for you to easily correlate the previous steps with the code:

```java
// Create a new instance of HttpClient
HttpClient objHttpClient = new HttpClient();

// Create a new method instance as required. Here it is GetMethod.
GetMethod objGetMethod = new GetMethod("http://localhost:8080/alfresco/service/helloworld");

// Set query string parameters if required.
objGetMethod.setQueryString(new NameValuePair[] {
    new NameValuePair("name", "Ramesh") });

// Set the credentials if authentication is required.
Credentials defaultcreds = new UsernamePasswordCredentials("admin", "admin");
objHttpClient.getState().setCredentials(
    new AuthScope("localhost", 8080, AuthScope.ANY_REALM), defaultcreds);

try {
    // Now, execute the method using HttpClient.
    int statusCode = objHttpClient.executeMethod(objGetMethod);
    if (statusCode != HttpStatus.SC_OK) {
        System.err.println("Method invocation failed: " + objGetMethod.getStatusLine());
    }
    // Read the response body.
    byte[] responseBody = objGetMethod.getResponseBody();
    // Print the response body.
    System.out.println(new String(responseBody));
} catch (HttpException e) {
    System.err.println("Http exception: " + e.getMessage());
    e.printStackTrace();
} catch (IOException e) {
    System.err.println("IO exception transport error: " + e.getMessage());
    e.printStackTrace();
} finally {
    // Release the method connection.
    objGetMethod.releaseConnection();
}
```

Note that Apache Commons HttpClient is a legacy project now and is no longer being developed. It has been replaced by the Apache HttpComponents project, in its HttpClient and HttpCore modules. We have used HttpClient from the Commons project here to get an overall understanding. Some of the other options that you can use to invoke web scripts from a Java program are mentioned in the subsequent sections.

URLConnection

One option to execute a web script from a Java program is to use java.net.URLConnection. For more details, you can refer to http://docs.oracle.com/javase/tutorial/networking/urls/readingWriting.html.

Apache HTTP components

Another option to execute a web script from a Java program is to use the Apache HTTP components, which are the latest available APIs for HTTP communication. These components offer better performance and more flexibility and are available in httpclient-*.jar and httpcore-*.jar. These JARs are available at the tomcat/webapps/alfresco/WEB-INF/lib location inside your Alfresco installation directory. For more details, refer to https://hc.apache.org/httpcomponents-client-4.3.x/quickstart.html to get an understanding of how to execute HTTP calls from a Java program.

RestTemplate

Another alternative is to use org.springframework.web.client.RestTemplate, available in org.springframework.web-*.jar located at tomcat/webapps/alfresco/WEB-INF/lib inside your Alfresco installation directory. If you are using Alfresco Community 5, the RestTemplate class is available in spring-web-*.jar. Generally, RestTemplate is used in Spring-based services to invoke HTTP communication.

Calling web scripts from Spring-based services

If you need to invoke an Alfresco web script from Spring-based services, you need to use RestTemplate to invoke HTTP calls. This is the most commonly used technique to execute HTTP calls from Spring-based classes. The following are the steps to be performed, with code snippets.

Define RestTemplate in your Spring context file:

```xml
<bean id="restTemplate" class="org.springframework.web.client.RestTemplate" />
```

In the Spring context file, inject restTemplate into your Spring class, as shown in the following example:

```xml
<bean id="httpCommService" class="com.test.HTTPCallService">
    <property name="restTemplate" ref="restTemplate" />
</bean>
```

In the Java class, define the setter method for restTemplate as follows:

```java
private RestTemplate restTemplate;

public void setRestTemplate(RestTemplate restTemplate) {
    this.restTemplate = restTemplate;
}
```

In order to invoke a web script that has an authentication level set as user authentication, you can use RestTemplate in your Java class as shown in the following code snippet.
The following code snippet is an example of invoking the hello world web script using RestTemplate from a Spring-based service:

```java
// Set up authentication
String plainCredentials = "admin:admin";
byte[] plainCredBytes = plainCredentials.getBytes();
byte[] base64CredBytes = Base64.encodeBase64(plainCredBytes);
String base64Credentials = new String(base64CredBytes);

// Set up request headers
HttpHeaders reqHeaders = new HttpHeaders();
reqHeaders.add("Authorization", "Basic " + base64Credentials);
HttpEntity<String> requestEntity = new HttpEntity<String>(reqHeaders);

// Execute the method
ResponseEntity<String> responseEntity = restTemplate.exchange(
    "http://localhost:8080/alfresco/service/helloworld?name=Ramesh",
    HttpMethod.GET, requestEntity, String.class);
System.out.println("Response:" + responseEntity.getBody());
```

Invoking a web script from Alfresco Share

When working on customizing Alfresco Share, you will need to make calls to Alfresco repository web scripts. In Alfresco Share, you can invoke repository web scripts from two places: one is the component-level presentation web scripts, and the other is client-side JavaScript.

Calling a web script from a presentation web script's JavaScript controller

Alfresco Share renders the user interface using presentation web scripts. These presentation web scripts make calls to repository web scripts to render the repository data. The repository web script is called before the component rendering file (for example, get.html.ftl) loads. In an out-of-the-box Alfresco installation, you should be able to see the components' presentation web scripts under tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts. When developing a custom component, you will be required to write a presentation web script. A presentation web script will make a call to the repository web script.
You can make a call to the repository web script as follows:

var response = remote.call("url of web script as defined in description document");
var obj = eval('(' + response + ')');

In the preceding code snippet, we have used the out-of-the-box remote object to make a repository web script call. The important thing to notice is that we provide the URL of the web script as defined in its description document. There is no need to provide the initial part (the host and port, application name, and service path) the way we do when calling a web script from a web browser. Once the response is received, the web script response can be parsed using the eval function.

In the out-of-the-box code of Alfresco Share, you can find presentation web scripts invoking repository web scripts just as in the previous code snippet. For example, take a look at the main() method in the site-members.get.js file, which is available at the tomcat/webapps/share/components/site-members location inside your Alfresco installation directory. You can also take a look at the other JavaScript controller implementations for out-of-the-box presentation web scripts, available at tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts, which make repository web script calls using the previously mentioned technique. When referring to the out-of-the-box web scripts, paths are given starting with tomcat/webapps; this location is inside your Alfresco installation directory.

Invoking a web script from client-side JavaScript

Client-side JavaScript control files can be associated with components in Alfresco Share. If you need to make a repository web script call, you can do so from the client-side JavaScript control files, generally located at tomcat/webapps/share/components. There are different ways you can make a repository web script call from a YUI-based client-side JavaScript file.
The following are some of the ways to invoke web scripts from client-side JavaScript files. A reference into the out-of-the-box Alfresco implementation is provided with each, so you can see its usage in practice:

Alfresco.util.Ajax.request: Take a look at tomcat/webapps/share/components/console/groups.js and refer to the _removeUser function.
Alfresco.util.Ajax.jsonRequest: Take a look at tomcat/webapps/share/components/documentlibrary/documentlist.js and refer to the onOptionSelect function.
Alfresco.util.Ajax.jsonGet: To directly make a call to a GET web script, take a look at tomcat/webapps/share/components/console/groups.js and refer to the getParentGroups function.
YAHOO.util.Connect.asyncRequest: Take a look at tomcat/webapps/share/components/documentlibrary/tree.js and refer to the _sortNodeChildren function.

In alfresco.js, located at tomcat/webapps/share/js, a wrapper implementation of YAHOO.util.Connect.asyncRequest is provided, and the various methods listed above (Alfresco.util.Ajax.request, Alfresco.util.Ajax.jsonRequest, and Alfresco.util.Ajax.jsonGet) can be found in alfresco.js. Hence, the first three options in the previous list internally make their calls through YAHOO.util.Connect.asyncRequest (the last option in the list).

Calling a web script from the command line

Sometimes while working on your project, you might need to invoke a web script from a Linux machine, or create a shell script that invokes a web script. This is possible from the command line using cURL, which is a valuable tool to have while working on web scripts. You can install cURL on Linux, Mac, or Windows and execute a web script from the command line. Refer to http://curl.haxx.se/ for more details on cURL. You will be required to install cURL first. On Linux, you can install cURL using apt-get.
On Mac, you can install cURL through MacPorts, and on Windows through Cygwin. Once cURL is installed, you can invoke a web script from the command line as follows:

curl -u admin:admin "http://localhost:8080/alfresco/service/helloworld?name=Ramesh"

This will display the web script response.

DeclarativeWebScript versus AbstractWebScript

The web script framework in Alfresco provides two different helper classes from which a Java-backed controller can be derived, and it is important to understand the difference between them. The first helper class is the one we used while developing the web script in this article, org.springframework.extensions.webscripts.DeclarativeWebScript. The second is org.springframework.extensions.webscripts.AbstractWebScript. DeclarativeWebScript itself extends the AbstractWebScript class.

If the Java-backed controller is derived from DeclarativeWebScript, execution assistance is provided by the DeclarativeWebScript class. This helper class encapsulates the execution of the web script and checks whether a controller written in JavaScript is associated with the web script. If a JavaScript controller is found for the web script, the helper class executes it. It then locates the associated response template of the web script for the requested format and passes the populated model object to the response template. For a controller extending DeclarativeWebScript, the controller logic for the web script should be provided in the Map<String, Object> executeImpl(WebScriptRequest req, Status status, Cache cache) method. Most of the time while developing a Java-backed web script, the controller will extend DeclarativeWebScript.

AbstractWebScript does not provide execution assistance in the way DeclarativeWebScript does. It gives the derived class full control over the entire execution process and allows the extending class to decide how the output is rendered.
One good example of this is the DeclarativeWebScript class itself: it extends the AbstractWebScript class and provides a mechanism to render the response using FTL templates. In a scenario like streaming content, there is no need for a response template; instead, the content itself needs to be rendered directly. In this case, the Java-backed controller class can extend AbstractWebScript. If a web script has both a JavaScript-based controller and a Java-backed controller, then:

If the Java-backed controller is derived from DeclarativeWebScript, the Java-backed controller is executed first, and control is then passed to the JavaScript controller before the model object is returned to the response template.
If the Java-backed controller is derived from AbstractWebScript, only the Java-backed controller is executed. The JavaScript controller is not executed.

Summary

In this article, we looked at the reasons for using web scripts. We then executed a web script from a standalone Java program and moved on to invoking a web script from Alfresco Share. Lastly, we examined the difference between DeclarativeWebScript and AbstractWebScript.

Resources for Article:

Further resources on this subject:

Alfresco 3 Business Solutions: Types of E-mail Integration [article]
Alfresco 3: Writing and Executing Scripts [article]
Overview of REST Concepts and Developing your First Web Script using Alfresco [article]
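The cURL call shown earlier maps directly onto a few lines of Python, which can be handy for quick scripting against the repository. The following is a minimal sketch, not part of the Alfresco APIs discussed above; it assumes a local Alfresco instance at localhost:8080 with the default admin:admin credentials, mirroring the hello world example:

```python
import base64
import urllib.request

def build_webscript_request(url, username, password):
    """Build a GET request for an Alfresco web script with HTTP basic
    authentication, mirroring the RestTemplate and cURL examples above."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(url)
    request.add_header("Authorization", "Basic " + credentials)
    return request

# Hypothetical local Alfresco instance; URL taken from the cURL example.
req = build_webscript_request(
    "http://localhost:8080/alfresco/service/helloworld?name=Ramesh",
    "admin", "admin")

# urllib.request.urlopen(req).read() would return the web script response;
# it is not executed here, since it needs a running Alfresco server.
print(req.get_header("Authorization"))
```

The request construction is identical to what the Java snippet does with HttpHeaders; only the HTTP client differs.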
How to create a Tax Rule in Magento

Packt
27 Oct 2009
11 min read
In this article by William Rice, we will see how to create Tax Rules in Magento. In the real world, the tax rate that you pay is based on three things: location, product type, and purchaser type. In Magento, we can create Tax Rules that determine the amount of tax that a customer pays, based upon the shipping address, product class, and customer class.

When you buy a product, you sometimes pay sales tax on that product. The sales tax that you pay is based on:

Where you purchased the product from. Tax rules vary in different cities, states, and countries.
The type of product that you purchased. For example, many places don't tax clothing purchases. And, some places tax only some kinds of clothing. This means that you must be able to apply different tax rates to different kinds of products.
The type of purchaser you are. For example, if you buy a laser printer for your home, it is likely that you will pay sales tax. This is because you are a retail customer. If you buy the same printer for your business, in most places you will not pay sales tax. This is because you are a business customer.
The amount of the purchase. For example, some places tax clothing purchases only above a specific amount.

Anatomy of a Tax Rule

A Tax Rule is a combination of the tax rate, shipping address, product class, customer class, and amount of purchase. A Tax Rule states that you pay this amount of tax if you are this class of purchaser, and you bought this class of product for this amount, and are shipping it to this place. The components of a Tax Rule are shown in the following screenshot. This screen is found under Sales | Tax | Manage Tax Rules | Add New Tax Rule. You will see the Name of the Tax Rule while working in the backend.

Customer Tax Class

Customer Tax Class is the type of customer that is making a purchase. Before creating a Tax Rule, you will need to have at least one Customer Tax Class. Magento provides you with a Customer Tax Class called Retail Customer.
If you serve different types of customers (retail, business, and nonprofit), you will need to create different Customer Tax Classes.

Product Tax Class

Product Tax Class is the type of Product that is being purchased. When you create a Product, you will assign a Product Tax Class to that Product. Magento comes with two Product Tax Classes: Taxable Goods and Shipping. The class Shipping is applied to shipping charges because some places charge sales tax on shipping. If your customers' sales tax is different for different types of Products, then you will need to create a Product Tax Class for each type of Product.

Tax Rate

Tax Rate is a combination of a place, or tax zone, and a percentage. A zone can be a country, state, or zip code. Each zone that you specify can have up to five sales tax percentages. For example, in the default installation of Magento, there is one tax rate for the zone New York. This is 8.3750 percent, and applies to retail customers. The following window can be found at Sales | Tax | Manage Tax Zones & Rates and then clicking on US-NY-*-Rate 1:

So in the screenshot of our Tax Rule, the Tax Rate US-NY-*-Rate 1 doesn't mean "a sales tax of 1 percent." It means "Tax rate number 1 for New York, which is 8.3750 percent." In this scenario, New York charges 8.3750 percent sales tax on retail sales. If New York does not charge sales tax for wholesale customers, and you sell to wholesale customers, then you will need to create another Tax Rate for New York. Whenever a zone has different sales taxes for different types of products or customers, you will need to create different Tax Rates for that zone.

Priority

If several Tax Rules try to apply several Tax Rates at the same time, how should Magento handle them? Should it add them all together? Or, should it apply one rate, calculate the total, and then apply the second rate to that total? That is, should Magento add them or compound them? For example, suppose you sell a product in Philadelphia, Pennsylvania.
Further suppose that according to the Tax Rule for Pennsylvania, the sales tax for that item is 6 percent, and that the Tax Rule for Philadelphia adds another 1 percent. In this case, you want Magento to add the two sales taxes, so you would give the two Tax Rates the same Priority. By contrast, Tax Rates that belong to Tax Rules with different Priorities are compounded: the Tax Rate with the higher Priority (the lower number) is applied, the next higher Priority is applied to that total, and so on.

Sort Order

Sort Order determines the Tax Rule's position in the list of Tax Rules.

Why create Tax Rules now?

Why create a Tax Rule now, before adding our first Product? When you add a Product to your store, you put that Product into a Category, assign an Attribute Set, and select a Tax Class for that Product. By default, Magento comes with two Product Tax Classes and one Tax Rule already created. The Product Tax Classes are Taxable Goods and Shipping. The Tax Rule is Retail Customer-Taxable Goods-Rate 1. If you sell anything other than taxable goods, or sell to anyone other than retail customers, you will need to create a new Tax Rule to cover that situation.

Creating a Tax Rule

The process for creating a Tax Rule is:

Create the Customer Tax Classes that you need, or confirm that you have them.
Create the Product Tax Classes that you need, or confirm that you have them.
Create the Tax Rates that you need, or confirm that you have them and that they apply to the zones that you need.
Create and name the Tax Rule: assign the Customer Tax Class, Product Tax Class, and Tax Rates to the Rule; use the Priority to determine whether the Rule is added, or compounded, with other Rules; determine the Sort Order of the Rule and save it.

Each of these steps is covered in the subsections that follow.

Time for action: Creating a Customer Tax Class

From the Admin Panel, select Sales | Tax | Customer Tax Classes. The Customer Tax Classes page is displayed.
If this is a new installation, only one Class is listed, Retail Customer, as shown in the following screenshot:

Click on Add New. A Customer Tax Class Information page is displayed.
Enter a name for the Customer Tax Class. In our demo store, we are going to create Customer Tax Classes for Business and Nonprofit customers.
Click on Save Class.
Repeat these steps until all of the Customer Tax Classes that you need have been created.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the first part of that formula: the Customer Class.

Time for action: Creating a Product Tax Class

From the Admin Panel, select Sales | Tax | Product Tax Classes. The Product Tax Classes page is displayed. If this is a new installation, only two Classes are listed: Shipping and Taxable Goods.
Click on Add New. The Product Tax Class Information page gets displayed.
Enter a name for the Product Tax Class. In our demo store, we are going to create Product Tax Classes for Food and Nonfood products. We will apply the Food class to the coffee that we sell. We will apply the Nonfood class to the mugs, coffee presses, and other coffee accessories that we sell.
Click on Save Class.
Repeat these steps until all of the Product Tax Classes that you need have been created.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the second part of that formula: the Product Class.

Creating Tax Rates

In Magento, you can create Tax Rates one at a time. You can also import Tax Rates in bulk. Each method is covered in the next section.

Time for action: Creating a Tax Rate in Magento

From the Admin Panel, select Sales | Tax | Manage Tax Zones & Rates. The Manage Tax Rates page is displayed. If this is a new installation, only two Tax Rates are listed: US-CA-*-Rate 1 and US-NY-*-Rate 1.
Click on Add New Tax Rate.
The Add New Tax Rate page gets displayed.

Tax Identifier is the name that you give this Tax Rate. You will see this name when you select this Tax Rate. The example that we saw is named US-CA-*-Rate 1. Notice how this name tells you the Country, State, and Zip/Post code for the Tax Rate. (The asterisk indicates that it applies to all zip codes in California.) It also tells which rate applies. Notice that the name doesn't give the actual percentage, which is 8.25%. Instead, it says Rate 1. This is because the percentage can change when California changes its tax rate. If you included the actual rate in the name, you would need to rename this Tax Rate when California changes the rate. Another way this rate could have been named is US-CA-All-Retail. Before creating new Tax Rates, you should develop a naming scheme that works for you and your business.

Country, State, and Zip/Post Code determine the zone to which this Tax Rate applies. Magento calculates sales tax based upon the billing address, not the shipping address. Country and State are drop-down lists; you must select from the options given to you. Zip/Post Code accepts both numbers and letters. You can enter an asterisk in this field and it will act as a wild card; that is, the rate will apply to all zip/post codes in the selected country and state. You can enter a zip/post code without entering a country or state. If you do this, you should be sure that the zip/post code is unique in the entire world.

Suppose you have one tax rate for all zip codes in a country/state, such as 6% for United States/Pennsylvania. Also, suppose that you want to have a different tax rate for a few zip codes in that state. In this case, you would create separate tax rates for those few zip codes. The rates for the specific zip codes would override the rates for the wild card. So in a Tax Rate, a wild card means, "All zones unless this is overridden by a specific zone."
In our demo store, we are going to create a Tax Rate for retail customers who live in the state of Pennsylvania, but not in the city of Philadelphia, as shown:

Click on Save Rate. You are taken back to the Manage Tax Rates page. The Tax Rate that you just added should be listed on the page.

This procedure is useful for adding Tax Rates one at a time. However, if you need to add many Tax Rates at once, you will probably want to use the Import Tax Rates feature. This enables you to import a .csv, or text-only, file. You usually create the file in a spreadsheet such as OpenOffice Calc or Excel. The next section covers importing Tax Rates.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the third part of that formula: the Tax Rate. The Tax Rate included the location and the percentage of tax. You created the Tax Rate by manually entering the information into the system, which is suitable if you don't have too many Tax Rates to type.

Time for action: Exporting and importing Tax Rates

In my demo store, I have created a Tax Rate for the state of Pennsylvania. The Tax Rate for the city of Philadelphia is different. However, Magento doesn't enable me to choose a separate Tax Rate based on the city. So I must create a Tax Rate for each zip code in the city of Philadelphia. At this time there are 84 zip codes, shown here:

19019 19092 19093 19099 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19160 19161 19162 19170 19171 19172 19173 19175 19177 19178 19179 19181 19182 19183 19184 19185 19187 19188 19191 19192 19193 19194 19196 19197 19244 19255
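The added-versus-compounded behavior described in the Priority section can be made concrete with a few lines of arithmetic. The following sketch is only an illustration of the two calculations, not Magento's actual implementation, using the Pennsylvania (6 percent) plus Philadelphia (1 percent) example:

```python
def added_tax(amount, rates):
    """Tax Rules with the SAME Priority: the rates are summed,
    then applied to the amount once."""
    return amount * sum(rates)

def compounded_tax(amount, rates):
    """Tax Rules with DIFFERENT Priorities: each rate is applied to the
    running total produced by the previous (higher-Priority) rate."""
    total = amount
    for rate in rates:
        total *= (1 + rate)
    return total - amount

# Pennsylvania 6% plus Philadelphia 1% on a 100.00 purchase:
print(round(added_tax(100.00, [0.06, 0.01]), 2))       # same Priority
print(round(compounded_tax(100.00, [0.06, 0.01]), 2))  # different Priorities
```

The compounded figure is slightly higher because the second rate is applied to a total that already includes the first tax.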
Getting Started with Ansible

Packt
18 Nov 2013
8 min read
(For more resources related to this topic, see here.)

First steps with Ansible

Ansible modules take arguments in key-value pairs of the form key=value, perform a job on the remote server, and return information about the job as JSON. The key-value pairs tell the module what to do when requested. The data returned from the module lets Ansible know if anything changed or if any variables should be changed or set afterwards. Modules are usually run within playbooks, as this lets you chain many together, but they can also be used on the command line.

Previously, we used the ping command to check that Ansible had been correctly set up and was able to access the configured node. The ping module only checks that the core of Ansible is able to run on the remote machine, but effectively does nothing. A slightly more useful module is called setup. This module connects to the configured node, gathers data about the system, and then returns those values. This isn't particularly handy for us while running from the command line; however, in a playbook you can use the gathered values later in other modules.

To run Ansible from the command line, you need to pass two things, though usually three. First is a host pattern to match the machines that you want to apply the module to. Second, you need to provide the name of the module that you wish to run, and optionally any arguments that you wish to give to the module. For the host pattern, you can use a group name, a machine name, a glob, or a tilde (~) followed by a regular expression matching hostnames; to symbolize all of these, you can either use the word all or simply *.

To run the setup module on one of your nodes, you need the following command line:

$ ansible machinename -u root -k -m setup

The setup module will then connect to the machine and give you a number of useful facts back. All the facts provided by the setup module itself are prepended with ansible_ to differentiate them from variables.
The following are the most common facts you will use, with example values and a short description of each:

ansible_architecture (for example, x86_64): The architecture of the managed machine
ansible_distribution (for example, CentOS): The Linux or Unix distribution on the managed machine
ansible_distribution_version (for example, 6.3): The version of the preceding distribution
ansible_domain (for example, example.com): The domain name part of the server's hostname
ansible_fqdn (for example, machinename.example.com): The fully qualified domain name of the managed machine
ansible_interfaces (for example, ["lo", "eth0"]): A list of all the interfaces the machine has, including the loopback interface
ansible_kernel (for example, 2.6.32-279.el6.x86_64): The kernel version installed on the managed machine
ansible_memtotal_mb (for example, 996): The total memory in megabytes available on the managed machine
ansible_processor_count (for example, 1): The total CPUs available on the managed machine
ansible_virtualization_role (for example, guest): Whether the machine is a guest or a host machine
ansible_virtualization_type (for example, kvm): The type of virtualization set up on the managed machine

These variables are gathered using Python from the host system; if you have facter or ohai installed on the remote node, the setup module will execute them and return their data as well. As with other facts, ohai facts are prepended with ohai_ and facter facts with facter_.

While the setup module doesn't appear to be too useful on the command line, once you start writing playbooks it will come into its own. If all the modules in Ansible did as little as the setup and ping modules, we would not be able to change anything on the remote machine. Almost all of the other modules that Ansible provides, such as the file module, allow us to actually configure the remote machine. The file module can be called with a single path argument; this will cause it to return information about the file in question.
If you give it more arguments, it will try to alter the file's attributes and tell you if it has changed anything. Ansible modules will almost always tell you if they have changed anything, which becomes more important when you are writing playbooks.

You can call the file module, as shown in the following command, to see details about /etc/fstab:

$ ansible machinename -u root -k -m file -a 'path=/etc/fstab'

The preceding command should elicit a response like the following code:

machinename | success >> {
    "changed": false,
    "group": "root",
    "mode": "0644",
    "owner": "root",
    "path": "/etc/fstab",
    "size": 779,
    "state": "file"
}

Or like the following command to create a new test directory in /tmp:

$ ansible machinename -u root -k -m file -a 'path=/tmp/test state=directory mode=0700 owner=root'

The preceding command should return something like the following code:

machinename | success >> {
    "changed": true,
    "group": "root",
    "mode": "0700",
    "owner": "root",
    "path": "/tmp/test",
    "size": 4096,
    "state": "directory"
}

The second command will have the changed variable set to true if the directory doesn't exist or has different attributes. When run a second time, the value of changed should be false, indicating that no changes were required.

There are several modules that accept similar arguments to the file module; one such example is the copy module. The copy module takes a file on the controller machine, copies it to the managed machine, and sets the attributes as required.
For example, to copy the /etc/fstab file to /tmp on the managed machine, you will use the following command:

$ ansible machinename -m copy -a 'src=/etc/fstab dest=/tmp/fstab mode=0700 owner=root'

The preceding command, when run the first time, should return something like the following code:

machinename | success >> {
    "changed": true,
    "dest": "/tmp/fstab",
    "group": "root",
    "md5sum": "fe9304aa7b683f58609ec7d3ee9eea2f",
    "mode": "0700",
    "owner": "root",
    "size": 637,
    "src": "/root/.ansible/tmp/ansible-1374060150.96-77605185106940/source",
    "state": "file"
}

There is also a module called command that will run any arbitrary command on the managed machine. This lets you configure it with any arbitrary command, such as a preprovided installer or a self-written script; it is also useful for rebooting machines. Please note that this module does not run the command within the shell, so you cannot perform redirection, use pipes, expand shell variables, or background commands.

Ansible modules strive to prevent changes being made when they are not required. This is referred to as idempotency, and can make running commands against multiple servers much faster. Unfortunately, Ansible cannot know if your command has changed anything or not, so to help it be more idempotent you have to give it some help. It can do this either via the creates or the removes argument. If you give a creates argument, the command will not be run if the named file exists. The opposite is true of the removes argument: if the named file exists, the command will be run.

You run the command as follows:

$ ansible machinename -m command -a 'rm -rf /tmp/testing removes=/tmp/testing'

If there is no file or directory named /tmp/testing, the command output will indicate that it was skipped, as follows:

machinename | skipped

Otherwise, if the file did exist, it will look as follows:

ansibletest | success | rc=0 >>

Often it is better to use another module in place of the command module.
Other modules offer more options and can better capture the problem domain they work in. For example, it would be much less work for Ansible, and also for the person writing the configuration, to use the file module in this instance, since the file module will recursively delete something if the state is set to absent. So, the preceding command would be equivalent to the following command:

$ ansible machinename -m file -a 'path=/tmp/testing state=absent'

If you need to use features usually available in a shell while running your command, you will need the shell module. This way you can use redirection, pipes, or job backgrounding. You can pick which shell to use with the executable argument. The shell module also supports the creates argument, but does not support the removes argument. You can use the shell module as follows:

$ ansible machinename -m shell -a '/opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log'

Summary

In this article, we have covered which installation type to choose, installing Ansible, and how to build an inventory file to reflect your environment. After this, we saw how to use Ansible modules in an ad hoc style for simple tasks. Finally, we discussed how to learn which modules are available on your system and how to use the command line to get instructions for using a module.

Resources for Article:

Further resources on this subject:

Configuring Manage Out to DirectAccess Clients [Article]
Creating and configuring a basic mobile application [Article]
Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]
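The inventory file mentioned in the summary is a plain INI-style file (by default /etc/ansible/hosts) whose group names can be used as the host patterns discussed earlier. The following is a minimal sketch with hypothetical hostnames, not a configuration from this article:

```ini
# /etc/ansible/hosts -- hostnames below are placeholders
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
```

With this file in place, a group name works anywhere a host pattern does, for example: ansible webservers -u root -k -m setup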
What is Drupal?

Packt
30 Oct 2013
3 min read
(For more resources related to this topic, see here.)

Currently, Drupal is being used as a CMS in the following domains:

Arts
Banking and Financial
Beauty and Fashion
Blogging
Community
E-Commerce
Education
Entertainment
Government
Health Care
Legal Industry
Manufacturing and Energy
Media
Music
Non-Profit
Publishing
Social Networking
Small Business

The diversity offered by Drupal is the reason for its growing popularity. Drupal is written in PHP. PHP is an open source server-side scripting language, and it has changed the technological landscape to a great extent. The Economist, Examiner.com, and The White House websites have been developed in Drupal.

System requirements

Disk space: A minimum installation requires 15 megabytes. 60 MB is needed for a website with many contributed modules and themes installed. Keep in mind that you need much more for the database, files uploaded by the users, media, backups, and other files.

Web server: Apache, Nginx, or Microsoft IIS.

Database: Drupal 6: MySQL 4.1 or higher, PostgreSQL 7.1. Drupal 7: MySQL 5.0.15 or higher with PDO, PostgreSQL 8.3 or higher with PDO, SQLite 3.3.7 or higher. Microsoft SQL Server and Oracle are supported by additional modules.

PHP: Drupal 6: PHP 4.4.0 or higher (5.2 recommended). Drupal 7: PHP 5.2.5 or higher (5.3 recommended). Drupal 8: PHP 5.3.10 or higher.

How to create multiple websites using Drupal

Multi-site allows you to share a single Drupal installation (including core code, contributed modules, and themes) among several sites, and it is one of Drupal's greatest features. The multi-site feature is also helpful in managing code during upgrades. Each site will have its own content, settings, enabled modules, and enabled theme.

When to use the multi-site feature?

If the sites are similar in functionality (they use the same modules or the same Drupal distribution), you should use the multi-site feature.
If the functionality is different, don't use multi-site. To create a new site using a shared Drupal code base, you must complete the following steps:

Create a new database for the site (if there is already an existing database, you can also use it by defining a prefix in the installation procedure).
Create a new subdirectory of the 'sites' directory with the name of your new site (see below for information on how to name the subdirectory).
Copy the file sites/default/default.settings.php into the subdirectory you created in the previous step.
Rename the new file to settings.php.
Adjust the permissions of the new site directory.
Make symbolic links if you are using a subdirectory such as packtpub.com/subdir and not a subdomain such as subd.example.com.
In a web browser, navigate to the URL of the new site and continue with the standard Drupal installation procedure.

Summary

This article discusses in brief the Drupal platform and the requirements for installing it.

Resources for Article:

Further resources on this subject:

Drupal Web Services: Twitter and Drupal [Article]
Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article]
Drupal 7 Module Development: Drupal's Theme Layer [Article]
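The file-system part of the multi-site steps listed earlier can be sketched in a few lines of Python. This is only an illustration: it builds a scratch directory standing in for a real Drupal root, and "shop.example.com" is a hypothetical site name:

```python
import os
import shutil
import tempfile

# Scratch tree standing in for a real Drupal installation root.
drupal_root = os.path.join(tempfile.mkdtemp(), "drupal")
os.makedirs(os.path.join(drupal_root, "sites", "default"))
with open(os.path.join(drupal_root, "sites", "default",
                       "default.settings.php"), "w") as f:
    f.write("<?php\n// stand-in for the file Drupal ships\n")

# Step 2: create a new subdirectory of 'sites' named after the new site.
site = os.path.join(drupal_root, "sites", "shop.example.com")
os.makedirs(site)

# Steps 3 and 4: copy default.settings.php in and rename it to settings.php.
shutil.copy(os.path.join(drupal_root, "sites", "default",
                         "default.settings.php"),
            os.path.join(site, "settings.php"))

# Step 5: adjust the permissions of the new site directory and file.
os.chmod(site, 0o755)
os.chmod(os.path.join(site, "settings.php"), 0o644)

print(os.path.isfile(os.path.join(site, "settings.php")))
```

On a real installation you would run the equivalent mkdir, cp, mv, and chmod commands inside your actual Drupal root, then finish with the web installer as described in the last step.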
Packt
23 Dec 2013
6 min read

Working with Tooltips

(For more resources related to this topic, see here.)

The jQuery team introduced their version of the tooltip as part of changes to Version 1.9 of the library; it was designed to act as a direct replacement for the standard tooltip used in all browsers. The difference here, though, is that whilst you can't style the standard tooltip, jQuery UI's replacement is intended to be accessible, themeable, and completely customizable. It has been set to display not only when a control receives focus, but also when you hover over that control, which makes it easier to use for keyboard users.

Implementing a default tooltip

Tooltips were built to act as direct replacements for the browser's native tooltips. They will recognize the default markup of the title attribute in a tag, and use it to automatically add the additional markup required for the widget. The target selector can be customized, though, using tooltip's items and content options. Let's first have a look at the basic structure required for implementing tooltips. In a new file in your text editor, create the following page:

<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<title>Tooltip</title>
<link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">
<style>
p { font-family: Verdana, sans-serif; }
</style>
<script src="js/jquery-2.0.3.js"></script>
<script src="development-bundle/ui/jquery.ui.core.js"></script>
<script src="development-bundle/ui/jquery.ui.widget.js"></script>
<script src="development-bundle/ui/jquery.ui.position.js"></script>
<script src="development-bundle/ui/jquery.ui.tooltip.js"></script>
<script>
$(document).ready(function($){
  $(document).tooltip();
});
</script>
</head>
<body>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla blandit mi quis imperdiet semper. Fusce vulputate venenatis fringilla. Donec vitae facilisis tortor. Mauris dignissim nibh ac justo ultricies, nec vehicula ipsum ultricies. Mauris molestie felis ligula, id tincidunt urna consectetur at. Praesent <a href="http://www.ipsum.com" title="This was generated from www.ipsum.com">blandit</a> faucibus ante ut semper. Pellentesque non tristique nisi. Ut hendrerit tempus nulla, sit amet venenatis felis lobortis feugiat. Nam ac facilisis magna. Praesent consequat, risus in semper imperdiet, nulla lorem aliquet nisi, a laoreet nisl leo rutrum mauris.</p>
</body>
</html>

Save the code as tooltip1.html in your jqueryui working folder. Let's review what was used. The following script and CSS resources are needed for the default tooltip widget configuration:

jquery.ui.all.css
jquery-2.0.3.js
jquery.ui.core.js
jquery.ui.widget.js
jquery.ui.tooltip.js

The script required to create a tooltip when using the title attribute in the underlying HTML can be as simple as this, which should be added after the last <script> element in your code, as shown in the previous example:

<script>
$(document).ready(function($){
  $(document).tooltip();
});
</script>

In this example, when hovering over the link, the library adds the requisite aria-describedby code for screen readers into the HTML link. The widget then dynamically generates the markup for the tooltip, and appends it to the document, just before the closing </body> tag. This is automatically removed as soon as the target element loses focus.

ARIA, or Accessible Rich Internet Applications, provides a way to make content more accessible to people with disabilities. You can learn more about this initiative at https://developer.mozilla.org/en-US/docs/Accessibility/ARIA.

It is not necessary to only use the $(document) element when adding tooltips. Tooltips will work equally well with classes or selector IDs; using a selector ID will give a finer degree of control.

Overriding the default styles

When styling the Tooltip widget, we are not limited to merely using the prebuilt themes on offer; we can always elect to override existing styles with our own.
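The items and content options mentioned above let the widget target elements other than those carrying a title attribute. The following browser snippet is a hedged sketch of how that customization might look; the data-tip attribute and the "Note:" prefix are illustrative assumptions, not part of the jQuery UI API.

```javascript
// Hedged sketch (not from the article): source tooltips from a custom
// data-tip attribute, falling back to the native title attribute.
$(document).ready(function ($) {
  $(document).tooltip({
    items: "[data-tip], [title]",   // which elements receive tooltips
    content: function () {
      var el = $(this);
      // prefer the custom attribute when present
      return el.is("[data-tip]")
        ? "Note: " + el.attr("data-tip")
        : el.attr("title");
    }
  });
});
```

With this in place, markup such as <span data-tip="Generated placeholder text">blandit</span> would receive a styled tooltip even though it has no title attribute.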
In our next example, we'll see how easy this is to accomplish, by making some minor changes to the example from tooltip1.html. In a new document, add the following styles, and save it as tooltipOverride.css, within the css folder:

p { font-family: Verdana, sans-serif; }

.ui-tooltip {
  background: #637887;
  color: #fff;
}

Don't forget to link to the new style sheet from the <head> of your document:

<link rel="stylesheet" href="css/tooltipOverride.css">

Before we continue, it is worth explaining a great trick for styling tooltips before committing the results to code. If you are using Firefox, you can download and install the Toggle JS add-on for Firefox, which is available from https://addons.mozilla.org/en-US/firefox/addon/toggle-js/. This allows us to switch off JavaScript on a per-page basis; we can then hover over the link to create the tooltip, before expanding the markup in Firebug and styling it at our leisure.

Save your HTML document as tooltip2.html. When we run the page in a browser, you should see the modified tooltip appear when hovering over the link in the text:

Using prebuilt themes

If creating completely new styles by hand is overkill for your needs, you can always elect to use one of the prebuilt themes that are available for download from the jQuery UI site. This is a really easy change to make. We first need to download a copy of the replacement theme; in our example, we're going to use one called Excite Bike. Let's start by browsing to http://jqueryui.com/download/, then deselecting the Toggle All option. We don't need to download the whole library, just the theme at the bottom; change the theme option to display Excite Bike, then select Download. Next, open a copy of tooltip2.html and look for this line:

<link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">

You will notice the highlighted word in the above line. This is the name of the existing theme.
Change this to excite-bike, save the document as tooltip3.html, remove the tooltipOverride.css link, and you're all set. The following is our replacement theme in action:

With a single change of word, we can switch between any of the prebuilt themes available for use with jQuery UI (or indeed any of the custom ones that others have made available online), as long as you have downloaded and copied the theme into the appropriate folder. There may be occasions, though, where we need to tweak the settings. This gives us the best of both worlds, where we only need to concentrate on making the required changes. Let's take a look at how we can alter an existing theme, using ThemeRoller.
Packt
20 Aug 2014
14 min read

Now You're Ready!

In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning.

(For more resources related to this topic, see here.)

As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics:

Exporting your course from Canvas to your computer
Connecting Canvas to education in the 21st century

Exporting your course

Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons:

It is wise to save a backup version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete.
Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion. Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or Blackboard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion. You will have a copy of the course to import for the next time you are scheduled to teach the same course. You might build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS. Exporting your course does not remove the course from Canvas; your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or you choose to delete it. To export your entire course, complete the following steps:

1. Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot:
2. On the Settings page, look to the right-hand side menu. Click on the Export Course Content button, which is highlighted in the following screenshot:
3. A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot:
4. Once you click on Create Export, a progress bar will appear. As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message:
5. When the export is complete, you will receive an e-mail from notifications@instructure.com that resembles the following screenshot.
Click on the Click to view exports link in the e-mail: A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer. Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS. You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work. Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching. Connecting Canvas to education in the 21st century While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence—to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. 
While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never before existed. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills is indeed proving more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals. Enacting the Framework for 21st Century Learning As education across the world continues to evolve through time, the development of frameworks, methods, and philosophies of teaching have shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21). This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future. 
Core subjects and 21st century themes The Framework for 21st Century Learning describes the importance of learning certain core subjects including English, reading or language arts, world languages, the arts, Mathematics, Economics, Science, Geography, History, Government, and Civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills: Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another: You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. 
As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course—you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions. You might set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class. Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course: In a Math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts with the YouTube app. You could link to live steam websites of the movement of the markets and create quizzes to assess students' understanding. Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions: You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of responses you receive. 
Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection. Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways: In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others. Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability: In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits. For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything. Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board. 
Learning and innovation skills A number of specific elements combined can enable students to develop learning and innovation skills to prepare them for the increasingly "complex life and work environments in the 21st century." The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills: Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments: You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page, and then enable any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content and synthesizing ideas within Canvas allows each group's creativity to shine. As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content. 
Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills: Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites. Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important: As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet in contrast to collaborating and communicating in person.
Packt
21 Nov 2013
5 min read

Platform as a Service

(For more resources related to this topic, see here.)

Platform as a Service is a very interesting take on the traditional cloud computing models. While there are many (often conflicting) definitions of a PaaS, for all practical purposes PaaS provides a complete platform and environment to build and host applications or services. The emphasis is clearly on providing an end-to-end precreated environment to develop and deploy the application, one that automatically scales as required. PaaS packs together all the necessary components, such as an operating system, database, programming language, libraries, web or application container, and a storage or hosting option. PaaS offerings vary, and their chargebacks depend on what is utilized by the end user. There are excellent public offerings of PaaS such as Google App Engine, Heroku, Microsoft Azure, and Amazon Elastic Beanstalk. In a private cloud offering for an enterprise, it is possible to implement a similar PaaS environment. Out of the various possibilities, we will focus on building a Database as a Service (DBaaS) infrastructure using Oracle Enterprise Manager. DBaaS is sometimes seen as a mix of PaaS and SaaS, depending on the kind of service it provides. A DBaaS that provides a service such as a database leans more towards its PaaS legacy, but if it provides a service such as Business Intelligence, it takes more of a SaaS form. Oracle Enterprise Manager enables self-service provisioning of virtualized database instances out of a common shared database instance or cluster. Oracle Database is built to be clustered, and this makes it an easy fit for a robust DBaaS platform.

Setting up the PaaS infrastructure

Before we go about implementing a DBaaS, we will need to make sure our common platform is up and working. We will now check how we can create a PaaS Zone.

Creating a PaaS Zone

Enterprise Manager groups hosts or Oracle VM Manager Zones into PaaS Infrastructure Zones.
You will need to have at least one PaaS Zone before you can add more features into the setup. To create a PaaS Zone, make sure that you have the following:

The EM_CLOUD_ADMINISTRATOR, EM_SSA_ADMINISTRATOR, and EM_SSA_USER roles created
A software library

To set up a PaaS Infrastructure Zone, perform the following steps:

1. Navigate to Setup | Cloud | PaaS Infrastructure Zone.
2. Click on Create in the PaaS Infrastructure Zone main page.
3. Enter the necessary details for the PaaS Infrastructure Zone, such as Name and Description.
4. Based on the type of members you want to add to this zone, you can select either of the following member types:

Host: This option will only allow host targets to be part of this zone. Also, make sure you provide the necessary details for the placement policy constraints defined per host. These values are used to prevent overutilization of hosts which are already being heavily used. You can set a percentage threshold for Maximum CPU Utilization and Maximum Memory Allocation; any host exceeding this threshold will not be used for provisioning.
OVM Zone: This option will allow you to add Oracle VM Manager Zone targets.

If you select Host at this stage, you will see the following page: Click on the + button to add named credentials, and make sure you click on the Test Credentials button to verify each credential. These named credentials must be global and available on all the hosts in this zone. Click on the Add button to add target hosts to this zone. If you selected OVM Zone in the previous screen (step 1 of 4), you will be presented with the following screen: Click on the Add button to add roles that can access this PaaS Infrastructure Zone. Once you have created a PaaS Infrastructure Zone, you can proceed with setting up the necessary pieces for a DBaaS. However, time and again you might want to edit or review your PaaS Infrastructure Zone.
To view and manage your PaaS Infrastructure Zones, navigate to Enterprise Menu | Cloud | Middleware and Database Cloud | PaaS Infrastructure Zones. From this page you can create, edit, delete, or view more details for a PaaS Infrastructure Zone. Clicking on the PaaS Infrastructure Zone link will display a detailed drill-down page with quite a few details related to that zone. The page is shown as follows:

This page shows a lot of very useful details about the zone. Some of them are listed as follows:

General: This section shows stats for this zone, such as the total number of software pools, Oracle VM zones, member types (hosts or Oracle VM zones), and other related details.
CPU and Memory: This section gives an overview of CPU and memory utilization across all servers in the zone.
Issues: This section shows incidents and problems for the target. This is a handy summary to check if there are any issues that need attention.
Request Summary: This section shows the status of requests currently being processed.
Software Pool Summary: This section shows the name and type of each software pool in the zone.
Unallocated Servers: This section shows a list of servers that are not associated with any software pool.
Members: This section shows the members of the zone.
Service Template Summary: This section shows the service templates associated with the zone.

Summary

We saw in this article how PaaS plays a vital role in the structure of a DBaaS architecture.

Resources for Article:

Further resources on this subject:
What is Oracle Public Cloud? [Article]
Features of CloudFlare [Article]
Oracle Tools and Products [Article]
Packt
27 Sep 2011
9 min read

Drupal 7 Social Networking: Managing Users and Profiles

(For more resources on Drupal, see here.)

What are we going to do and why?

Before we get started, let's take a closer look at what we are going to do in this article and why. At the moment, our users can interact with the website and contribute content, including through their own personal blog. Apart from the blog, there isn't a great deal which differentiates our users; they are simply a username with a blog! One key improvement to make now is to make provisions for customizable user profiles. Our site being a social network with a dinosaur theme, the following would be useful information to have on our users:

Details of their pet dinosaurs, including:
  Name
  Breed
  Date of birth
  Hobbies
Their details for other social networking sites; for example, links to their Facebook profile, Twitter account, or LinkedIn page
Location of the user (city / area)
Their web address (if they have their own website)

Some of these can be added to user profiles by adding new fields to profiles, using the built-in Field API; however, we will also install some additional modules to extend the default offering. Many websites allow users to upload an image to associate with their user account, either a photograph or an avatar to represent them. Drupal has provisions for this, but it has some drawbacks which can be fixed using Gravatar. Gravatar is a social avatar service through which users upload their avatar, which is then accessed by other websites that request the avatar using the user's e-mail address. This is convenient for our users, as it saves them having to upload their avatars to our site, and reduces the amount of data stored on our site, as well as the amount of data being transferred to and from our site. Since not all users will want to use a third-party service for their avatars (particularly users who are not already signed up to Gravatar), we can let them upload their own avatars if they wish, through the Upload module.
There are many other social networking sites out there which don't compete with ours and are more generalized; as a result, we might want to allow our users to promote their profiles for other social networks too. We can download and install the Follow module, which will allow users to publicize their profiles for other social networking sites on their profile on our site. Once our users get to know each other more, they may become more interested in each other's posts and topics, and may wish to look up a specific user's contribution to the site. The Tracker module allows users to track one another's contributions to the site. It is a core module, which just needs to be enabled and set up. Now that we have a better idea of what we are going to do in this article, let's get started!

Getting set up

As this article covers features provided by both core modules and contributed modules (which need to be downloaded first), let's download and enable the modules first, saving us the need for continually downloading and enabling modules throughout the article. The modules which we will require are:

Tracker (core module)
Gravatar (can be downloaded from http://drupal.org/project/gravatar)
Follow (can be downloaded from http://drupal.org/project/follow)
Field_collection (can be downloaded from http://drupal.org/project/field_collection)
Entity (can be downloaded from http://drupal.org/project/entity)
Trigger (core module)

These modules can be downloaded, and then the contents extracted to the /sites/all/modules folder within our Drupal installation. Once extracted, they will be ready to be enabled within the Modules section of our admin area.

Users, roles, and permissions

Let's take a detailed look at users, roles, and permissions and how they all fit together. Users, roles, and permissions are all managed from the People section of the administration area:

User management

Within the People section, users are listed by default on the main screen.
These are user accounts which are either created by us, as administrators, or created when a visitor to our site signs up for a user account. From here we can search for particular types of users, create new users, and edit users—including updating their profiles, suspending their accounts, or deleting them permanently from our social network. Once our site starts to gain popularity, it will become more difficult for us to navigate through the user list. Thankfully, there are search, sort, and filter features available to make this easier for us. Let's start by taking a look at our user list. This user list shows, for each user:

- Their username
- Whether their user account is active or blocked (their status)
- The roles which are associated with their account
- How long they have been a member of our community
- When they last accessed our site
- A link to edit the user's account

Users: Viewing, searching, sorting, and filtering

Clicking on a username will take us to the profile of that particular user, allowing us to view their profile as normal. Clicking on one of the headings in the user list allows us to sort the list by the field we selected. This could be particularly useful to see who our latest members are, or to see which users are blocked, if we need to reactivate a particular account. We can also filter the user list based on a particular role that is assigned to a user, a particular permission they have (by virtue of their roles), or by their status (whether their account is active or blocked). This is managed from the SHOW ONLY USERS WHERE panel.

Creating a user

Within the People area, there is an Add user link, which will allow us to create a new user account for our site. This takes us to the new user page, where we are required to fill out the Username, E-mail address, and Password (twice, to confirm) for the new user account we wish to create.
We can also select the status of the user (Active or Blocked), any roles we wish to apply to their account, and indicate whether we want to automatically e-mail the user to notify them of their new account.

Editing a user

To edit a user account, we simply need to click the edit link displayed next to the user in the user list. This takes us to a page similar to the create user screen, except that it is pre-populated with the user's details. It also contains a few other settings related to some default installed modules. As we install new modules, the page may include more options.

Inform the user! If you are planning to change a user's username, password, or e-mail address, you should notify them of the change; otherwise, they may struggle the next time they try to log in!

Suspending / blocking a user

If we need to block or suspend a user, we can do this from the edit screen by updating their status to Blocked. This will prevent the user from accessing our site. For example, if a user had been posting inappropriate material, even after a number of warnings, we could block their account to prevent them from accessing the site.

Why block? Why not just delete? If we were to simply delete a user who was troublesome on the site, they could simply sign up again (unless we went to a separate area and also blocked their e-mail address and username). Of course, the user could still sign up again using a different e-mail address and a different username, but this helps us keep things under control.

Canceling and deleting a user account

Also within the edit screen is the option to cancel a user's account. On clicking the Cancel account button, we are given a number of options for how we wish to cancel the account. The first and third options will at least keep the context of any discussions or contributions in which the user was involved.
The second option will unpublish their content, so if, for example, comments or pages are removed which have an impact on the community, we can at least re-enable them. The final option will delete the account and all content associated with it. Finally, we can also select whether the user themselves must confirm that they wish to have their account deleted. This is particularly useful if the cancelation is in response to a request from the user to delete all of their data, as they can be given a final chance to change their mind.

Bulk user operations

For occasions when we need to perform specific operations on a range of user accounts (for example, unblocking a number of users, or adding / removing roles from specific users), we can use the Update options panel in the user list. From here we simply select the users we want to apply an action to, and then select one of the following options from the UPDATE OPTIONS list:

- Unblock the selected users
- Block the selected users
- Cancel the selected user accounts
- Add a role to the selected users
- Remove a role from the selected users

Roles

Users are grouped into a number of roles, which in turn have permissions assigned to them. By default, there are three roles within Drupal:

- Administrators
- Anonymous users
- Authenticated users

The anonymous and authenticated roles can be edited, but they cannot be renamed or deleted. We can manage user roles by navigating to People | Permissions | Roles. The edit permissions link allows us to edit the permissions associated with a specific role. To create a new role, we simply need to enter the name for the role in the text box provided and click the Add role button.
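The relationship described above — users grouped into roles, roles carrying permissions — can be sketched in a few lines of Python. A user's effective permissions are the union of the permissions granted to each of their roles (the permission strings here are real Drupal permission names, but the model itself is an illustration, not Drupal's implementation):

```python
# Illustrative model: a user's effective permissions are the union
# of the permissions granted by each of their roles.
roles = {
    "anonymous user": {"access content"},
    "authenticated user": {"access content", "post comments"},
    "administrator": {"access content", "post comments", "administer users"},
}

def user_can(user_roles, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in roles[r] for r in user_roles)

print(user_can(["authenticated user"], "post comments"))     # True
print(user_can(["authenticated user"], "administer users"))  # False
```

This is why blocking is enforced separately from roles: a blocked account never reaches the permission check at all.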
Packt
10 Feb 2011
9 min read

IBM FileNet P8 Content Manager: Exploring Object Store-level Items

As with the Domain, there are two basic paths in FEM for accessing things in an Object Store. The tree view in the left-hand panel can be expanded to show Object Stores and many repository objects within them, as illustrated in the screenshot below. Each individual Object Store has a right-click context menu. Selecting Properties from that menu will bring up a multi-tabbed pop-up panel. We'll look first at the General tab of that panel.

Content Access Recording Level

Ordinarily, the Content Engine (CE) server does not keep track of when a particular document's content was last retrieved, because doing so requires an expensive database update that is often uninteresting. The ContentAccessRecordingLevel property on the Object Store can be used to enable the recording of this optional information in a document or annotation's DateContentLastAccessed property. It is off by default. It is sometimes interesting to know, for example, that document content was accessed within the last week as opposed to three years ago. Once a particular document has had its content read, there is a good chance that there will be a few additional accesses in the same neighborhood of time (not for a technical reason; rather, it's just statistically likely). Rather than record the last access time for each access, an installation can choose, via this property's value, to have access times recorded only with a granularity of hourly or daily. This can greatly reduce the number of database updates while still giving a suitable approximation of the last access time. There is also an option to update the DateContentLastAccessed property on every access.

Auditing

The CE server can record when clients retrieve or update selected objects. Enabling that involves setting up subscriptions to object instances or classes. This is quite similar to the event subsystem in the CE server.
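The saving from hourly or daily granularity can be sketched as follows: the recorded access time is truncated to a granularity bucket, and repeated reads within the same bucket require no further database write. This is a simplified model of the idea, not CE's actual implementation:

```python
from datetime import datetime

def truncate(ts: datetime, granularity: str) -> datetime:
    """Truncate an access timestamp to the recording granularity."""
    if granularity == "hourly":
        return ts.replace(minute=0, second=0, microsecond=0)
    if granularity == "daily":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    return ts  # "every access": record the exact time

last_recorded = None
updates = 0
for access in [datetime(2011, 2, 10, 9, 5), datetime(2011, 2, 10, 9, 40),
               datetime(2011, 2, 10, 11, 0)]:
    bucket = truncate(access, "hourly")
    if bucket != last_recorded:  # only write when the bucket changes
        last_recorded = bucket
        updates += 1
print(updates)  # 2 database updates instead of 3
```

The coarser the bucket, the fewer writes; the recorded value remains a suitable approximation of the last access time.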
Because it can be quite elaborate to set up the necessary auditing configuration, it can also be enabled or disabled completely at the Object Store level.

Checkout type

The CE server offers two document checkout types, Collaborative and Exclusive. The difference lies in who is allowed to perform the subsequent checkin. An exclusive checkout will only allow the same user to do the checkin. Via an API, an application can make the explicit choice for one type or the other, or it can use the Object Store default value. Using the default value is often handy, since a given application may not have any context for deciding one form over another. Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over it. In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default Collaborative unless you have some specific use case that demands Exclusive.

Text Index Date Partitioning

Most of the values on the CBR tab, as shown in the following figure, are read-only because they are established when the Content Search Engine (CSE) is first set up. One item that can be changed on a per-Object Store basis is the date-based partitioning of text index collections. Partitioning of the text index collections allows for more efficient searching of large collections, because the CE can narrow its search to a specific partition or partitions rather than searching the entirety of the text index. By default, there is no partitioning. If you check the box to change the partition settings, the Date Property drop-down presents a list of date-valued custom properties. In the screenshot above, you see the custom properties Received On and Sent On, which are from email-related documents. Once you select one of those properties, you're offered a choice of partitioning granularity, ranging from one month up to one year.
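A simplified model of date-based partitioning: each indexed object falls into a partition determined by its date property and the chosen granularity, so a date-bounded search only needs to touch the matching partitions. The partition-key scheme below is an illustration only, not CSE's internal format:

```python
from datetime import date

def partition_key(d: date, months: int) -> str:
    """Map a date-valued property to a partition bucket.

    `months` is the chosen granularity (1 to 12, i.e. one month
    up to one year per partition); the key names the bucket's
    starting year and month.
    """
    bucket = (d.month - 1) // months
    start_month = bucket * months + 1
    return f"{d.year}-{start_month:02d}"

# With quarterly (3-month) partitions, a search restricted to
# March 2010 only needs the partition starting 2010-01.
print(partition_key(date(2010, 3, 15), 3))  # 2010-01
print(partition_key(date(2010, 11, 2), 3))  # 2010-10
```

Coarser granularity means fewer, larger partitions; finer granularity narrows searches more but multiplies collections, which is the trade-off the setting exposes.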
Additional text index configuration properties are available if you select the Index Areas node in the FEM tree-view, then right-click an index area entry and select Properties. Although we are showing the screenshot here for reference, your environment may not yet have a CSE or any index areas if the needed installation procedures are not complete.

Cache configuration

Just as we saw at the Domain level, the Cache tab allows the configuration of various cache tuning parameters for each Object Store. As we've said before, you don't want to change these values without a good reason. The default values are suitable for most situations.

Metadata

One of the key features of CM is that it has an extensible metadata structure. You don't have to work within a fixed framework of pre-defined document properties. You can add additional properties to the Document class, and you can even make subclasses of Document for specific purposes. For example, you might have a subclass called CustomerProfile, another subclass called DesignDocument, yet another subclass called ProductDescription, and so on. Creating subclasses lets you define just the properties you need to specialize the class to your particular business purpose. There is no need to have informal rules about where properties should be ignored because they're not applicable. There is also generally no need to have a property that marks a document as a CustomerProfile versus something else. The class provides that distinction. CM comes with a set of pre-defined system classes, and each class has a number of pre-defined system properties (many of which are shared across most system classes). There are pre-defined system classes for Document, Folder, Annotation, CustomObject, and many others. The classes just mentioned are often described as the business object classes because they are used to directly implement common business application concepts.
System properties are properties for which the CE server has some specifically-coded behavior. Some system properties are used to control server behavior, and others provide reports of some kind of system state. We've seen several examples already in the Domain and Object Store objects. It's common for applications to create their own subclasses and custom properties as part of their installation procedures, but it is equally common to do similar things manually via FEM. FEM contains several wizards to make the process simpler for the administrator, but, behind the scenes, various pieces are always in play.

Property templates

The foundation for any custom property is a property template. If you select the Property Templates node in the tree view, you will see a long list of existing property templates. Double-clicking on any item in the list will reveal that property template's properties. A property template is an independently persistable object, so it has its own identity and security. Most system properties do not have explicit property templates; their characteristics come about from a different mechanism internal to the CE server. Property templates have that name because the characteristics they define act as a pattern for properties added to classes, where the property is embodied in a property definition for a particular class. Some of the property template properties can be overridden in a property definition, but some cannot. For example, the basic data type and cardinality cannot be changed once a property template is created. On the other hand, things like settability and a value being required can be modified in the property definition. When creating a new property with no existing property template, you can either create the property template independently, ahead of time, or you can follow the FEM wizard steps for adding a property to a class.
FEM will prompt you with additional panels if you need to create a property template for the property being added.

Choice lists

Most property types allow for a few simple validity checks to be enforced by the CE server. For example, an integer-valued property has an optional minimum and maximum value based on its intended use (in addition to the expected absolute constraints imposed by the integer data type). For some use cases, it is desirable to limit the allowed values to a specific list of items. The mechanism for that in the CE server is the choice list, and it's available for string-valued and integer-valued properties. If you select the Choice Lists node in the FEM tree view, you will see a list of existing top-level choice lists. The example choice lists in the screenshot below all happen to come from AddOns installed in the Object Store. Double-clicking on any item in the list will reveal that choice list's properties. A choice list is an independently persistable object, so it has its own identity and security. We've mentioned independent objects a couple of times, and more mentions are coming. For now, it is enough to think of them as objects that can be stored or retrieved in their own right. Most independent objects have their own access security. Contrast independent objects with dependent objects, which only exist within the context of some independent object. A choice list is a collection of choice objects, although a choice list may be nested hierarchically. That is, at any position in a choice list there can be another choice list rather than a simple choice. A choice object consists of a localizable display name and a choice value (a string or an integer, depending on the type of choice list). Nested choice lists can only be referenced within some top-level choice list.

Classes

Within the FEM tree view are two nodes describing classes: Document Class and Other Classes.
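The nesting rule — any position in a choice list may hold either a simple choice or another choice list — is naturally modeled as a tree. A small sketch of walking such a structure (the data shapes and example values here are illustrative, not the actual CE API classes):

```python
def flatten(choices):
    """Yield (display_name, value) pairs from a possibly nested choice list.

    Each entry is either a leaf choice ("name", value) or a nested
    choice list ("group name", [entries...]).
    """
    for name, value in choices:
        if isinstance(value, list):  # a nested choice list
            yield from flatten(value)
        else:                        # a simple choice
            yield (name, value)

priorities = [
    ("Low", 1),
    ("Normal", 2),
    ("Urgent", [("High", 3), ("Critical", 4)]),  # nested choice list
]
print(list(flatten(priorities)))
# [('Low', 1), ('Normal', 2), ('High', 3), ('Critical', 4)]
```

A validity check against the list only needs the flattened leaf values; the nesting exists purely to group choices for display.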
Documents are listed separately only for user convenience (since Document subclasses occur most frequently). You can think of these two nodes as one big list. In any case, expanding the node in the tree reveals hierarchically nested subclasses. Selecting a class from the tree reveals any subclasses and any custom properties. The screenshot shows the custom class EntryTemplate, which comes from a Workplace AddOn. You can see that it has two subclasses, RecordsTemplate and WebContentTemplate, and four custom properties. When we mention a specific class or property name, like EntryTemplate, we try to use the symbolic name, which has a restricted character set and never contains spaces. The FEM screenshots tend to show display names. Display names are localizable and can contain any Unicode character. Although the subclassing mechanism in CM generally mimics the subclassing concept in modern object-oriented programming languages, it does have some differences. You can add custom properties to an existing class, including many system classes. Although you can change some characteristics of properties on a subclass, there are restrictions on what you can do. For example, a particular string property on a subclass must have a maximum length equal to or less than that property's maximum length on the superclass.

Packt
08 Mar 2016
17 min read

Magento 2 – the New E-commerce Era

In this article by Ray Bogman and Vladimir Kerkhoff, the authors of the book Magento 2 Cookbook, we will cover the basic tasks related to creating a catalog and products in Magento 2. You will learn the following recipes:

- Creating a root catalog
- Creating subcategories
- Managing an attribute set

(For more resources related to this topic, see here.)

Introduction

This article explains how to set up a vanilla Magento 2 store. If Magento 2 is totally new to you, the basics are pointed out along the way; if you are currently working with Magento 1, much of this will feel familiar. The new backend of Magento 2 is the biggest improvement of all: the design is built responsively and offers a great user experience. Compared to Magento 1, this is a great improvement. The menu is located vertically on the left of the screen and works well in both desktop and mobile environments.

In this article, we will see how to set up a website with multiple domains using different catalogs. Depending on the website, store, and store view setup, we can create different subcategories, URLs, and products per domain name. There are a number of different ways customers can browse your store, but one of the most effective is layered navigation. Layered navigation is located in your catalog and holds product features to sort or filter. Every website benefits from great Search Engine Optimization (SEO); you will learn how to define catalog URLs per catalog. Throughout this article, we will cover the basics of setting up a multidomain configuration. Additional tasks required to complete a production-like setup are out of the scope of this article.

Creating a root catalog

The first thing that we need to start with when setting up a vanilla Magento 2 website is defining our website, store, and store view structure. So what is the difference between website, store, and store view, and why is it important? A website is the top-level container and the most important of the three.
It is the parent level of the entire store and is used, for example, to define domain names, different shipping methods, payment options, customers, orders, and so on. Stores can be used to define, for example, different store views with the same information. A store is always connected to a root catalog that holds all the categories and subcategories. One website can manage multiple stores, and every store has a different root catalog. When using multiple stores, it is not possible to share one basket; the main reason for this has to do with the configuration setup, where shipping, catalog, customer, inventory, tax, and payment settings are not sharable between different sites. Store views are the lowest level and are mostly used to handle different localizations. Every store view can be set to a different language. Besides using store views just for localization, they can also be used for Business to Business (B2B), hidden private sales pages (with noindex and nofollow), and so on. The option where we use the base link URL, for example (yourdomain.com/myhiddenpage), is easy to set up. The website, store, and store view structure is shown in the following image:

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required.

How to do it...

For the purpose of this recipe, let's assume that we need to create a multi-website setup including three domains (yourdomain.com, yourdomain.de, and yourdomain.fr) and separate root catalogs. The following steps will guide you through this. First, we need to update our NGINX configuration: we need to configure the additional domains before we can connect them to Magento. Make sure that all domain names are connected to your server and DNS is configured correctly.
Go to /etc/nginx/conf.d, open the default.conf file, and include the following content at the top of your file:

```nginx
map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}
```

Your configuration should now look like this:

```nginx
map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}

upstream fastcgi_backend {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name yourdomain.com;
    set $MAGE_ROOT /var/www/html;
    set $MAGE_MODE developer;
    ssl_certificate /etc/ssl/yourdomain-com.cert;
    ssl_certificate_key /etc/ssl/yourdomain-com.key;
    include /var/www/html/nginx.conf.sample;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location ~ /\.ht {
        deny all;
    }
}
```

Now let's go to the Magento 2 configuration file in /var/www/html/ and open the nginx.conf.sample file. Go to the bottom and look for the following:

```nginx
location ~ (index|get|static|report|404|503)\.php$
```

Now we add the following lines to the file under `fastcgi_pass fastcgi_backend;`:

```nginx
fastcgi_param MAGE_RUN_TYPE website;
fastcgi_param MAGE_RUN_CODE $magecode;
```

Your configuration should now look like this (this is only a small section of the bottom of the file):

```nginx
location ~ (index|get|static|report|404|503)\.php$ {
    try_files $uri =404;
    fastcgi_pass fastcgi_backend;
    fastcgi_param MAGE_RUN_TYPE website;
    fastcgi_param MAGE_RUN_CODE $magecode;
    fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off";
    fastcgi_param PHP_VALUE "memory_limit=256M \n max_execution_time=600";
    fastcgi_read_timeout 600s;
    fastcgi_connect_timeout 600s;
    fastcgi_param MAGE_MODE $MAGE_MODE;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```

The current setup uses the MAGE_RUN_TYPE value website. You may change website to store depending on your setup preferences. When changing the variable, you need to update your default.conf mapping codes as well.
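What the NGINX map block above does can be mirrored in a few lines of Python, which is handy for reasoning about it: the Host header is looked up in a table, and any unmatched host falls through to the default website code, base (the hostnames are the placeholder domains from this recipe):

```python
# Mirror of the NGINX `map $http_host $magecode` block: look up the
# request's Host header, falling back to the default website code.
MAGE_CODES = {
    "yourdomain.de": "de",
    "yourdomain.fr": "fr",
}

def mage_run_code(http_host: str, default: str = "base") -> str:
    return MAGE_CODES.get(http_host.lower(), default)

print(mage_run_code("yourdomain.de"))   # de
print(mage_run_code("yourdomain.com"))  # base
```

This fallback behavior is why the .com site keeps working untouched: it simply never matches a mapping entry and receives the base code.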
Now, all you have to do is restart NGINX and PHP-FPM to use your new settings. Run the following command:

```shell
service nginx restart && service php-fpm restart
```

Before we continue, we need to check whether our web server is serving the correct codes. Run the following command in the Magento 2 web directory, /var/www/html/pub (note the single quotes, which keep the shell from mangling the double quotes inside the PHP snippet):

```shell
echo '<?php header("Content-type: text/plain"); print_r($_SERVER); ?>' > magecode.php
```

Don't forget to update your nginx.conf.sample file with the new magecode entry. It's located at the bottom of your file and should look like this:

```nginx
location ~ (index|get|static|report|404|503|magecode)\.php$ {
```

Restart NGINX and open the file in your browser. As the output shows, the created MAGE_RUN variables are available. Congratulations, you just finished configuring NGINX with the additional domains. Now let's continue by connecting them in Magento 2.

Log in to the backend and navigate to Stores | All Stores. By default, Magento 2 has one Website, Store, and Store View set up. Click on Create Website and commit the following details:

- Name: My German Website
- Code: de

Next, click on Create Store and commit the following details:

- Web site: My German Website
- Name: My German Website
- Root Category: Default Category (we will change this later)

Next, click on Create Store View and commit the following details:

- Store: My German Website
- Name: German
- Code: de
- Status: Enabled

Continue with the same steps for the French domain. Make sure that the Code for both the Website and the Store View is fr. The next important step is connecting the websites with the domain names. Navigate to Stores | Configuration | Web | Base URLs. Change the Store View scope at the top to My German Website. You will be prompted when switching; press OK to continue. Now, unset the checkbox called Use Default for Base URL and Base Link URL and commit your domain name. Save, and repeat the same procedure for the other website.
The output should look like this: Save your entire configuration and clear your cache. Now go to Products | Categories and click on Add Root Category with the following data:

- Name: Root German
- Is Active: Yes
- Page Title: My German Website

Continue with the same steps for the French domain. You may add additional information here, but it is not needed. Changing the current Root Category called Default Category to Root English is also optional but advised. Save your configuration, go to Stores | All Stores, and change all of the stores to the appropriate Root Category that we just created. Every store should now have a dedicated Root Category. Congratulations, you just finished configuring Magento 2 with additional domains and dedicated Root Categories. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr.

How it works…

Let's recap and find out what we did throughout this recipe. In steps 1 through 11, we created a multistore setup for the .com, .de, and .fr domains using separate Root Categories. In steps 1 through 4, we configured the domain mapping in the NGINX default.conf file. Then, we added the fastcgi_param MAGE_RUN code to the nginx.conf.sample file, which manages which website or store view Magento serves for a given request. In step 6, we used an easy test method to check whether all domains run the correct MAGE_RUN code. In steps 7 through 9, we configured the website, store, and store view name and code for the given domain names. In step 10, we created additional Root Categories for the German and French stores. They were then connected to the previously created store configuration. All stores now have their own Root Category.

There's more…

Unable to buy additional domain names but would like to try setting up a multistore anyway? Here are some tips for creating one.
Depending on whether you are using Windows, Mac OS, or Linux, the following options apply.

Windows: Go to C:\Windows\System32\drivers\etc, open the hosts file as an administrator, and add the following (change the IP and domain names accordingly):

```
123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr
```

Save the file, click on the Start button, search for cmd.exe, and commit the following:

```shell
ipconfig /flushdns
```

Mac OS: Go to the /etc/ directory, open the hosts file as a super user, and add the same entries as above (change the IP and domain names accordingly). Save the file and run the following command in the shell:

```shell
dscacheutil -flushcache
```

Depending on your Mac version, check out the different commands here: http://www.hongkiat.com/blog/how-to-clear-flush-dns-cache-in-os-x-yosemite/

Linux: Go to the /etc/ directory, open the hosts file as a root user, and add the same entries as above (change the IP and domain names accordingly). Save the file and run the following command in the shell:

```shell
service nscd restart
```

Depending on your Linux version, check out the different commands here: http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/

Open your browser and surf to the custom-made domains. These domains work only on your PC. You can copy these IP and domain name entries to as many PCs as you prefer. This method also works great when you are developing or testing and your production domain is not available in your development environment.
Some websites have an easy setup using two levels, while others use five or more levels of subcategories. Always keep the user experience in mind; your customers need to navigate the pages easily. Keep it simple!

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required.

How to do it...

For the purpose of this recipe, let's assume that we need to set up a catalog including subcategories. The following steps will guide you through this. First, log in to the backend of Magento 2 and go to Products | Categories. As we have already created Root Categories, we start with the Root English catalog. Click on the Root English catalog on the left and then select the Add Subcategory button above the menu. Now commit the following, and repeat all steps for the other Root Catalogs:

- Name: Shoes (Schuhe) (Chaussures); Is Active: Yes; Page Title: Shoes (Schuhe) (Chaussures)
- Name: Clothes (Kleider) (Vêtements); Is Active: Yes; Page Title: Clothes (Kleider) (Vêtements)

As we have created the first level of our catalog, we can continue with the second level. Now click on the first-level category that you need to extend with a subcategory and select the Add Subcategory button. Commit the following, and repeat all steps for the other Root Catalogs:

- Name: Men (Männer) (Hommes); Is Active: Yes; Page Title: Men (Männer) (Hommes)
- Name: Women (Frau) (Femmes); Is Active: Yes; Page Title: Women (Frau) (Femmes)

Congratulations, you just finished configuring subcategories in Magento 2. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. Your categories should now look as follows:

How it works…

Let's recap and find out what we did throughout this recipe. In steps 1 through 4, we created subcategories for the English, German, and French stores.
In this recipe, we created a dedicated Root Catalog for every website. This way, every store can be configured using its own tax and shipping rules.

There's more…

In our example, we only submitted Name, Is Active, and Page Title. You may continue to commit the Description, Image, Meta Keywords, and Meta Description fields. By default, the URL key is the same as the Name field; you can change this depending on your SEO needs. Every category or subcategory has a default page layout defined by the theme. You may need to override this. Go to the Custom Design tab and click the drop-down menu of Page Layout. We can choose from the following options: 1 column, 2 columns with left bar, 2 columns with right bar, 3 columns, or Empty.

Managing an attribute set

Every product has a unique DNA; some products such as shoes could have different colors, brands, and sizes, while a snowboard could have weight, length, torsion, manufacturer, and style. Setting up a website with all possible attributes does not make sense. Depending on the products that you sell, you should create attributes that apply per website. When creating products for your website, attributes are the key elements and need to be thought through. What attributes do I need, and how many? How many values does each one need? These are all questions that can have a great impact on your website and, not to forget, its performance. Creating an attribute such as color and having 100 K different key values stored does not improve your overall speed and user experience. Always think things through. After creating the attributes, we combine them in attribute sets that can be picked when starting to create a product. Some attributes can be used more than once, while others are unique to one product of an attribute set.

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled.
No other prerequisites are required.

How to do it...
For the purpose of this recipe, let's assume that we need to create product attributes and attribute sets. The following steps will guide you through this:

1. First, log in to the backend of Magento 2 and go to Stores | Products. As we are using a vanilla setup, only system attributes and one attribute set are installed.
2. Now click on Add New Attribute and commit the following data in the Properties tab:

   Attribute Properties
   Default label: shoe_size
   Catalog Input Type for Store Owners: Dropdown
   Values Required: No

   Manage Options (values of your attribute)
   English   Admin   French   German
   4         4       35       35
   4.5       4.5     35       35
   5         5       35-36    35-36
   5.5       5.5     36       36
   6         6       36-37    36-37
   6.5       6.5     37       37
   7         7       37-38    37-38
   7.5       7.5     38       38
   8         8       38-39    38-39
   8.5       8.5     39       39

   Advanced Attribute Properties
   Scope: Global
   Unique Value: No
   Add to Column Options: Yes
   Use in Filter Options: Yes

3. As we have already set up a multi-website store that sells shoes and clothes, we stick with this. The attributes that we need to sell shoes are: shoe_size, shoe_type, width, color, gender, and occasion. Continue with the rest of the size chart accordingly (http://www.shoesizingcharts.com).
4. Click on Save and Continue Edit now, and continue on the Manage Labels tab with the following information:

   Manage Titles (Size, Color, etc.)
   English: Size
   French: Taille
   German: Größe

5. Click on Save and Continue Edit now, and continue on the Storefront Properties tab with the following information:

   Storefront Properties
   Use in Search: No
   Comparable in Storefront: No
   Use in Layered Navigation: Filterable (with results)
   Use in Search Results Layered Navigation: No
   Position: 0
   Use for Promo Rule Conditions: No
   Allow HTML Tags on Storefront: Yes
   Visible on Catalog Pages on Storefront: Yes
   Used in Product Listing: No
   Used for Sorting in Product Listing: No

6. Click on Save Attribute now and clear the cache.
If you have set up index management to run through the Magento 2 cron job, the newly created attribute will be reindexed automatically. The configuration for the additional shoe_type, width, color, gender, and occasion attributes can be downloaded at https://github.com/mage2cookbook/chapter4.

7. After creating all of the attributes, we combine them into an attribute set called Shoes. Go to Stores | Attribute Set, click on Add Attribute Set, and commit the following data:

   Edit Attribute Set Name
   Name: Shoes
   Based On: Default

8. Now click on the Add New button in the Groups section and commit the group name Shoes. The newly created group is located at the bottom of the list; you may need to scroll down before you see it. It is possible to drag and drop the group higher up in the list.
9. Now drag and drop the created attributes shoe_size, shoe_type, width, color, gender, and occasion into the group and save the configuration. Depending on your settings, the cron job notice is updated automatically.

Congratulations, you have just finished creating attributes and attribute sets in Magento 2. This can be seen in the following screenshot:

How it works…
Let's recap and find out what we did throughout this recipe. In steps 1 through 9, we created attributes and combined them in an attribute set. Attributes and attribute sets are the fundamentals of every website.

In steps 1 through 6, we created multiple attributes to define all the details about the shoes and clothes that we would like to sell. Some attributes are later used as configurable values on the frontend, while others only indicate gender or occasion.

In steps 7 through 9, we connected the attributes to the related attribute set so that, when creating a product, all the correct elements are available.

There's more…
After creating the attribute set for Shoes, we continue by creating an attribute set for Clothes.
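When the cron job is not handling indexing for you, the cache flush and reindex can also be triggered from the command line. The following sketch wraps the two Magento 2 CLI commands involved; the MAGENTO_ROOT path is an assumption, and the script reports rather than fails when run outside a Magento installation.

```shell
# Flush caches and rebuild indexes after creating attributes.
# MAGENTO_ROOT is an assumed path -- point it at your Magento 2 install.
MAGENTO_ROOT="${MAGENTO_ROOT:-/var/www/magento2}"

if [ -x "$MAGENTO_ROOT/bin/magento" ]; then
    "$MAGENTO_ROOT/bin/magento" cache:flush
    "$MAGENTO_ROOT/bin/magento" indexer:reindex
else
    # Not a Magento root: report instead of failing.
    echo "bin/magento not found under $MAGENTO_ROOT"
fi
```

Run it as the same user that owns the Magento files, so the generated cache and index files keep the correct permissions.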
Use the following attributes to create the set: color, occasion, apparel_type, sleeve_length, fit, size, length, and gender. Follow the same steps as before to create the new attribute set; you may reuse the color, occasion, and gender attributes. All of the detailed attributes can be found at https://github.com/mage2cookbook/chapter4#clothes-set. The following is a screenshot of the Clothes attribute set:

Summary
In this article, you learned how to create a Root Catalog and subcategories, and how to manage attribute sets.

For more information on Magento 2, refer to the following books by Packt Publishing:

Magento 2 Development Cookbook (https://www.packtpub.com/web-development/magento-2-development-cookbook)
Magento 2 Developer's Guide (https://www.packtpub.com/web-development/magento-2-developers-guide)

Resources for Article:
Further resources on this subject:
Social Media in Magento [article]
Upgrading from Magento 1 [article]
Social Media and Magento [article]
Getting Started with FreeRADIUS

Packt
07 Sep 2011
6 min read
After FreeRADIUS has been installed, it needs to be configured for our requirements. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, will help you to get familiar with FreeRADIUS. It assumes that you already know the basics of the RADIUS protocol.

In this article we shall:

Perform a basic configuration of FreeRADIUS and test it
Discover ways of getting help
Learn the recommended way to configure and test FreeRADIUS
See how everything fits together with FreeRADIUS

(For more resources on this subject, see here.)

Before you start
This article assumes that you have a clean installation of FreeRADIUS. You will need root access to edit the FreeRADIUS configuration files for the basic configuration and testing.

A simple setup
We start this article by creating a simple setup of FreeRADIUS with the following:

The localhost defined as an NAS device (RADIUS client)
Alice defined as a test user

After we have defined the client and the test user, we will use the radtest program to fill the role of a RADIUS client and test the authentication of Alice.

Time for action – configuring FreeRADIUS
FreeRADIUS is set up by modifying configuration files. The location of these files depends on how FreeRADIUS was installed:

If you have installed the standard FreeRADIUS packages provided with the distribution, it will be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
If you have built and installed FreeRADIUS from source using the distribution's package management system, it will also be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
If you have compiled and installed FreeRADIUS using configure, make, make install, it will be under /usr/local/etc/raddb.

The following instructions assume that the FreeRADIUS configuration directory is your current working directory:

1. Ensure that you are root in order to be able to edit the configuration files.
2. FreeRADIUS includes a default client called localhost.
This client can be used by RADIUS client programs on the localhost to help with troubleshooting and testing. Confirm that the following entry exists in the clients.conf file:

```
client localhost {
    ipaddr = 127.0.0.1
    secret = testing123
    require_message_authenticator = no
    nastype = other
}
```

3. Define Alice as a FreeRADIUS test user. Add the following lines at the top of the users file. Make sure the second and third lines are indented by a single tab character:

```
"alice" Cleartext-Password := "passme"
	Framed-IP-Address = 192.168.1.65,
	Reply-Message = "Hello, %{User-Name}"
```

4. Start the FreeRADIUS server in debug mode. Make sure that there is no other instance running by shutting it down through the startup script. We assume Ubuntu in this case:

```
$> sudo su
#> /etc/init.d/freeradius stop
#> freeradius -X
```

You can also use the more brutal method of kill -9 $(pidof freeradius) or killall freeradius on Ubuntu, and kill -9 $(pidof radiusd) or killall radiusd on CentOS and SLES, if the startup script does not stop FreeRADIUS.

5. Ensure FreeRADIUS has started correctly by confirming that the last line on your screen says the following:

```
Ready to process requests.
```

If this did not happen, read through the debug output of the FreeRADIUS server to see what problem was identified and where it may be located.

6. Authenticate Alice using the following command:

```
$> radtest alice passme 127.0.0.1 100 testing123
```

The debug output of FreeRADIUS will show how the Access-Request packet arrives and how the FreeRADIUS server responds to this request. Radtest will also show the response of the FreeRADIUS server:

```
Sending Access-Request of id 17 to 127.0.0.1 port 1812
	User-Name = "alice"
	User-Password = "passme"
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 100
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=147, length=40
	Framed-IP-Address = 192.168.1.65
	Reply-Message = "Hello, alice"
```

What just happened?
We have created a test user on the FreeRADIUS server.
We have also used the radtest command as a client to the FreeRADIUS server to test authentication. Let's elaborate on some interesting and important points.

Configuring FreeRADIUS
Configuration of the FreeRADIUS server is logically divided into different files. These files are modified to configure a certain function, component, or module of FreeRADIUS. There is, however, a main configuration file that sources the various sub-files. This file is called radiusd.conf. The default configuration is suitable for most installations; very few changes are required to make FreeRADIUS useful in your environment.

Clients
Although there are many files inside the FreeRADIUS server configuration directory, only a few require further changes. The clients.conf file is used to define clients to the FreeRADIUS server. Before an NAS can use the FreeRADIUS server, it has to be defined as a client on the server. Let's look at some points about client definitions.

Sections
A client is defined by a client section. FreeRADIUS uses sections to group and define various things. A section starts with a keyword indicating the section name, followed by enclosing brackets. Inside the enclosing brackets are various settings specific to that section. Sections can also be nested.

Sometimes the section's keyword is followed by a single word to differentiate between sections of the same type. This allows us to have different client entries in clients.conf; each client has a short name to distinguish it from the others. The clients.conf file is not the only file where client sections can be defined, although it is the usual and most logical place. The following image shows nested client definitions inside a server section:

Client identification
The FreeRADIUS server identifies a client by its IP address. If an unknown client sends a request to the server, the request will be silently ignored.
Shared secret
The client and server are also required to have a shared secret, which is used to encrypt and decrypt certain AVPs. The value of the User-Password AVP is encrypted using this shared secret. When the shared secret differs between the client and the server, the FreeRADIUS server will detect it and warn you when running in debug mode:

```
Failed to authenticate the user.
WARNING: Unprintable characters in the password.
	Double-check the shared secret on the server and the NAS!
```

Message-Authenticator
When defining a client, you can enforce the presence of the Message-Authenticator AVP in all requests. Since we will be using the radtest program, which does not include it, we disable it for localhost by setting require_message_authenticator to no.

Nastype
The nastype is set to other. The value of nastype determines how the checkrad Perl script will behave. Checkrad is used to determine whether a user is already using resources on an NAS. Since localhost has neither this function nor the need for it, we define it as other.

Common errors
If the server is down, or the packets from radtest cannot reach the server because of a firewall between them, radtest will try three times and then give up with the following message:

```
radclient: no response from server for ID 133 socket 3
```

If you run radtest as a normal user, it may complain about not having access to the FreeRADIUS dictionary file, which is required for normal operation. The way to solve this is either to change the permissions on the reported file or to run radtest as root.
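The localhost client is only useful for testing on the server itself. To register a real NAS — say, a wireless controller — you would add a section like the following to clients.conf. The short name, IP address, and secret here are illustrative assumptions; pick a strong secret of your own, and enable the Message-Authenticator check for clients that support it:

```
client wifi-controller {
    ipaddr = 192.168.1.10
    secret = 5up3rS3cr3t
    require_message_authenticator = yes
    nastype = other
}
```

After editing clients.conf, restart FreeRADIUS (or run freeradius -X again) so that the new client definition is loaded.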


Building Form Applications in Joomla! using CK Forms

Packt
05 Jan 2010
4 min read
(For more resources on Joomla!, see here.)

Add a quick survey form to your Joomla! site
A dynamic web application usually means a database at the back-end. It not only takes data from the database and displays it, but also collects data from visitors. Suppose you want to add a quick survey to your Joomla! site. Instead of searching through different survey extensions, you can quickly build a survey form using an extension named CK Forms, which is freely available for download at http://ckforms.cookex.eu/download/download.php. To build a quick survey, follow the steps below:

1. Download the CK Forms extension from http://ckforms.cookex.eu/download/download.php and install it from the Extensions | Install/Uninstall screen in the Joomla! administration area.
2. Once it is installed successfully, you will see the component CK Forms in the Components menu. Select Components | CK Forms and you get the CK Forms screen as shown below:
3. To create a new form, click on the New icon in the toolbar. That opens the CK Forms: [New] screen.
4. In the General tab, type the name and title of the form. The name should contain no spaces, but the title can be fairly long and contain spaces. Then select Yes in the Published field if you want to show the form now. In the Description field, type a brief description of the form, which will be displayed before the form.
5. In the Results tab, select Yes in the Save result field. This will store the data submitted through the form in the database and allow you to see the data later. In the Text result field, type the text that you want to display just after submission of the form. In the Redirect URL field, type the URL of the page where the user will be redirected after submitting the form.
6. In the E-mail tab, select Yes in the Email result field if you want the results of the form to be e-mailed to you. If you select Yes in this field, provide the mail-to, mail CC, and mail BCC addresses. Also specify the subject of the mail in the Mail subject field.
Select Yes in the Include fileupload file field if you want the uploaded file to be mailed too. If an e-mail field is present in the form and the visitor submits an e-mail address, you can send an acknowledgement of receiving the data through e-mail. To send such receipt messages, select Yes in the E-mail receipt field, then specify the e-mail receipt subject and e-mail receipt text. You can also include the data and file submitted via the form with this receipt e-mail; to do so, select Yes in both the Include data and Include fileupload file fields.

7. In the Advanced tab, you can specify whether you want to use a Captcha. Select Yes in the Use Captcha field, then type some tip text in the Captcha tips text field, for example, 'Please type the text shown in the image'. You can also specify error text in the Captcha custom error text field. In the Uploaded files path field, you can specify where the files uploaded by users will be stored; the directory mentioned here must be writable. You can also specify the maximum size of an uploaded file in the File uploaded maximum size field. To display the Powered by CK Forms text below the form, select Yes in the Display "Powered by" text field.
8. In the Front-end display tab, you can choose whether to display the IP address of the visitor. You can view help on working with CK Forms by clicking on the Help tab.
9. After filling in all the fields, click on the Save icon in the toolbar. That will save the form and take you back to the CK Forms screen, where you will see the newly created form listed.
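The uploads directory named in the Advanced tab must be writable by the web server before CK Forms can store files in it. A minimal sketch of preparing such a directory is shown below; the path is an assumption (CK Forms does not mandate a particular location), and on a real server you would also give ownership to the web server user.

```shell
# Create the CK Forms upload directory and make it writable.
# UPLOAD_DIR is an assumed location -- adjust it to your Joomla install,
# e.g. /var/www/html/media/ckforms_uploads.
UPLOAD_DIR="${UPLOAD_DIR:-./ckforms_uploads}"
mkdir -p "$UPLOAD_DIR"
chmod 775 "$UPLOAD_DIR"
# On a production server you would typically also run (as root):
#   chown www-data:www-data "$UPLOAD_DIR"
```

If uploads still fail after this, check that the path you typed into the Uploaded files path field matches the directory you actually created.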