
How-To Tutorials - Servers

95 Articles

Microsoft SQL Server 2008 R2 Master Data Services Overview

Packt
19 Jul 2011
5 min read
Microsoft SQL Server 2008 R2 Master Data Services

Master Data Services overview

Master Data Services (MDS) is Microsoft's Master Data Management product that ships with SQL Server 2008 R2. Much like other parts of SQL Server, such as Analysis Services (SSAS) or Reporting Services (SSRS), MDS doesn't get installed along with the database engine, but is a separate product in its own right. Unlike SSAS or SSRS, it's worth noting that MDS is only available in the Enterprise and Datacenter editions of SQL Server, and that the server must be 64-bit.

MDS is a product that has grown from acquisition: it is based on the +EDM product that Microsoft obtained when it bought the Atlanta-based company Stratature in 2007. A great deal of work has been carried out since the acquisition, including changing the user interface and putting in a web service layer. At a high level, the product has the following features:

- Entity maintenance—MDS supports data stewardship by allowing users to add, edit, and delete members. The tool is not specific to a particular industry or area, but instead is generic enough to work across a variety of subject domains.
- Modeling capability—MDS contains interfaces that allow administrative users to create data models to hold entity members.
- Hierarchy management—Relationships between members can be utilized to produce hierarchies that users can alter.
- Version management—Copies of entity data and related metadata can be archived to create an entirely separate version of the data.
- Business rules and workflow—A comprehensive business rules engine is included in order to enforce data quality and assist with data stewardship via workflow. Alerts can be sent to users by e-mail when the business rules encounter a particular condition.
- Security—A granular security model is included, where it is possible, for example, to prevent a given user from accessing certain entities, attributes, and members.

Master Data Services architecture

Technically, Master Data Services consists of the following components:

- SQL Server database—The database holds the entities, such as Customer or Product, whether they are imported from other systems or created in MDS.
- Master Data Manager—A web-based data stewardship and administration portal that, amongst many other features, allows data stewards to add, edit, and delete entity members.
- Web service layer—All calls to the database from the front end go through a WCF (Windows Communication Foundation) web service. Internet Information Services (IIS) is used to host the web services and the Master Data Manager application.
- Workflow Integration Service—A Windows service that acts as a broker between MDS and SharePoint in order to allow MDS business rules to use SharePoint workflows.
- Configuration Manager—A Windows application that allows key settings to be altered by an administrator.

The following diagram shows how the components interact with one another:

MDS SQL Server database

The MDS database uses a mix of components in order to be the master data store and to support the functionality found in the Master Data Manager, including stored procedures, views, and functions. Separate tables are created both for entities and for their supporting objects, all of which happens on the fly when a new object gets created in Master Data Manager. The data for all entities across all subject areas is stored in the same database, meaning that the database could get quite big if several subject domains are being managed.
The tables themselves are created with a code name. For example, on my local installation, the Product entity is not stored in a table called "Product" as you might expect, but in a table called "tbl_2_10_EN".

Locating entity data
The exact table that contains the data for a particular entity can be found by writing a select statement against the view called viw_SYSTEM_SCHEMA_ENTITY in the mdm schema (a short query sketch follows at the end of this section).

As well as containing a number of standard SQL Server table-valued and scalar functions, the MDS database also contains a handful of .NET CLR (Common Language Runtime)-based functions, which can be found in the mdq schema. The functions utilize the Microsoft.MasterDataServices.DataQuality assembly and are used to assist with data quality and the merging, de-duplication, and survivorship exercises that are often required in a master data management solution.

Some of the actions in MDS, such as the e-mail alerts or the loading of large amounts of data, need to happen in an asynchronous manner. Service Broker is utilized to allow users to continue to use the front end without having to wait for long-running processes to complete. Although strictly outside the MDS database, SQL Server Database Mail, which resides in the system msdb database, is used as the mechanism to send e-mail alerts to subscribing users.

In addition to the tables that hold the master data entities, the MDS database also contains a set of staging tables that should be used when importing data into MDS. Once the staging tables have been populated correctly, the staged data can be loaded into the master data store in a single batch.

Internet Information Services (IIS)

During the initial configuration of MDS, an IIS Web Application will get created within an IIS website. The name of the application that gets created is "MDS", although this can be overridden if needed. The Web Application contains a Virtual Directory that points to the physical path of <Drive>:\<install location>\WebApplication, where the various components of the Master Data Manager application are stored.

The MDS WCF service, called Service.svc, is also located in the same directory. The service can be exposed in order to provide a unified access point to any MDS functionality (for example, creating an entity or member, or retrieving all entity members) that is needed by other applications. Master Data Manager connects to the WCF service, which then connects to the database, so this is the route that should be taken by other applications, instead of connecting to the database directly.
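As a quick illustration of the "Locating entity data" tip above, the following T-SQL is a minimal sketch of how you might look up which generated table backs a given entity. The view name comes from the text; the columns it returns vary between installations, so inspect its output before filtering on specific column names.

    -- List the entities registered in MDS together with their metadata.
    -- The row for the Product entity will show the generated table name
    -- (for example, tbl_2_10_EN on the author's installation).
    SELECT *
    FROM mdm.viw_SYSTEM_SCHEMA_ENTITY;

Running this against the MDS database avoids guessing at the generated tbl_ names when you need to inspect entity data directly.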


Getting Started with Microsoft SQL Server 2008 R2

Packt
09 Jun 2011
9 min read
Microsoft SQL Server 2008 R2 Administration Cookbook: Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system

Introduction

Microsoft SQL Server 2008 opened up a new dimension within data platforms, and SQL Server 2008 R2 builds on it in the areas of the core database platform and rich Business Intelligence. On the core database side, SQL Server 2008 R2 adds enhancements with the primary goals of scalability and availability for highly transactional applications on enterprise-wide networks. On the Business Intelligence side, the new features include Master Data Management (MDM), StreamInsight, PowerPivot for Excel 2010, and Report Builder 3.0. The SQL Server 2008 R2 Installation Center includes system configuration checker rules to ensure the deployment and installation complete successfully. Further, the SQL Server setup support files help to reduce the software footprint when installing multiple SQL instances.

This article begins with the new features and enhancements in SQL Server 2008 R2 and with adding service pack features using Slipstream technology. It then explains how Master Data Services can help in designing and adopting key solutions, how to work with data-tier applications to integrate development into deployment, and how the federated servers enhancement can help to design highly scalable applications for data platforms.

Adding SQL Server 2008 R2 Service Pack features using Slipstream technology

The success of any project relies upon simple methods of implementation and a process that reduces the complexity of testing to ensure a successful outcome. This applies directly to SQL Server 2008 R2 installation, which involves some downtime, such as a reboot of servers. This is where the Slipstream process helps: it rolls the service pack and other changes into the initial installation, which keeps the upgrade simple when the only changes required are those needed for the upgrade itself. The following recipe will get you acquainted with Slipstream.

Slipstream is the process of combining all the latest patch packages into the initial installation. The major advantages of this process are the time saved and the ability to include the setup files along with the service pack and hotfixes. The single-click deployment of Slipstream merges the original source media with the updates in memory and then installs the update files, enabling multiple deployments of SQL Server 2008 R2.

Getting ready

In order to begin adding features of SQL Server using Slipstream, you need to ensure you have the following in place:

- .NET Framework 3.5 Service Pack 1: It brings improvements in the data platform area, such as the ADO.NET Entity Framework, ADO.NET Data Services, and support for new features from SQL Server 2008 onwards. You can download .NET Framework 3.5 Service Pack 1 from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ab99342f-5d1a-413d-8319-81da479ab0d7&displaylang=en.
- Windows Installer 4.5: It provides the application installation and configuration service for Windows, and works as an embedded chainer to add packages to a multiple-package transaction. The major advantage of this feature is that it enables an update to add or change a custom action, so that the custom action is called when the update is uninstalled.
  You can download the Windows Installer 4.5 redistributable package from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=5A58B56F-60B6-4412-95B9-54D056D6F9F4.
- SQL Server setup support files: They install the SQL Server Native Client, which contains the SQL OLE DB provider and SQL ODBC driver as a native dynamic link library (DLL) supporting applications that use native-code APIs to SQL Server.

How to do it...

Slipstream is a built-in ability of the Windows operating system and, since the release of SQL Server 2008 Service Pack 1, it is included with SQL Server as well. The best practice is to use the Slipstream service pack as an independent process for service pack installation, Cumulative Update patching, and hotfix patching. The key to Slipstream success is to ensure the following:

- The prerequisite steps (mentioned in the earlier sections) are completed.
- In the case of multiple language instances of SQL Server, ensure that you download the correct service pack language from http://www.microsoft.com/downloads/en/ to suit the instance.
- The service pack files are separate downloads for each platform: X86 for 32-bit, X64 for 64-bit, and IA64 for the Itanium platform.

To perform the Slipstream service pack process, complete the following steps:

1. Create two folders on the local server: SQL2K8R2_FullSP and SQL2K8R2SP.
2. Obtain the original SQL Server 2008 R2 setup source media and copy it to the SQL2K8R2_FullSP folder.
3. Download Service Pack 1 from the Microsoft Downloads site and save it in the SQL2K8R2SP folder, as per the platform architecture:
   SQLServer2008SP1-KB968369-IA64-ENU.exe
   SQLServer2008SP1-KB968369-x64-ENU.exe
   SQLServer2008SP1-KB968369-x86-ENU.exe
4. Extract the package file using Windows Explorer or a command prompt operation, as shown in the following screenshot. If the platform consists of multiple SQL instances with different architectures, for instance SQL Server 2008 R2 Enterprise Edition 64-bit as a default instance and SQL Server 2008 R2 Standard Edition as a named instance, make sure you download the relevant architecture file from http://www.microsoft.com/downloads/en/ as stated previously and extract it to the relevant folder. This is the first checkpoint and the key to ensuring the original setup media is updated correctly.
5. Copy the executable and localized resource files from the extracted location to the original source media location using the robocopy utility, which is available from Windows Server 2008 onwards. Copy all the files except the module program file that is executed by various programs and applications in the Windows operating system. It is important to ensure the correct architecture files are copied, such as the X64- and X86-related files. In addition to the initial checkpoint, this additional checkpoint ensures the correct path is specified so that it will be picked up by Slipstream during the setup of SQL Server 2008 R2 and the service pack installation.
6. The defaultsetup.ini file is the key that guides the Slipstream process to install the RTM version and the service pack files. The file is located within the SQL2K8R2_FullSP folder, per architecture. From Windows Explorer, go to the SQL2K8R2_FullSP folder and open the defaultsetup.ini file to add the correct path for the PCUSOURCE parameter. The file can be found in the SQL Server setup folder for the processor; for instance, for the 32-bit platform the file is available in the \\servername\directory\SQL Server 2008 R2\X86 folder.
The previous screenshot shows the file on the server; it is there to ensure that the matching SQL Server Product ID (license key) is supplied. If the file does not exist, there is no harm to the Slipstream process: the file can be created in the original folder defined in the following steps. It is essential that the license key (product ID) and the PCUSOURCE information are included, as follows:

    ;SQLSERVER2008 Configuration File
    [SQLSERVER2008]
    PID="??"
    PCUSOURCE=??

The PCUSOURCE value should consist of the full path of the service pack files that were copied during the initial step, in the form PCUSOURCE="{Full path}\PCU". The full path must be the absolute path to the PCU folder. For instance, if the setup files exist in a local folder, the path would be:

    <drivename>:\SQLServer2008R2_FullSP

If that folder is shared out, then the full path would be the UNC path:

    \\MyServer\SQLServer2008_FullSP1

The final step of the Slipstream process is to execute setup.exe from the SQL2K8R2_FullSP folder.

How it works...

The Slipstream steps and installation process are a two-fold movement. Slipstream uses the Remote Installation Services (RIS) technology of Windows Server to allow configuration management to be automated. The RIS process is capable of downloading the required files or images from the specified path to complete the installation process.

The SQL Server 2008 R2 setup runs a pre-check before proceeding with the installation. The System Configuration Checker (SCC) application scans the computer where SQL Server will be installed and checks for a set of conditions that would prevent a successful installation of SQL Server services. Before the setup starts the SQL Server installation wizard, the SCC executes as a background process and retrieves the status of each item. It then compares the results with the required conditions and provides guidance for the removal of blocking issues, using a set of check parameters.

The following are some of the additional checks that SCC performs to determine whether the SQL Server editions in an in-place upgrade path are valid:

- Checks the system databases for features that are not supported in the SQL Server edition to which you are upgrading.
- Checks that neither SQL Server 7.0 nor SQL Server 7.0 OLAP Services is installed on the server; SQL Server 2008 and higher versions are not supported on a server that has SQL Server 7.0.
- Checks all user databases for features that are not supported by the SQL Server edition.
- Checks whether the SQL Server service can be restarted.
- Checks that the SQL Server service is not set to Disabled.
- Checks whether the selected instance of SQL Server meets the upgrade matrix requirements.
- Checks whether SQL Server Analysis Services is being upgraded to a valid edition.
- Checks whether the edition of the selected instance of SQL Server is supported for the 'Allowable Upgrade Paths'.

There's more...

Once the prerequisite process for Slipstream is completed, we need to ensure that the installation of SQL Server 2008 R2, the service pack, and the hotfix patches are applied with the setup steps.
To confirm that the workflow is followed correctly, from the SQL2K8R2_FullSP folder double-click on the setup.exe file to continue the installation of the RTM version, service pack, and required hotfix patches. While continuing the setup, at the Installation Rules screen the SCC rule checks the Update Setup Media Language Compatibility value, which should pass in order to proceed, as shown in the following screenshot. If you failed to see the update setup media language rule, the same information can be obtained once the installation process is completed. The complete steps and final result of setup are logged as a text file under the folder C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log. The log file is saved as Summary_<MachineName>_Date_Time.txt, for example, 'Summary_DBiASSQA_20100708_200214.txt'.


Migration from Apache to Lighttpd

Packt
22 Oct 2009
7 min read
Now, starting from a working Apache installation, what can Lighttpd offer us?

- Improved performance in most cases (as in more hits per second)
- Reduced CPU time and memory usage
- Improved security

Of course, the move to Lighttpd is not a small one, especially if our Apache configuration makes use of its many features. Systems tied into Apache as a module may make the move hard or even impossible without porting the module to a Lighttpd module or moving the functionality into CGI programs, if possible. We can ease the pain by moving in small steps. The following descriptions assume that we have one Apache instance running on one hardware instance, but we can scale the method by repeating it for every hardware instance.

When not to migrate
Before we start this journey, we need to know that our hardware and operating system support Lighttpd, that we have root access (or access to someone who has), and that the system has enough space for another Lighttpd installation (yes, I know, Lighttpd should reduce space concerns, but I have seen Apache installations munching away entire RAID arrays). The migration probably only makes sense if we plan on moving a big percentage of traffic to Lighttpd. We also might make extensive use of Apache modules, which means a complete migration would involve finding or writing suitable substitutes for Lighttpd.

Adding Lighttpd to the mix

Install Lighttpd on the system that Apache runs on. Find an unused port (refer to a port scanner if needed) to set server.port to. For example, if port 4080 is unused on our system, we would look for server.port in our Lighttpd configuration and change it to:

    server.port = 4080

If we want to use SSL, we should change all occurrences of port 443 to another free port, say 4443. We assume our Apache is answering requests on HTTP port 80. Now let's use this Lighttpd instance as a proxy for our Apache by adding the following configuration:

    server.modules = (
        # ...
        "mod_proxy",
        # ...
    )
    # ...
    proxy.server = (
        "" => (                   # proxy everything
            host => "127.0.0.1",  # localhost
            port => "80"
        )
    )

This tells our Lighttpd to proxy all requests to the server that answers on localhost, port 80, which happens to be our Apache server. Now, when we start our Lighttpd and point our browser to http://localhost:4080/, we should see the same thing that our Apache is returning.

What is a proxy?
A proxy stands in front of another object, simulating the object by relaying all requests to it. A proxy can change requests on the fly, filter requests, and so on. In our case, Lighttpd is the web server to the outside, whilst Apache will still get all requests as usual.

Excursion: mod_proxy

mod_proxy is the module that allows Lighttpd to relay requests to another web server. It is not to be confused with mod_proxy_core (of Lighttpd 1.5.0), which provides a basis for other interfaces such as CGI. Usually, we want to proxy only a specific subset of requests; for example, we might want to proxy requests for Java server pages to a Tomcat server. This could be done with the following proxy directive:

    proxy.server = (
        ".jsp" => (
            host => "127.0.0.1",
            port => "8080"    # given our Tomcat is on port 8080
        )
    )

Thus the Tomcat server only serves JSPs, which is what it was built to do, whilst our Lighttpd does the rest. Or we might have another server which we want to include in our web presence at some given directory:

    proxy.server = (
        "/somepath" => (
            host => "127.0.0.1",
            port => "8080"
        )
    )

Assuming the server is on port 8080, this will do the trick.
Now http://localhost/somepath/index.html will be the same as http://localhost:8080/index.html.

Reducing the Apache load

Note that, like most Lighttpd directives, proxy.server can be moved into a selector, thereby reducing its reach. This way, we can reduce the set of files Apache has to touch in a phased manner. For example, YouTube™ uses Lighttpd to serve its videos. Usually, we want to make Lighttpd serve static files such as images, CSS, and JavaScript, leaving Apache to serve the dynamically generated pages.

Now, we have two options: we can either filter the extensions we want Apache to handle, or we can filter the addresses we want Lighttpd to serve without asking Apache. Actually, the first can be done in two ways. Assuming we want to give all addresses ending with .cgi and .php to Apache, we could either use the matching of proxy.server:

    proxy.server = (
        ".cgi" => ( host => "127.0.0.1", port => "8080" ),
        ".php" => ( host => "127.0.0.1", port => "8080" )
    )

or match by selector:

    $HTTP["url"] =~ "\.(cgi|php)$" {
        proxy.server = ( "" => ( host => "127.0.0.1", port => "8080" ) )
    }

The second way also allows negative filtering and filtering by regexp — just use !~ instead of =~.

mod_perl, mod_php, and mod_python

There are no Lighttpd modules to embed scripting languages into Lighttpd (with the exception of mod_magnet, which embeds Lua) because this is simply not the Lighttpd way of doing things. Instead, we have the CGI, SCGI, and FastCGI interfaces to outsource this work to the respective interpreters.

Most mod_perl scripts are easily converted to FastCGI using CGI::Fast. Usually, our mod_perl script will look a lot like the following script:

    use CGI;
    my $q = CGI->new;
    initialize();        # this might need to be done only once
    process_query($q);   # this should be done per request
    print response($q);  # this, too

Using the easiest way to convert to FastCGI:

    use CGI::Fast;                    # instead of CGI
    while (my $q = CGI::Fast->new) {  # get requests in a while-loop
        initialize();
        process_query($q);
        print response($q);
    }

If this runs, we may try to put the initialize() call outside of the loop to make our script run even faster than under mod_perl. However, this is just the basic case. There are mod_perl scripts that manipulate the Apache core or use special hooks, so these scripts can get a little more complicated to migrate.

Migrating from mod_php to php-fcgi is easier — we do not need to change the scripts, just the configuration. This means that we do not get the benefits of an obvious request loop, but we can work around that by setting some global variables only if they are not already set. The security benefit is obvious. Even for Apache, there are some alternatives to mod_php which try to provide more security, often with bad performance implications.

mod_python can be a little more complicated, because Apache calls out to the Python functions directly, converting form fields to function arguments on the fly. If we are lucky, our Python scripts implement the WSGI (Web Server Gateway Interface). In this case, we can just use a WSGI-FastCGI wrapper. Looking on the web, I already found two: one standalone (http://svn.saddi.com/py-lib/trunk/fcgi.py), and one that is part of the PEAK project (http://peak.telecommunity.com/DevCenter/FrontPage). Otherwise, Python usually has excellent support for SCGI. As with mod_perl, there are some internals that have to be moved into the configuration (for example dynamic 404 pages; the directive for this is server.error-handler-404, which can also point to a CGI script).
However, for basic scripts, we can use SCGI (either from http://www.mems-exchange.org/software/scgi/ or as a python-only version from http://www.cherokee-project.com/download/pyscgi/). We also need to change import cgi to import scgi and change CGIHandler and CGIServer to SCGIHandler and SCGIServer, respectively.


Optimizing Lighttpd

Packt
16 Oct 2009
5 min read
If our Lighttpd runs on a multi-processor machine, it can take advantage of that by spawning multiple versions of itself. Also, most Lighttpd installations will not have a machine to themselves; therefore, we should not only measure the speed but also the resource usage.

Optimizing Compilers

gcc with the usual settings (-O2) already does quite a good job of creating a fast Lighttpd executable. However, -O3 may nudge the speed up a tiny little bit (or slow it down, depending on our system) at the cost of a bigger executable. If there are optimizing compilers for our platform (for example, Intel and Sun Microsystems each have compilers that optimize for their CPUs), they might even give another tiny speed boost. If we do not want to invest money in commercial compilers, but want to maximize what gcc has to offer, we can use Acovea, an open source project that employs genetic algorithms and trial-and-error to find the best individual settings for gcc on our platform. Get it from http://www.coyotegulch.com/products/acovea/.

Finally, optimization should stop where security (or, to a lesser extent, maintainability) is compromised. A slower web server that does what we want is way better than a fast web server obeying the commands of a script kiddie.

Before we optimize away blindly, we had better have a way to measure the "speed". A useful measure most administrators will agree with is "served requests per second". http_load is a tool to measure requests per second. We can get it from http://www.acme.com/software/http_load/. http_load is very simple: give it a site to request, and it will flood the site with requests, measuring how many are served in a given amount of time. This allows a very simplistic approach to optimizing Lighttpd: tweak some settings, run http_load with a sufficiently realistic scenario, and see if our Lighttpd handles more or fewer requests than before.

We do not yet know where to spend time optimizing. For this, we need to make use of the timing log instrumentation that has been included with Lighttpd 1.5.0, or even use a profiler to see where the most time is spent. However, there are some "big knobs" to turn that can increase performance, and http_load will help us find a good setting.

Installing http_load

http_load can be downloaded as a source .tar file (which was named .tar.gz for me, though it is not gzipped). The version as of this writing is 12Mar2006. Unpack it to /usr/src (or another path by changing the /usr/src) with:

    $ cd /usr/src && tar xf /path/to/http_load-12Mar2006.tar.gz
    $ cd http_load-12Mar2006

We can optionally add SSL support; we may skip this if we do not need it. To add SSL support we need to find out where the SSL libs and includes are. I assume they are in /usr/lib and /usr/include, respectively, but they may or may not be the same on your system. Additionally, there is an "SSL tree" directory that is usually in /usr/ssl or /usr/local/ssl and contains certificates, revocation lists, and so on. Open the Makefile with a text editor and look at lines 11 to 14, which read:

    #SSL_TREE = /usr/local/ssl
    #SSL_DEFS = -DUSE_SSL
    #SSL_INC = -I$(SSL_TREE)/include
    #SSL_LIBS = -L$(SSL_TREE)/lib -lssl -lcrypto

Change them to the following (assuming the given directories are correct):

    SSL_TREE = /usr/ssl
    SSL_DEFS = -DUSE_SSL
    SSL_INC = -I/usr/include
    SSL_LIBS = -L/usr/lib -lssl -lcrypto

Now compile and install http_load with the following command:

    $ make all install

Now we're all set to load-test our Lighttpd.
Running http_load tests

We just need a URL file, which contains URLs that lead to the pages our Lighttpd serves. http_load will then fetch these pages at random, for as long as, or as often as, we ask it to. For example, we may have a front page with links to different articles. We can start by putting a link to our front page into the URL file, which we will name urls; for example, http://localhost/index.html. Note that the file contains just URLs, nothing less, nothing more (for example, http_load does not support blank lines). Now we can make our first test run:

    $ http_load -parallel 10 -seconds 60 urls

This will run for one minute and try to keep 10 connections open in parallel. Let's see if our Lighttpd keeps up:

    343 fetches, 10 max parallel, 26814 bytes, in 60 seconds
    78.1749 mean bytes/connection
    5.71667 fetches/sec, 446.9 bytes/sec
    msecs/connect: 290.847 mean, 9094 max, 15 min
    msecs/first-response: 181.902 mean, 9016 max, 15 min
    HTTP response codes: code 200 - 327

As we can see, it does. http_load needs one of the two start conditions and one of the two stop conditions plus a URL file to run. We can create the URL file manually or crawl our document root(s) with the following Python script called crawl.py:

    #!/usr/bin/python
    # run from document root, pipe into URLs file. For example:
    # /path/to/docroot$ crawl.py > urls
    import os, re, sys

    hostname = "http://localhost/"
    for (root, dirs, files) in os.walk("."):
        for name in files:
            filepath = os.path.join(root, name)
            print re.sub("./", hostname, filepath)

You can download the crawl.py file from http://www.packtpub.com/files/code/2103_Code.zip. Capture the output into a file to use as the URL file. For example, start the script from within our document root with:

    $ python crawl.py > urls

This will give us a urls file, which will make http_load try to get all files (given that we have specified enough requests). Then we can start http_load as discussed in the preceding example.


How to Build a Recommender Server with Mahout and Solr

Pat Ferrel
24 Sep 2014
8 min read
In the last post, Mahout on Spark: Recommenders, we talked about creating a co-occurrence indicator matrix for a recommender using Mahout. The goals for a recommender are many, but first it must be fast and must make "good" personalized recommendations. We'll start with the basics and improve on the "good" part as we go. As we saw last time, co-occurrence or item-based recommenders are described by:

    rp = hp[AtA]

Calculating [AtA]

We needed some more interesting data first, so I captured video preferences by mining the web. The target demo would be a Guide to Online Video. Input was collected for many users by simply logging their preferences to CSV text files:

    906507914,dislike,mars_needs_moms
    906507914,dislike,mars_needs_moms
    906507914,like,moneyball
    906507914,like,the_hunger_games
    906535685,like,wolf_creek
    906535685,like,inside
    906535685,like,le_passe
    576805712,like,persepolis
    576805712,dislike,barbarella
    576805712,like,gentlemans_agreement
    576805712,like,europa_report
    576805712,like,samsara
    596511523,dislike,a_hijacking
    596511523,like,the_kings_speech
    …

The technique for actually using multiple actions hasn't been described yet, so for now we'll use the dislikes in the application to filter out recommendations and use only the likes to calculate recommendations. That means we need to use only the "like" preferences. The Mahout 1.0 version of spark-itemsimilarity can read these files directly and filter out all but the lines with "like" in them:

    # --filter1 like keeps only the "like" lines; -fc and -ic give the
    # columns holding the filter value and the item ID
    mahout spark-itemsimilarity -i root-input-dir -o output-dir --filter1 like -fc 1 -ic 2 --omitStrength

This will give us output like this:

    the_hunger_games_catching_fire<tab>holiday_inn 2_guns superman_man_of_steel five_card_stud district_13 blue_jasmine this_is_the_end riddick ...
    law_abiding_citizen<tab>stone ong_bak_2 the_grey american centurion edge_of_darkness orphan hausu buried ...
    munich<tab>the_devil_wears_prada brothers marie_antoinette singer brothers_grimm apocalypto ghost_dog_the_way_of_the_samurai ...
    private_parts<tab>puccini_for_beginners finding_forrester anger_management small_soldiers ice_age_2 karate_kid magicians ...
    world-war-z<tab>the_wolverine the_hunger_games_catching_fire ghost_rider_spirit_of_vengeance holiday_inn the_hangover_part_iii ...

This is a tab-delimited file with a video itemID followed by a space-delimited list of similar videos. The similar-video list may contain some surprises, because here "similarity" means "liked by similar people". It doesn't mean the videos were similar in content or genre, so don't worry if they look odd. We'll use another technique to make "on subject" recommendations later. Anyone familiar with the Mahout first-generation recommender will notice right away that we are using IDs that have meaning to the application, whereas before Mahout required its own integer IDs.

A fast, scalable similarity engine

In Mahout's first-generation recommenders, all recommendations were calculated for all users. This meant that new users would have to wait for a long-running batch job before they saw recommendations. In the Guide demo app, we want to make good recommendations to new users and use new preferences in real time. We have already calculated [AtA], indicating item similarities, so we need a real-time method for the final part of the equation rp = hp[AtA]. Capturing hp is the first task, and in the demo we log all actions to a database in real time. This may have scaling issues but is fine for a demo.
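As a purely illustrative sketch of that logging step, the table below mirrors the three CSV columns shown earlier plus a timestamp. The table and column names are hypothetical, not taken from the demo, and it is written as T-SQL only for consistency with the rest of this collection:

    -- Hypothetical action log; one row per user preference event.
    CREATE TABLE user_action_log (
        user_id   BIGINT       NOT NULL,  -- e.g. 906507914
        action    VARCHAR(16)  NOT NULL,  -- 'like' or 'dislike'
        item_id   VARCHAR(128) NOT NULL,  -- e.g. 'moneyball'
        logged_at DATETIME     NOT NULL DEFAULT GETDATE()
    );

    -- The user's current preference history hp is then just a query away:
    SELECT item_id
    FROM user_action_log
    WHERE user_id = 906507914 AND action = 'like';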
Now we will make use of the "multiply as similarity" idea we introduced in the first post. Multiplying hp[AtA] can be done with a fast similarity engine—otherwise known as a search engine. At their core, search engines are primarily similarity engines that index textual data (AKA a matrix of token vectors) and take text as the query (a token vector). Another way to look at this is that search engines find by example—they are optimized to find a collection of items by similarity to the query. We will use the search engine to find the most similar indicator vectors in [AtA] to our query hp, thereby producing rp. Using this method, rp will be the list of items returned from the search—row IDs in [AtA].

spark-itemsimilarity is designed to create output that can be directly indexed by search engines. In the Guide demo we chose to create a catalog of items in a database and to use Solr to index columns in the database. Both Solr and Elasticsearch have highly scalable, fast engines that can perform searches on database columns or a variety of text formats, so you don't have to use a database to store the indicators. We loaded the indicators along with some metadata about the items into the database like this:

    itemID   foreign-key        genres          indicators
    123      world-war-z        sci-fi action   the_wolverine …
    456      captain_phillips   action drama    pi when_com…
    …

So, the foreign-key is our video item ID from the indicator output and the indicator is the space-delimited list of similar video item IDs. We must now set the search engine to index the indicators. This integration is usually pretty simple and depends on what database you use, or whether you are storing the entire catalog in files (leaving the database out of the loop). Once you've triggered the indexing of your indicators, we are ready for the query. The query will be a preference history vector consisting of the same tokens/video item IDs you see in the indicators. For a known user these should be logged and available, perhaps in the database, but for a new user we'll have to find a way to encourage preference feedback.

New users

The demo site asks a new user to create an account and run through a trainer that collects important preferences from the user. We can probably leave the details of how to ask for "important" preferences for later; suffice to say, we clustered items and took popular ones from each cluster so that users were more likely to have seen them. From this we see that the user liked:

    argo django iron_man_3 pi looper …

Whether you are responding to a new user or just accounting for the most recent preferences of returning users, recency is very important. Using the previous IDs as a query on the indicator field of the database returns recommendations, even though the new user's data was not used to train the recommender. Here's what we get:

The first line shows the result of the search engine query for the new user. The trainer on the demo site has several pages of examples to rate, and the more you rate the better the recommendations become, as one would expect, but these look pretty good given only 9 ratings. I can make a value judgment because they were rated by me. In a small sampling of 20 people using the site, after having them complete the entire 20 pages of training examples, we asked them to tell us how many of the recommendations on the first line were things they liked or would like to see. We got 53-90% right. Only a few people participated and your data will vary greatly, but this was at least some validation.
The second line of recommendations, and several more below it, are calculated using a genre, and this begins to show the power of the search engine method. In the trainer I picked movies whose number 1 genre was "drama". If you have the search engine index both indicators and genres, you can combine indicator and genre preferences in the query. To produce line 1 the query was:

    Query:
      indicator field: "argo django iron_man_3 pi looper …"

To produce line 2 the query was:

    Query:
      indicator field: "argo django iron_man_3 pi looper …"
      genre field: "drama"; boost: 5

The boost is used to skew results towards a field. In practice this will give you mostly matching genres, but it is not the same as a filter, which can also be used if you want a guarantee that the results will be from "drama".

Conclusion

Combining a search engine with Mahout created a recommender that is extremely fast and scalable but also seamlessly blends results using collaborative filtering data and metadata. Using metadata boosting in the query allows us to skew results in a direction that makes sense. Using multiple fields in a single query gives us even more power than this example shows: it allows us to mix in different actions. Remember the "dislike" action that we discarded? One simple and reasonable way to use it is to filter results by things the user disliked, and the demo site does just that. But we can go even further; we can use dislikes in a cross-action recommender. Certain of the user's dislikes might even predict what they will like, but that requires us to go back to the original equation, so we'll leave it for another post.

About the author

Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site at https://finderbots.com or @occam on Twitter.


Cross-premise Connectivity

Packt
08 Feb 2013
14 min read
Evolving remote access challenges

In order to increase the productivity of employees, every company wants to give employees access to their applications from anywhere. Users are no longer tied to working from a single location. They need access to their data from any location and from any device they have, and they want to access their applications irrespective of where the applications are hosted. Allowing this remote connectivity to increase productivity is in constant conflict with keeping the edge secure: as we allow more applications through, the edge device becomes porous, and keeping the edge secure is a constant battle for administrators.

Network administrators have to ensure that remote access is always available to their remote users and that those users can access their applications in the same way as they would in the office. Otherwise, users would need to be trained on how to access an application while they are remote, and this is bound to increase the support cost of maintaining the infrastructure. Another important challenge for the network administrator is the ability to manage the remote connections and ensure they are secure.

Migration to dynamic cloud

In a modern enterprise, there is a constant need to optimize the infrastructure based on workload. Most of the time we want to plan for the correct capacity rather than taking a bet on the number of servers that are needed for a given workload. If the business needs are seasonal, we need to bet on a certain level of infrastructure expense. If we don't get the expected traffic, the investment may go underutilized. At the same time, if the incoming traffic volume is too high, the organization may lose the opportunity to generate additional revenue. In order to reduce the risk of losing additional revenue and at the same time reduce large capital expenses, organizations may deploy virtualized solutions. However, this still requires the organization to take a bet on the initial infrastructure. What if the organization could deploy their infrastructure based on need? Then they could expand on demand. This is where moving to the cloud helps to turn capital expense (CapEx) into operational expense (OpEx). If you tell your finance department that you are moving to an OpEx model for your infrastructure needs, you will definitely be greeted by cheers and offered cake (or at least, a fancy calculator).

The needs of modern data centers

As we said, reducing capital expense is on everyone's to-do list these days, and being able to invest in your infrastructure based on business needs is key to achieving that goal. If your company is expecting a seasonal workload, you would probably want to be able to dynamically expand your infrastructure based on need. Moving your workloads to the cloud allows you to do this. If you are dealing with sensitive customer data or intellectual property, you probably want to be able to maintain secure connectivity between your premises and the cloud. You might also need to move workloads between your premises and the cloud as your business demands, and so establishing secure connectivity between corporate and the cloud must be dynamic and transparent to your users. That means the gateway you use at the edge of your on-premise network and the gateway your cloud provider uses must be compatible. Another consideration is that you must also be able to establish or tear down the connection quickly, and it needs to be able to recover from outages very quickly.
In addition, today's users are mobile and the data they access is also dynamic (the data itself may move from your on-premise servers to the cloud and back). Ideally, users need not know where the data is or from where they are accessing it, and they should not have to change their behavior depending on where they are and where the data resides. All these are the needs of the modern data center. Things may get even more complex if you have multiple branch offices and multiple cloud locations.

Dynamic cloud access with URA

Let's see how these goals can be met with Windows Server 2012. In order for mobile users to connect to the organizational network, they can use either DirectAccess or VPN. When you move resources to the cloud, you need to maintain the same address space for the resources so that your users are impacted by the change as little as possible. When you move a server or an entire network to the cloud, you can establish a Site-to-Site (S2S) connection through an edge gateway.

Imagine you have a global deployment with many remote sites, a couple of public cloud data centers, and some of your own private cloud. As the number of these remote sites grows, the number of Site-to-Site links needed will grow exponentially. If you have to maintain a gateway server or device for the Site-to-Site connections and another gateway for remote access such as VPN or DirectAccess, the maintenance cost associated with it can increase dramatically. One of the most significant new abilities in Windows Server 2012 Unified Remote Access is the combination of DirectAccess and the traditional Routing and Remote Access Server (RRAS) in the same Remote Access role. With this, you can now manage all your remote access needs from one unified console.

As we've seen, only certain versions of Windows (Windows 7 Enterprise and Ultimate, Windows 8 Enterprise) can be DirectAccess clients, but what if you have to accommodate some Vista or XP clients, or third-party clients that need CorpNet connectivity? With Windows Server 2012, you can enable the traditional VPN from the Remote Access console and allow the down-level and third-party clients to connect via VPN. The Unified Remote Access console also allows the remote access clients to be monitored from the same console. This is very useful, as you can now configure, manage, monitor, and troubleshoot all remote access needs from the same place.

In the past, you might have used Site-to-Site demand-dial connections to connect and route to your remote offices, but until now demand-dial Site-to-Site connections used either the Point-to-Point Tunneling Protocol (PPTP) or the Layer Two Tunneling Protocol (L2TP). These involved manual steps that needed to be performed from the console. They also produced challenges working with similar gateway devices from other vendors, and because the actions had to be performed through the console, they did not scale well once the number of Site-to-Site connections grew beyond a certain number. Some products attempted to overcome the limits of the built-in Site-to-Site options in Windows. For example, Microsoft's Forefront Threat Management Gateway 2010 used the Internet Key Exchange (IKE) protocol, which allowed it to work with other gateways from Cisco and Juniper. However, the limit of that solution was that, in case one end of the IPsec connection failed for some reason, Dead Peer Detection (DPD) took some time to realize the failure.
The time it took to recover or fall back to an alternate path caused some applications communicating over the tunnel to fail, and this disruption to the service could cause significant losses.

Thanks to the ability to combine both VPN and DirectAccess in the same box, as well as the ability to add the Site-to-Site IPsec connection in the same box, Windows Server 2012 allows you to reduce the number of unique gateway servers needed at each site. Also, the Site-to-Site connections can be established and torn down with a simple PowerShell command, making it easier to manage multiple connections. The S2S tunnel-mode IPsec link uses the industry-standard IKEv2 protocol for IPsec negotiation between the endpoints, which is great because this protocol is the current interoperability standard for almost any VPN gateway. That means you don't have to worry about what the remote gateway is; as long as it supports IKEv2, you can confidently create the S2S IPsec tunnel to it and establish connectivity easily, with a much better recovery speed in case of a connection drop.

Now let's look at the options and see how we can quickly and effectively establish connectivity using URA. Let's start with a headquarters location and a branch office location, and then look at the high-level needs and steps to achieve the desired connectivity. Since this involves just two locations, the typical need is that clients in either location should be able to connect to the other site. The connection should be secure, and we need the link only when there is traffic to flow between the two locations. We don't want to use dedicated links such as T1 or fractional T1 lines, as we do not want to pay the high cost associated with them. Instead, we can use our pre-existing Internet connection and establish Site-to-Site IPsec tunnels that provide a secure way to connect the two locations. We also want users in public Internet locations to be able to access any resource in either location.

We have already seen how DirectAccess can provide seamless connectivity to the organizational network for domain-joined Windows 7 or Windows 8 clients, and how to set up a multisite deployment. We also saw how multisite allows Windows 8 clients to connect to the nearest site, while Windows 7 clients connect to the site they are configured to connect to. Because the same URA server can also be configured as an S2S gateway, and the IPsec tunnel allows both IPv4 and IPv6 traffic to flow through it, DirectAccess clients in public Internet locations can connect to any one of the sites and also reach the remote site through the Site-to-Site tunnel. Adding a site in the cloud is very similar to adding a branch office location, and it can be either your private cloud or a public cloud. Typically, the cloud service provider provides its own gateway and will allow you to build your infrastructure behind it. The provider will typically provide an IP address for you to use as a remote endpoint, and will allow you to connect to your resources by NATting the traffic to your resource in the cloud.

Adding a cloud location using Site-to-Site

In the following diagram, we have a site called Headquarters with a URA server (URA1) at the edge. The clients on the public Internet can access resources in the corporate network through DirectAccess or through the traditional VPN, using URA1 at the edge.
We have a cloud infrastructure provider, and we need to build our CloudNet in the cloud and provide connectivity between the corporate network at the Headquarters and CloudNet in the cloud. The clients on the Internet should be able to access resources in the corporate network or CloudNet, and the connection should be transparent to them. The CloudGW is the typical edge device in the cloud that your cloud provider owns; it is used to control and monitor the traffic flow to each tenant.

Basic setup of cross-premise connectivity

The following steps outline the various options and scenarios you might want to configure:

1. Ask your cloud provider for the public IP address of the cloud gateway they provide.
2. Build a virtual machine running Windows Server 2012 with the Remote Access role and place it in your cloud location. We will refer to this server as URA2.
3. Configure URA2 as an S2S gateway with two interfaces:
   - The interface towards the CloudGW will be the IPsec tunnel endpoint for the S2S connection. The IP address for this interface could be a public IPv4 address assigned by your cloud provider or a private IPv4 address of your choice. If it is a private IPv4 address, the provider should send all the IPsec traffic for the S2S connection from the CloudGW to the Internet-facing interface of URA2, and the remote tunnel endpoint configured in URA1 for the remote site will be the public address that you got in step 1. If the Internet-facing interface of URA2 is a routable public IPv4 address, the remote tunnel endpoint configured in URA1 for the remote site will be this public address of URA2.
   - The second interface on URA2 will be a private address that you are going to use in your CloudNet towards the servers you are hosting there.
4. Configure the cloud gateway to allow the S2S connections to your gateway (URA2).
5. Establish S2S connectivity between URA2 and URA1. This will allow you to route all traffic between CloudNet and CorpNet.

The preceding steps provide full access between CloudNet and CorpNet and also allow your DirectAccess and VPN clients on the Internet to access any resource in CorpNet or CloudNet without having to worry about where the resource is.

DirectAccess entry point in the cloud

Building on the basic setup, you can further extend the capabilities of clients on the Internet to reach the CloudNet directly without having to go through the CorpNet. To achieve this, we can add a URA server in the CloudNet (URA3). Here is an overview of the steps to achieve this (assuming your URA server URA3 is already installed with the Remote Access role):

1. Place a domain controller in CloudNet. It can communicate with your domain through the Site-to-Site connection to do Active Directory replication and perform just like any other domain controller.
2. Enable the multisite configuration on your primary URA server (URA1).
3. Add URA3 as an additional entry point. It will be configured as a URA server with the single NIC topology.
4. Register the IP-HTTPS site name in DNS for URA3.
5. Configure your cloud gateway to forward the HTTPS traffic to URA2, and in turn to URA3, to allow clients to establish IP-HTTPS connections.

Using this setup, clients on the Internet can connect to either entry point, URA1 or URA3. No matter which they choose, they can access all resources either directly or via the Site-to-Site tunnel.
Authentication

The Site-to-Site connection between the two endpoints (URA1 and URA2) can be configured with a Pre-Shared Key (PSK) for authentication, or you can further secure the IPsec tunnel with certificate authentication. The certificates needed for certificate authentication are computer certificates that match the names of the endpoints. You could use either certificates issued by a third-party provider or certificates issued from your internal Certificate Authority (CA). As with any certificate authentication, the two endpoints need to trust the certificates used at either end, so you need to make sure the certificate of the root CA is installed on both servers. To make things simpler, you can start with a simple PSK-based tunnel and, once the basic scenario works, change the authentication to computer certificates. We will see the steps to use both PSK and certificates in the detailed steps in the following section.

Configuration steps

Even though the Site-to-Site IPsec tunnel configuration is possible via the console, we highly recommend that you get familiar with the PowerShell commands for this configuration, as they make it a lot easier to manage multiple configurations. If you have multiple remote sites, having to set up and tear down each site based on workload demand is not scalable when configured through the console.

Summary

We have seen how, by combining the DirectAccess and Site-to-Site VPN functionalities, we are now able to use one single box to provide all remote access features. With virtual machine live migration options, you can move any workload from your corporate network to the cloud network and back over the S2S connection and keep the same names for the servers. This way, clients from any location can access your applications in the same way as they would if they were on the corporate network.

Resources for Article:

Further resources on this subject:
- Creating and managing user accounts in Microsoft Windows SBS 2011 [Article]
- Disaster Recovery for Hyper-V [Article]
- Windows 8 and Windows Server 2012 Modules and Cmdlets [Article]
Packt
24 Oct 2013
8 min read

Let's Breakdown the Numbers

(For more resources related to this topic, see here.) John Kirkland is an awesome "accidental" SQL Server DBA for Red Speed Bicycle LLC—a growing bicycle startup based in the United States. The company distributes bikes, bicycle parts, and accessories to various distribution points around the world. To say that they are performing well financially is an understatement. They are booming! They've been expanding their business to Canada, Australia, France, and the United Kingdom in the last three years. The company has upgraded their SQL Server 2000 database recently to the latest version of SQL Server 2012. Linda, from the Finance Group, asked John if they can migrate their Microsoft Access Reports into the SQL Server 2012 Reporting Services. John installed SSRS 2012 in a native mode. He decided to build the reports from the ground up so that the report development process would not interrupt the operation in the Finance Group. There is only one caveat; John has never authored any reports in SQL Server Reporting Services (SSRS) before. Let's give John a hand and help him build his reports from the ground up. Then, we'll see more of his SSRS adventures as we follow his journey throughout this article. Here's the first report requirement for John: a simple table that shows all the sales transactions in their database. Linda wants to see a report with the following data: Date Sales Order ID Category Subcategory Product Name Unit Price Quantity Line Total We will build our report, and all succeeding reports in this article, using the SQL Server Data Tools (SSDT). SSDT is Visual Studio shell which is an integrated environment used to build SQL Server database objects. You can install SSDT from the SQL Server installation media. In June 2013, Microsoft released SQL Server Data Tools-Business Intelligence (SSDTBI). SSDTBI is a component that contains templates for SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) for Visual Studio 2012. SSDTBI replaced Business Intelligence Development Studio (BIDS) from the previous versions of SQL Server. You have two options in creating your SSRS reports: SSDT or Visual Studio 2012. If you use Visual Studio, you have to install the SSDTBI templates. Let's create a new solution and name it SSRS2012Blueprints. For the following exercises, we're using SSRS 2012 in native mode. Also, make a note that we're using the AdventureWorks2012 Sample database all throughout this article unless otherwise indicated. You can download the sample database from CodePlex. Here's the link: http://msftdbprodsamples.codeplex.com/releases/view/55330. Defining a data source for the project Now, let's define a shared data source and shared dataset for the first report. A shared dataset and data source can be shared among the reports within the project: Right-click on the Shared Data Sources folder under the SSRS2012Bueprints solution in the Solution Explorer window, as shown in the following illustration. If the Solution Explorer window is not visible, access it by navigating to Menu | View | Solution Explorer, or press Ctrl + Alt + L: Select Add New Data Source which displays the Shared Data Source Properties window. Let's name our data source DS_SSRS2012Blueprint. For this demonstration, let's use the wizard to create the connection string. As a good practice, I use the wizard for setting up connection strings for my data connections. 
Aside from convenience, I'm quite confident that I'm getting the right connections that I want. Another option for setting the connection is through the Connection Properties dialog box, as shown in the next screenshot. Clicking on the Edit button next to the connection string box displays the Connection Properties dialog box: Shared versus embedded data sources and datasets: as a good practice, always use shared data sources and shared datasets where appropriate. One characteristic of a productive development project is using reusable objects as much as possible. For the connection, one option is to manually specify the connection string as shown: Data Source=localhost;Initial Catalog=AdventureWorks2012 We may find this option as a convenient way of creating our data connections. But if you're new to the report environment you're currently working on, you may find setting up the connection string manually more cumbersome than setting it up through the wizard. Always test the connection before saving your data source. After testing, click on the OK buttons on both dialog boxes. Defining the dataset for the project Our next step is to create the shared dataset for the project. Before doing that, let's create a stored procedure named dbo.uspSalesDetails. This is going to be the query for our dataset. Download the T-SQL codes included in this article if you haven't done so already. We're going to use the T-SQL file named uspSalesDetails_Ch01.sql for this article. We will use the same stored procedure for this whole article, unless otherwise indicated. Right-click on the Shared Datasets folder in Solution Explorer, just like we did when we created the data source. That displays the Shared Datasets Properties dialog. Let's name our dataset ds_SalesDetailReport. We use the query type stored procedure, and select or type uspSalesDetails on the Select or enter stored procedure name drop-down combo box. Click on OK when you're done: Before we work on the report itself, let's examine our dataset. In the Solution Explorer window, double-click on the dataset ds_SalesDetailReport.rsd, which displays the Shared Dataset Properties dialog box. Notice that the fields returned by our stored procedure have been automatically detected by the report designer. You can rename the field as shown: Ad-hoc Query (Text Query Type) versus Stored Procedure: as a good practice, always use a stored procedure where a query is used. The primary reason for this is that a stored procedure is compiled into a single execution plan. Using stored procedures will also allow you to modify certain elements of your reports without modifying the actual report. Creating the report file Now, we're almost ready to build our first report. We will create our report by building it from scratch by performing the following steps: Going back to the Solution Explorer window, right-click on the Reports folder. Please take note that selecting the Add New Report option will initialize Report Wizard. Use the wizard to build simple tabular or matrix reports. Go ahead if you want to try the wizard but for the purpose of our demonstration, we'll skip the wizard. Select Add, instead of Add New Report, then select New Item: Selecting New Item displays the Add New Item dialog box as shown in the following screenshot. Choose the Report template (default report template) in the template window. Name the report SalesDetailsReport.rdl. 
Click on the Add button to add the report to our project: Clicking on the Add button displays the empty report in the report designer. It looks similar to the following screenshot: Creating a parameterized report You may have noticed that the stored procedure we created for the shared dataset is parameterized. It has the following parameters: It's a good practice to test all the queries on the database just to make sure we get the datasets that we need. Doing so will eliminate a lot of data quality issues during report execution. This is also the best time to validate all our data. We want our report consumers to have the correct data that is needed for making critical decisions. Let's execute the stored procedure in SQL Server Management Studio (SSMS) and take a look at the execution output. We want to make sure that we're getting the results that we want to have on the report. Now, we add a dataset to our report based on the shared dataset that we had previously created: Right-click on the Datasets folder in the Report Data window. If it's not open, you can open it by navigating to Menu | View | Report Data, or press Ctrl + Alt + D: Selecting Add Dataset displays the Dataset Properties. Let's name our report dataset tblSalesReport. We will use this dataset as the underlying data for the table element that we will create to hold our report data. Indicate that we want to use a shared dataset. A list of the project shared datasets is displayed. We only have one at this point, which is the ds_SalesDetailsReport. Let's select that one, then click on OK. Going back to the Report Data window, you may notice that we now have more objects under the Parameters and Datasets folders. Switch to the Toolbox window. If you don't see it, then go to Menu | View | Toolbox, or press Ctrl + Alt + X. Double-click or drag a table to the empty surface of the designer. Let's add more columns to the table to accommodate all eight dataset fields. Click on the table, then right-click on the bar on the last column and select Insert Column | Right. To add data to the report, let's drag each element from the dataset to their own cell at the table data region. There are three data regions in SSRS: table, matrix, and list. In SSRS 2012, a fourth data region has been added but you can't see that listed anywhere. It's called tablix. Tablix is not shown as an option because it is built into those three data regions. What we're doing in the preceding screenshot is essentially dragging data into the underlying tablix data region. But how can I add my parameters into the report? you may ask. Well, let's switch to the Preview tab. We should now see our parameters already built into the report because we specified them in our stored procedure. Our report should look similar to the following screenshot:
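The actual dbo.uspSalesDetails procedure ships with the article's downloadable T-SQL script, and its parameter list appears only in a screenshot above, so the following is just a hedged sketch of what such a parameterized procedure could look like against AdventureWorks2012. The @StartDate and @EndDate parameters are assumptions for illustration, not necessarily the ones the author used.

CREATE PROCEDURE dbo.uspSalesDetails
    @StartDate DATETIME,   -- assumed parameter: lower bound of the order date
    @EndDate   DATETIME    -- assumed parameter: upper bound of the order date
AS
BEGIN
    SET NOCOUNT ON;

    -- Return the eight columns Linda asked for
    SELECT  soh.OrderDate,
            soh.SalesOrderID,
            pc.Name      AS Category,
            psc.Name     AS Subcategory,
            p.Name       AS ProductName,
            sod.UnitPrice,
            sod.OrderQty AS Quantity,
            sod.LineTotal
    FROM    Sales.SalesOrderHeader AS soh
            JOIN Sales.SalesOrderDetail AS sod
                ON sod.SalesOrderID = soh.SalesOrderID
            JOIN Production.Product AS p
                ON p.ProductID = sod.ProductID
            JOIN Production.ProductSubcategory AS psc
                ON psc.ProductSubcategoryID = p.ProductSubcategoryID
            JOIN Production.ProductCategory AS pc
                ON pc.ProductCategoryID = psc.ProductCategoryID
    WHERE   soh.OrderDate BETWEEN @StartDate AND @EndDate;
END

Executing a procedure like this in SSMS first, with a date range that falls inside the sample data, is a quick way to confirm that the eight columns come back as expected before wiring it into the shared dataset.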

Packt
29 Oct 2013
7 min read

Linux Desktop Environments

(For more resources related to this topic, see here.) A computer desktop is normally composed of windows, icons, directories/folders, a toolbar, and some artwork. A window manager handles what the user sees and the tasks that are performed. A desktop is also sometimes referred to as a graphical user interface (GUI). There are many different desktops available for Linux systems. Here is an overview of some of the more common ones. GNOME 2 GNOME 2 is a desktop environment and GUI that is developed mainly by Red Hat, Inc. It provides a very powerful and conventional desktop interface. There is a launcher menu for quicker access to applications, and also taskbars (called panels). Note that in most cases these can be located on the screen where the user desires. The screenshot of GNOME 2 running on Fedora 14 is as follows: This shows the desktop, a command window, and the Computer folder. The top and bottom "rows" are the panels. From the top, starting on the left, are the Applications, Places, and System menus. I then have a screensaver, the Firefox browser, a terminal, Evolution, and a Notepad. In the middle is the lock-screen app, and on the far right is a notification about updates, the volume control, Wi-Fi strength, battery level, the date/time, and the current user. Note that I have customized several of these, for example, the clock. Getting ready If you have a computer running the GNOME 2 desktop, you may follow along in this section. A good way to do this is by running a Live Image, available from many different Linux distributions. The screenshot showing the Add to Panel window is as follows: How to do it... Let's work with this desktop a bit: Bring this dialog up by right-clicking on an empty location on the task bar. Let's add something cool. Scroll down until you see Weather Report, click on it and then click on the Add button at the bottom. On the panel you should now see something like 0 °F. Right-click on it. This will bring up a dialog, select Preferences. You are now on the General tab. Feel free to change anything here you want, then select the Location tab, and put in your information. When done, close the dialog. On my system the correct information was displayed instantly. Now let's add something else that is even more cool. Open the Add to Panel dialog again and this time add Workspace Switcher. The default number of workspaces is two, I would suggest adding two more. When done, close the dialog. You will now see four little boxes on the bottom right of your screen. Clicking on one takes you to that workspace. This is a very handy feature of GNOME 2. There's more... I find GNOME 2 very intuitive and easy to use. It is powerful and can be customized extensively. It does have a few drawbacks, however. It tends to be somewhat "heavy" and may not perform well on less powerful machines. It also does not always report errors properly. For example, using Firefox open a local file that does not exist on your system (that is, file:///tmp/LinuxBook.doc). A File Not Found dialog should appear. Now try opening another local file that does exist, but which you do not have permissions for. It does not report an error, and in fact doesn't seem to do anything. Remember this if it happens to you. KDE desktop The KDE desktop was designed for desktop PCs and powerful laptops. It allows for extensive customization and is available on many different platforms. The following is a description of some of its features. 
Getting ready If you have a Linux machine running the KDE desktop you can follow along. These screenshots are from KDE running on a Live Media image of Fedora 18. The desktop icon on the far right allows the user to access Tool Box: You can add panels, widgets, activities, shortcuts, lock the screen, and add a lot more using this dialog. The default panel on the bottom begins with a Fedora icon. This icon is called a Kickoff Application Launcher and allows the user to access certain items quickly. These include Favorites, Applications, a Computer folder, a Recently Used folder, and a Leave button. If you click on the next icon it will bring up the Activity Manager. Here you can create the activities and monitor them. The next icon allows you to select which desktop is currently in the foreground, and the next items are the windows that are currently open. Over to the far right is the Clipboard. Here is a screenshot of the clipboard menu: Next is the volume control, device notifier, and networking status. Here is a screenshot of Interfaces and Connections dialog: Lastly, there is a button to show the hidden icons and the time. How to do it... Let's add a few things to this desktop: We should add a console. Right-click on an empty space on the desktop. A dialog will come up with several options; select Konsole. You should now have a terminal. Close that dialog by clicking on some empty space. Now let's add some more desktops. Right-click on the third icon on the bottom left of the screen. A dialog will appear, click on Add Virtual Desktop. I personally like four of these. Now let's add something to the panel. Right-click on some empty space on the panel and hover the mouse over Panel Options; click on AddWidgets. You will be presented with a few widgets. Note that the list can be scrolled to see a whole lot more. For example, scroll over to Web Browser and double-click on it. The web browser icon will appear on the panel near the time. There's more... You can obviously do quite a bit of customization using the KDE desktop. I would suggest trying out all of the various options, to see which ones you like the best. KDE is actually a large community of open source developers, of which KDE Plasma desktop is a part. This desktop is probably the heaviest of the ones reviewed, but also one of the most powerful. I believe this is a good choice for people who need a very elaborate desktop environment. xfce xfce is another desktop environment for Linux and UNIX systems. It tends to run very crisply and not use as many system resources. It is very intuitive and user-friendly. Getting ready The following is a screenshot of xfce running on the Linux machine I am using to write this article: If you have a machine running the xfce desktop, you can perform these actions. I recommend a Live Media image from the Fedora web page. While somewhat similar to GNOME 2, the layout is somewhat different. Starting with the panel on the top (panel 1) is the Applications Menu, following by a LogOut dialog. The currently open windows are next. Clicking on one of these will either bring it up or minimize it depending on its current state. The next item is the Workspaces of which I have four, then the Internet status. To complete the list is the volume and mixer apps and the date and time. The screen contents are mostly self-explanatory; I have three terminal windows open and the File Manager folder. The smaller panel on the bottom of the screen is called panel 2. How to do it... 
Let's work with the panels a bit: In order to change panel 2 we must unlock it first. Right-click on the top panel, and go to Panel | PanelPreferences. Use the arrows to change to panel 2. See the screenshot below: You can see it is locked. Click on Lock panel to unlock it and then close this dialog. Now go to panel 2 (on the bottom) and right-click on one of the sides. Click on AddNewItems.... Add the applications you desire. There's more... This is by no means an exhaustive list of what xfce can do. The features are modular and can be added as needed. See http://www.xfce.org for more information.
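If you prefer working from a terminal, xfce's settings are also exposed through the xfconf system. This is a small, hedged aside rather than part of the recipe above: the xfconf-query tool must be installed, and channel and property names can differ between xfce versions.

# List the configuration channels xfconf knows about
xfconf-query -l

# List every property in the panel channel, together with its current value
xfconf-query -c xfce4-panel -l -v

This is handy when you want to inspect, script, or back up the panel layout you just built by hand.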

Packt
02 Apr 2013
4 min read

Customizing your IBM Lotus Notes 8.5.3 experience

(For more resources related to this topic, see here.) So you are using Lotus Notes 8.5.3 for e-mail, calendar, and collaboration, and you want to know how to go from just using Lotus Notes, to letting Lotus Notes work for you. Lotus Notes is highly customizable, adapting to the way you want to work. We will show you how to make Lotus Notes look and behave in the manner you choose. Getting ready Your IBM Lotus Notes 8.5.3 client should be installed and connected to your home mail server to receive mail. How to do it... Let's start with the Home page. The Home page is the first page you will see when setting up a new client. You can also access it in many different ways if your client is already set up. One way to get to it is from the Open list as shown in the following screenshot: Here is what the default Home page looks like after you open it: How it works... To customize the Home page, click on Click here for Home Page options. Then click on Create a new Home Page. This will bring up the Home Page wizard. Give your new Home page a name, and then you can choose how you want your important data to be displayed via your new Home page. As you can see, there are many ways to customize your Home page to display exactly what you need on your screen. There's more... Now we will look at more ways to customize your IBM Lotus Notes 8.5.3 experience. Open list By clicking on the Open button in the upper left corner of the Notes client, you can access the Open list. The Open list is a convenient place to launch your mail, calendar, contacts, to-dos, websites, and applications. You can also access your workspace and Home page from the Open list. Applications added to your workspace are dynamically added to the Open list. The contextual search feature will help you efficiently find exactly what you are looking for. One option when using the Open list is to dock it. When the Open list is docked, it will appear permanently on the left-hand side of the Lotus Notes client. To dock it, right-click on the Open list and select Dock the Open list. To undock it, right-click in an empty area of the docked list and uncheck the Dock the Open list. Windows and Themes You can choose how you want your windows in Lotus Notes 8.5.3 to look. In the Windows and Themes preference panel, you can control how you want Notes to behave. First, decide if you want your open documents to appear as tabs or windows. Then decide if you want the tabs that you had left open when you exit the client to be retained when you open it again. The option to Group documents from each application on a tab will group any documents or views opened from one application. You can see these options in the following screenshot: New mail notification By checking the preference setting called Sending and Receiving under Preferences | Mail, you can display a pop-up alert when a new mail arrives. The pop up displays the sender and the subject of the message. You can then open the e-mail from the pop up. You can also drag the pop up to pin it open. To turn this off, uncheck the preference setting. Workspace The workspace has been around for a long time, and this is where icons representing Domino applications are found. You can choose to stack icons or have them un-stacked. Stacking the icons places replicas of the same applications on top of each other. The icon on the top of the stack dictates which replica is opened. For example, for a mail if the server is on top, then the local replica will be ignored causing potential slowness. 
If you would like to make your workspace look more three-dimensional and add texture to the background select this setting in the Basic Notes Client preference. You can also add new pages, change the color of pages and name them, by right clicking on the workspace. Summary This article has provided a brief gist about Lotus Notes 8.5.3. It also explains how you can customize your Lotus Notes client, and make it look and behave in the manner you choose. Resources for Article : Further resources on this subject: Feeds in IBM Lotus Notes 8.5 [Article] Lotus Notes 8 — Productivity Tools [Article] IBM Lotus Quickr Services Overview [Article]

Packt
01 Oct 2009
9 min read

Deploying Your Applications on WebSphere Application Server 7.0

Data access applications We have just deployed an application that did not require database connectivity. Often, applications in the business world require access to a RDBMS to fulfill their business objective. If an application requires the ability to retrieve from, or store information in, a database, then you will need to create a data source which will allow the application to connect and use the database (DB). Looking at the figure below, we can see the logical flow of the sample data access application that we are going to install. The basic idea of the application is to display a list of tables that exist in a database schema. Since the application requires a database connection, we need to configure WebSphere before we can deploy the application. We will now cover the preparation work before we install our application. Data sources Each data source is associated with a JDBC provider that is configured for access to a specific database type. The data source provides connectivity which allows an application to communicate with the database. Preparing our sample database Before you create a data source, you need to ensure that the appropriate client database driver software is installed. For our demonstration, we are going to use Oracle Express Edition (Oracle XE) for Linux which is the free version of Oracle. We are using version Oracle XE 10g for Linux and the download size is about 210MB, so it will take time to download. We installed Oracle XE using the default install option for installing an RPM. The administration process is fully documented on Oracle's web site and in the documentation which is installed with the product. We could have chosen to use many open source/free databases, however their explanations and configurations would detract from the point. We have chosen to use Oracle's free RDBMS called Oracle XE, and JDBC with Oracle XE is quite easy to configure. By following these steps, you will be able to apply the same logic to any of the major vendors' full RDMS products, that is, DB/2, Oracle, SQL Server, and so on. Another reason why we chose Oracle XE is that it is an enterprise-ready DB and is administered by a simple web interface and comes with sample databases. We need to test that we can connect to our database without WebSphere so that we can evaluate the DB design. To do this, we will need to install Oracle XE. We will now cover the following steps one by one. Download Oracle XE from Oracle's web site using the following URL:http://www.oracle.com/technology/products/database/xe/index.html. Transfer the oracle-xe-10.2.0.1-1.0.i386.rpm file to an appropriate directory on your Linux server using WinSCP (Secure Copy) or your chosen Secure FTP client. Since the XE installer uses X Windows, ensure that you have Xming running. Then install Oracle XE by using the rpm command, as shown here: rpm -ivh oracle-xe-10.2.0.1-1.0.i386.rpm Follow the installer steps as prompted: HTTP port = 8080 Listener port = 1521 SYS & SYSTEM / password = oracle Autostart = y Oracle XE requires 1024 minimum swap space and requires 1.5 GB of disk space to install. Ensure that Oracle XE is running. You can now access the web interface via a browser from the local machine; by default, XE will only accept a connection locally. As shown in the following figure, we have a screenshot of using Firefox to connect to OracleXE using the URL http://localhost:8080/apex. The reason we use Firefox on Linux is that this is the most commonly installed default browser on the newer Linux distributions. 
When the administration application loads, you will be presented with a login screen as seen in the following screenshot. You can log in using the username SYSTEM and password oracle as set by your installation process. Oracle XE comes with a pre-created user called HR which is granted ownership to the HR Schema. However, the account is locked by default for security reasons and so we need to unlock the HR user account. To unlock an account, we need to navigate to the Database Users | Manage Users screen, as demonstrated in the following screenshot: You will notice that the icon for the HR user is locked. You will see a small padlock on the HR icon, as seen in this figure: Click on the HR user icon and unlock the account as shown in the following figure. You need to reset the password and change Account Status to Unlocked, and then click Alter User to set the new password. The following figure shows that the HR account is unlocked: The HR account is now unlocked as seen above. Log out and log back into the administration interface using the HR user to ensure that the account is now unlocked. Another good test to perform to ensure connectivity to Oracle is to use an Oracle admin tool called sqlplus. Sqlplus is a command line tool which database administrators can use to administer Oracle. We are going to use sqlplus to do a simple query to list the tables in the HR schema. To run sqlplus, we need to set up an environment variable called $ORACLE_HOME which is required to run sqlplus. To set $ORACLE_HOME, type the following command in a Linux shell: export ORACLE_HOME=/usr/lib/oracle/xe/app/oracle/product/10.2.0/server If you have installed Oracle XE in a non-default location, then you may have to use a different path. To run sqlplus, type the following command: <oracle_home>/bin/sqlplus The result will be a login screen as shown below: You will be prompted for a username. Type the following command: hr@xe<enter> For the password, type the following command: hr<enter> When you have successfully logged in, you can type the following commands in the SQL prompt: SELECT TABLE_NAME FROM user_tables<enter> /<enter> The / command means execute the command buffer. The result will be a list of tables in the HR schema, as shown in the following screenshot: We have now successfully verified that Oracle works from a command line, and thus it is very likely that WebSphere will also be able to communicate with Oracle. Next, we will cover how to configure WebSphere to communicate with Oracle. JDBC providers Deployed applications use JDBC providers to communicate with RDBMS. The JDBC provider object provides the actual JDBC driver implementation class for access to a specific database type, that is, Oracle, SQL Server, DB/2, and so on. You associate a data source with a JDBC provider. A data source provides the connection to the RDBMS. The JDBC provider and the data source provide connectivity to a database. Creating a JDBC provider Before creating a JDBC provider, you will need to understand the application's resource requirements, that is, the data sources that the application references. You should know the answer to the following questions: Does your application require a data source? Not all applications use a database. The security credentials required to connect to the database. Often databases are secured and you will need a username and password to access a secure database. Are there any web components (Servlets, JSP, and so on) or EJBs which need to access a database. 
Answering these questions will determine the amount of configuration required for your database connectivity.

To create a JDBC provider, log in to the administration console and click on the JDBC Provider link in the JDBC category of the Resources section, located in the left-hand panel of the administration console as shown below. We need to choose an appropriate scope from the Scope drop-down pick list. Scope determines how the provider will be seen by applications. We will talk more about scope in the JNDI section. For now, please choose the Cell scope as seen below.

Click New and the new JDBC provider wizard is displayed. Select the Database type as Oracle, Provider type as Oracle JDBC Driver, Implementation type as Connection pool data source, and a Name for the new JDBC provider. We are going to enter MyJDBCDriver as the provider name, as seen in the previous screenshot. We also have to choose an Implementation type. There are two implementation types for Oracle JDBC drivers, explained below:

Connection pool data source: Use a connection pool data source if your application does not require connections that support two-phase commit transactions.
XA data source: Use an XA data source if your application requires two-phase commit transactions.

Click Next to go to the database classpath screen. As shown in the following screenshot, enter the database class path information for the JDBC provider. As long as you have installed Oracle XE using the default paths, you will be able to use the following path in the Directory location field: /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/jdbc/lib. Click Next to proceed to the next step, where you will be presented with a summary as shown in the following screenshot. Review the JDBC provider information that you have entered and click Finish. You will now be prompted to save the JDBC provider configuration. Click Save, as shown in the following screenshot. Saving will persist the resource configuration to disk in resources.xml.

Before we finish, we need to update the JDBC provider with the correct JAR file, as the default one is not the one we wish to use; it assumes a later Oracle driver that we are not using. To change the driver, we must first select the driver that we created earlier, called MyJDBCDriver, as shown in the following screenshot. In the screen presented, we are going to change the Classpath field from:

${ORACLE_JDBC_DRIVER_PATH}/ojdbc6.jar

to

${ORACLE_JDBC_DRIVER_PATH}/ojdbc14.jar

Since WAS 7.0 is the latest version of WebSphere, the wizard already knows about the newer Oracle 11g JDBC driver. We are connecting to Oracle XE 10g, however, and the driver for this is ojdbc14.jar. The classpath field can contain a list of paths or JAR file names which together form the location of the resource provider classes. Class path entries are separated by using the ENTER key and must not contain path separator characters (such as ; or :). Class paths can contain variable (symbolic) names that can be substituted using a variable map. Check your driver installation notes for the specific JAR file names that are required. Click Apply and save the configuration.
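If the data source you later define on top of this provider fails its test connection, it can help to rule out driver and listener problems with a tiny standalone check outside WebSphere. This is an optional, hedged sketch rather than part of the original walkthrough: it assumes the default Oracle XE listener port (1521), the XE SID, the hr/hr credentials set earlier, and that ojdbc14.jar is on the classpath when you compile and run it.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OracleXeConnectionTest {
    public static void main(String[] args) throws Exception {
        // Register the Oracle thin driver shipped in ojdbc14.jar
        Class.forName("oracle.jdbc.driver.OracleDriver");

        // Same connection details we verified earlier with sqlplus
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "hr", "hr");

        // Repeat the sqlplus check: list the tables in the HR schema
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT table_name FROM user_tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}

If this prints the same table list that sqlplus returned, both the database and the driver JAR are in good shape; the deployed application itself will not use DriverManager directly, but will instead obtain connections by looking up the data source through JNDI, which is where the provider scope chosen above comes into play.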
Packt
16 Sep 2015
16 min read

Virtualization

This article by Skanda Bhargav, the author of Troubleshooting Ubuntu Server, deals with virtualization techniques—why virtualization is important and how administrators can install and serve users with services via virtualization. We will learn about KVM, Xen, and Qemu. So sit back and let's take a spin into the virtual world of Ubuntu. (For more resources related to this topic, see here.) What is virtualization? Virtualization is a technique by which you can convert a set of files into a live running machine with an OS. It is easy to set up one machine and much easier to clone and replicate the same machine across hardware. Also, each of the clones can be customized based on requirements. We will look at setting up a virtual machine using Kernel-based Virtual Machine, Xen, and Qemu in the sections that follow. Today, people are using the power of virtualization in different situations and environments. Developers use virtualization in order to have an independent environment in which to safely test and develop applications without affecting other working environments. Administrators are using virtualization to separate services and also commission or decommission services as and when required or requested. By default, Ubuntu supports the Kernel-based Virtual Machine (KVM), which has built-in extensions for AMD and Intel-based processors. Xen and Qemu are the options suggested where you have hardware that does not have extensions for virtualization. libvirt The libvirt library is an open source library that is helpful for interfacing with different virtualization technologies. One small task before starting with libvirt is to check your hardware support extensions for KVM. The command to do so is as follows: kvm-ok You will see a message stating whether or not your CPU supports hardware virtualization. An additional task would be to verify the BIOS settings for virtualization and activate it. Installation Use the following command to install the package for libvirt: sudo apt-get install kvm libvirt-bin Next, you will need to add the user to the group libvirt. This will ensure that user gets additional options for networking. The command is as follows: sudo adduser $USER libvirtd We are now ready to install a guest OS. Its installation is very similar to that of installing a normal OS on the hardware. If your virtual machine needs a graphical user interface (GUI), you can make use of an application virt-viewer and connect using VNC to the virtual machine's console. We will be discussing the virt-viewer and its uses in the later sections of this article. virt-install virt-install is a part of the python-virtinst package. 
The command to install this package is as follows:

sudo apt-get install python-virtinst

One way of using virt-install is as follows:

sudo virt-install -n new_my_vm -r 256 -f new_my_vm.img -s 4 -c jeos.iso --accelerate --connect=qemu:///system --vnc --noautoconsole -v

Let's understand the preceding command part by part:

-n: This specifies the name of the virtual machine that will be created
-r: This specifies the RAM amount in MBs
-f: This is the path for the virtual disk
-s: This specifies the size of the virtual disk
-c: This is the file to be used as a virtual CD; it can also be an .iso file
--accelerate: This makes use of kernel acceleration technologies
--vnc: This exports the guest console via VNC
--noautoconsole: This disables autoconnect for the virtual machine console
-v: This creates a fully virtualized guest

Once virt-install is launched, you may connect to the console with the virt-viewer utility from a remote connection, or locally using the GUI. Use a backslash (\) at the end of a line to wrap long commands onto the next line.

virt-clone

One of the applications used to clone one virtual machine to another is virt-clone. Cloning is the process of creating an exact replica of the virtual machine that you currently have. Cloning is helpful when you need a lot of virtual machines with the same configuration. Here is an example of cloning a virtual machine:

sudo virt-clone -o my_vm -n new_vm_clone -f /path/to/new_vm_clone.img --connect=qemu:///system

Let's understand the preceding command part by part:

-o: This is the original virtual machine that you want to clone
-n: This is the new virtual machine name
-f: This is the new virtual machine's file path
--connect: This specifies the hypervisor to be used

Managing the virtual machine

Let's see how to manage the virtual machine we installed using virt.

virsh

Numerous utilities are available for managing virtual machines and libvirt; virsh is one such utility that can be used via the command line. Here are a few examples:

The following command lists the running virtual machines:

virsh -c qemu:///system list

The following command starts a virtual machine:

virsh -c qemu:///system start my_new_vm

The following command starts a virtual machine at boot:

virsh -c qemu:///system autostart my_new_vm

The following command restarts a virtual machine:

virsh -c qemu:///system reboot my_new_vm

You can save the state of a virtual machine to a file so that it can be restored later. Note that once you save the virtual machine, it will no longer be running. The following command saves the state of the virtual machine:

virsh -c qemu:///system save my_new_vm my_new_vm-290615.state

The following command restores a virtual machine from its saved state:

virsh -c qemu:///system restore my_new_vm-290615.state

The following command shuts down a virtual machine:

virsh -c qemu:///system shutdown my_new_vm

The following command mounts a CD-ROM in the virtual machine:

virsh -c qemu:///system attach-disk my_new_vm /dev/cdrom /media/cdrom

The virtual machine manager

A GUI utility for managing virtual machines is virt-manager. You can manage both local and remote virtual machines. The command to install the package is as follows:

sudo apt-get install virt-manager

virt-manager requires a GUI environment. Hence, it is advisable to install it on a remote machine other than the production cluster, as the production cluster should be reserved for the main tasks.
The command to connect the virt-manager to a local server running libvirt is as follows: virt-manager -c qemu:///system If you want to connect the virt-manager from a different machine, then first you need to have SSH connectivity. This is required as libvirt will ask for a password on the machine. Once you have set up passwordless authentication, use the following command to connect manager to server: virt-manager -c qemu+ssh://virtnode1.ubuntuserver.com/system Here, the virtualization server is identified with the hostname ubuntuserver.com. The virtual machine viewer A utility for connecting to your virtual machine's console is virt-viewer. This requires a GUI to work with the virtual machine. Use the following command to install virt-viewer: sudo apt-get install virt-viewer Now, connect to your virtual machine console from your workstation using the following command: virt-viewer -c qemu:///system my_new_vm You may also connect to a remote host using SSH passwordless authentication by using the following command: virt-viewer -c qemu+ssh://virtnode4.ubuntuserver.com/system my_new_vm JeOS JeOS, short for Just Enough Operation System, is pronounced as "Juice" and is an operating system in the Ubuntu flavor. It is specially built for running virtual applications. JeOS is no longer available as a downloadable ISO CD-ROM. However, you can pick up any of the following approaches: Get a server ISO of the Ubuntu OS. While installing, hit F4 on your keyboard. You will see a list of items and select the one that reads Minimal installation. This will install the JeOS variant. Build your own copy with vmbuilder from Ubuntu. The kernel of JeOS is specifically tuned to run in virtual environments. It is stripped off of the unwanted packages and has only the base ones. JeOS takes advantage of the technological advancement in VMware products. A powerful combination of limited size with performance optimization is what makes JeOS a preferred OS over a full server OS in a large virtual installation. Also, with this OS being so light, the updates and security patches will be small and only limited to this variant. So, the users who are running their virtual applications on the JeOS will have less maintenance to worry about compared to a full server OS installation. vmbuilder The second way of getting the JeOS is by building your own copy of Ubuntu; you need not download any ISO from the Internet. The beauty of vmbuilder is that it will get the packages and tools based on your requirements. Then, build a virtual machine with these and the whole process is quick and easy. Essentially, vmbuilder is a script that will automate the process of creating a virtual machine, which can be easily deployed. Currently, the virtual machines built with vmbuilder are supported on KVM and Xen hypervisors. Using command-line arguments, you can specify what additional packages you require, remove the ones that you feel aren't necessary for your needs, select the Ubuntu version, and do much more. Some developers and admins contributed to the vmbuilder and changed the design specifics, but kept the commands same. Some of the goals were as follows: Reusability by other distributions Plugin feature added for interactions, so people can add logic for other environments A web interface along with CLI for easy access and maintenance Setup Firstly, we will need to set up libvirt and KVM before we use vmbuilder. libvirt was covered in the previous section. Let's now look at setting up KVM on your server. 
We will install some additional packages along with the KVM package, one of which enables the X server on the machine. The command that you will need to run on your Ubuntu server is as follows:

sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils

The output of this command will be as follows:

Let's look at what each of the packages means:

libvirt-bin: This is used by libvirtd for administration of KVM and Qemu
qemu-kvm: This runs in the background
ubuntu-vm-builder: This is a tool for building virtual machines from the command line
bridge-utils: This enables networking for the various virtual machines

Adding users to groups

You will have to add the user to the libvirtd group; this will enable them to run virtual machines. The command to add the current user is as follows:

sudo adduser `id -un` libvirtd

The output is as follows:

Installing vmbuilder

Download the latest vmbuilder, called python-vm-builder. You may also use the older ubuntu-vm-builder, but there are slight differences in the syntax. The command to install python-vm-builder is as follows:

sudo apt-get install python-vm-builder

The output will be as follows:

Defining the virtual machine

While defining the virtual machine that you want to build, you need to take care of the following two important points:

Do not assume that the end user will know the technicalities of extending the disk size of the virtual machine if the need arises. Either provide a large virtual disk so that the application can grow, or document the process for extending it. However, it would be better to have your data stored on an external storage device.
Allocating RAM is fairly simple, but remember that you should allocate your virtual machine an amount of RAM that is safe for running your application.

To check the list of parameters that vmbuilder provides, use the following command:

vmbuilder --help

The two main parameters are the virtualization technology, also known as the hypervisor, and the targeted distribution. The distribution we are using is Ubuntu 14.04, which is also known by its codename, trusty. The command to check the release version is as follows:

lsb_release -a

The output is as follows:

Let's build a virtual machine on the same version of Ubuntu. Here's an example of building a virtual machine with vmbuilder:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system

Now, we will discuss what the parameters mean:

--suite: This specifies which Ubuntu release we want the virtual machine built on
--flavour: This specifies which virtual kernel to use to build the JeOS image
--arch: This specifies the processor architecture (64-bit or 32-bit)
-o: This overwrites the previous version of the virtual machine image
--libvirt: This adds the virtual machine to the list of available virtual machines

Now that we have created a virtual machine, let's look at the next steps.
We will define an IP address with the following parameters:

--ip (address): This is the IP address in dotted form
--mask (value): This is the IP mask in dotted form (default is 255.255.255.0)
--net (value): This is the IP net address (default is X.X.X.0)
--bcast (value): This is the IP broadcast (default is X.X.X.255)
--gw (address): This is the gateway address (default is X.X.X.1)
--dns (address): This is the name server address (default is X.X.X.1)

Our command looks like this now:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10

You may have noticed that we have assigned only the IP address; all the others will take their default values.

Enabling the bridge

We will have to enable the bridge for our virtual machines, as various remote hosts will have to access the applications. We will configure libvirt and modify the vmbuilder template to do so. First, create the template hierarchy and copy the default template into this folder:

mkdir -p VMBuilder/plugins/libvirt/templates
cp /etc/vmbuilder/libvirt/* VMBuilder/plugins/libvirt/templates/

Use your favorite editor and modify the following lines in the VMBuilder/plugins/libvirt/templates/libvirtxml.tmpl file:

<interface type='network'>
<source network='default'/>
</interface>

Replace these lines with the following lines:

<interface type='bridge'>
<source bridge='br0'/>
</interface>

Partitions

You have to allocate partitions to applications for their data storage and working space. It is normal to have a separate storage space for each application in /var. The option provided by vmbuilder for this is --part:

--part PATH

vmbuilder will read the file given by the PATH parameter and consider each line as a separate partition. Each line has two entries, mountpoint and size, where size is defined in MBs and is the maximum limit defined for that mountpoint. For this particular exercise, we will create a new file named vmbuilder.partition and enter the following lines to create the partitions:

root 6000
swap 4000
---
/var 16000

Also, please note that different disks are separated by the --- delimiter. Now, the command should look like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition

Use a backslash (\) at the end of a line to wrap long commands onto the next line.

Setting the user and password

We have to define a user and a password so that the user can log in to the virtual machine after startup. For now, let's use a generic user identified as user and the password password. We can ask the user to change the password after the first login. The following parameters are used to set the username and password:

--user (username): This sets the username (default is ubuntu)
--name (fullname): This sets a full name for the user (default is ubuntu)
--pass (password): This sets the password for the user (default is ubuntu)

So, now our command will be as follows:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password

Final steps in the installation – first boot

There are certain things that will need to be done at the first boot of a machine. We will install openssh-server at first boot. This ensures that each virtual machine gets its own unique key. If we had done this earlier in the setup phase, all the virtual machines would have been given the same key, which might have posed a security issue.
Let's create a script called first_boot.sh and run it at the first boot of every new virtual machine:

# This script will run the first time the virtual machine boots
# It is run as root
apt-get update
apt-get install -qqy --force-yes openssh-server

Then, add the following option to the command line:

--firstboot first_boot.sh

Final steps in the installation – first login

Remember that we specified a default password for the virtual machine. This means that all the machines installed from this image will have the same password. We will prompt the user to change the password at first login. For this, we will use a shell script named first_login.sh. Add the following lines to the file:

# This script is run the first time a user logs in.
echo "Almost at the end of setting up your machine"
echo "As a security precaution, please change your password"
passwd

Then, add the parameter to your command line:

--firstlogin first_login.sh

Auto updates

You can make your virtual machine update itself at regular intervals. To enable this feature, add a package named unattended-upgrades to the command line:

--addpkg unattended-upgrades

ACPI handling

ACPI handling enables your virtual machine to respond to shutdown and restart events received from a remote machine. We will install the acpid package for this:

--addpkg acpid

The complete command

So, the final command with the parameters that we discussed previously would look like this:

sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition --user user --name user --pass password --firstboot first_boot.sh --firstlogin first_login.sh --addpkg unattended-upgrades --addpkg acpid

Summary

In this article, we discussed various virtualization techniques, as well as the tools and packages that help in creating and running a virtual machine. You also learned about the ways we can view, manage, connect to, and make use of the applications running on the virtual machine. Then, we saw the lightweight version of Ubuntu that is fine-tuned to run virtualization and applications on a virtual platform. In the later stages of this article, we covered how to build a virtual machine from the command line, how to add packages, how to set up user profiles, and the steps for first boot and first login.

Resources for Article:

Further resources on this subject:
Introduction to OpenVPN [article]
Speeding up Gradle builds for Android [article]
Installing Red Hat CloudForms on Red Hat OpenStack [article]
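As a small appendix to this walkthrough, the separate pieces above can be stitched together into one helper script so that the whole build is repeatable. This is a hedged convenience sketch that simply replays the article's own commands and file contents; adjust the script name, IP address, user details, and packages to your environment before using it.

#!/bin/bash
# build-jeos-vm.sh - repeatable JeOS build using the options assembled in this article
set -e

# Partition layout from the Partitions section (sizes in MB, '---' starts a new disk)
cat > vmbuilder.partition <<'EOF'
root 6000
swap 4000
---
/var 16000
EOF

# First-boot script: install openssh-server so every machine gets a unique host key
cat > first_boot.sh <<'EOF'
apt-get update
apt-get install -qqy --force-yes openssh-server
EOF

# First-login script: force a password change away from the generic default
cat > first_login.sh <<'EOF'
echo "Almost at the end of setting up your machine"
echo "As a security precaution, please change your password"
passwd
EOF

# The complete vmbuilder command from the article
sudo vmbuilder kvm ubuntu --suite trusty --flavour virtual --arch amd64 -o \
    --libvirt qemu:///system --ip 192.168.0.10 --part vmbuilder.partition \
    --user user --name user --pass password \
    --firstboot first_boot.sh --firstlogin first_login.sh \
    --addpkg unattended-upgrades --addpkg acpid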

Packt
24 Oct 2009
5 min read

Securing the Small Business Server 2008

To do this, we are essentially completing the tasks in the home screen of the Windows SBS Console, which should look like the following screenshot. Assumptions I'm assuming that you understand the concepts of firewalls and ports; otherwise, you will struggle to safely configure your network. I'm also aware that OneCare, for servers, only provides an introductory offer for anti-malware and another product will be required; however, it is easier to describe the installation of one product rather than trying to answer for all products, so I'm using OneCare as a template. You will, however, need an anti-malware product that is server aware, or need to exclude server product locations such as the exchange data stores and other locations. Network security configuration There are a few areas where we can improve the security of the network. They are around the firewall, reducing the traffic that arrives at the SBS 2008 server, and the security certificate that is used to secure and identify the server communications. Configuring the firewall ports You will need the following ports configured on your firewall to direct traffic to SBS 2008: If you were using SBS 2003, then you can close down ports 444 and 4125, which might have previously been open. Loading a third-party security certificate SBS 2008 creates a security certificate to secure its communications. Certificates are only valuable if everybody seeing them trusts the system that issues the certificate. All computers that are part of the SBS 2008 network trust the SBS 2008 server, so trust is achieved in this way. For those that are not part of the SBS 2008 network, a special certificate must be loaded onto those machines so they will trust SBS 2008, else they will provide warnings to users about the integrity of the communication. There are organizations called Certificate Authorities who have established trust in the marketplace and most IT systems trust the certificates they issue. If you wish to have a more publically trusted certificate, then you will need to purchase one of these. One area where third-party certificates are often needed is when using mobile devices, to enable the loading of the SBS 2008 certificate onto the phones. Without the certificate on the phone, synchronization of Outlook information to the phone cannot take place. Importing a certificate If you already have a certificate or have purchased one and have been sent a file containing the certificate including the private keys, then you should follow this process. There are two steps to follow: Importing the certificate into the Local Computer Certificate store Assigning the certificate using the SBS Console Importing the certificate Start Windows SBS Console (Advanced Mode) from the Start menu and click on the Network tab and then the connectivity button. As this is the advanced console, you will see extra tasks available on the righthand side. Click on the Manage certificates task—if this is not present, check you are running the Advanced Mode console: it will say so in the title bar. This will run a management console with the certificates for your computer made visible. Expand the Personal tree and right-click on Certificates and select Import from the All Tasks menu item. Click Next to pass through the welcome screen for the Certificate Import Wizard and then click on the Browse button to locate the certificate. Then, click on Next to continue. You will now be required to enter your Password to enable access to the key. 
I would put a check mark in the two remaining check boxes to Mark the key as exportable to enable you to export the certificate should you need to in the future and include the extended properties. Then, click on Next. You will be required to confirm the location, which should be Personal and again click on Next. If it is not set to Personal, click on the Browse button and change the Certification store to Personal. Now click on Finish to complete the process and you will see a message stating that The import was successful. Close the Certificates Management console. Assigning the certificate In the Windows SBS Console, click the task Add a trusted certificate to start the process. Click on Next to skip past the introduction. If you have assigned a certificate before, you will be told that A valid trusted certificate already exists and you have the choice of renewing your existing certificate or replacing it. Select I want to replace the existing certificate with a new one and click on Next. If you have not added a trusted certificate before, then you will not see this screen. On the Get the certificate page, select the option to use a certificate already installed on the server and click on Next. The certificate that you installed will show in the list with a Type of Trusted, while the certificates issued by SBS 2008 will show as Self-issued. Select your Trusted certificate and click on Next. Click on Next to start the process and then Finish to exit the wizard.
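If you need to stage the same certificate on several machines (for example, on client PCs or a spare server), the import half of this process can also be scripted. This is a hedged aside rather than part of the wizard-based procedure above: on most Windows Server 2008-era builds, certutil can import a PFX file into the local computer's Personal store from an elevated command prompt, along the lines of the following, where the file name and password are placeholders and the exact switches depend on your certutil version.

certutil -f -p "PfxPassword" -importpfx "C:\certs\sbs2008.pfx"

Run certutil -? first to check the available verbs and syntax on your build before relying on this approach.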


Instant Minecraft Designs – Building a Tudor-style house

Packt
04 Apr 2013
6 min read
Tudor-style house
In this recipe, we'll be building a Tudor-style house. We'll be employing some manual building methods, and we'll also introduce some WorldEdit CUI commands and VoxelSniper actions into our workflow.

Getting ready
Once you have installed the recommended mods, you will need to have an Arrow equipped on your action bar; this is the tool VoxelSniper uses to perform its functions. You will also need to equip a Wooden Axe, as this item becomes the WorldEdit tool and will be used for making selections. Don't try to use these tools to break blocks, especially if you have made a selection that you don't want to lose. Not only will they not break the block, they will also wreck your selection, or worse.

How to do it...
Let's get started with building our Tudor-style house by performing the following steps:
1. Find a nice area, or clear one, with roughly 40 x 40 squares of flat land.
2. Mark out a selection of 37 x 13 blocks by left-clicking with the Wooden Axe to set the first point and then right-clicking for the second point.
3. Hit your T key and type the //set 5:1 command. This will make all of the blocks in the selected area turn into Spruce Wood Planks. If you make a mistake, you can do //undo. The //undo command does not alter the selection itself, only the changes made to blocks.
4. Now create a 20 x 13 selection that will complete the L shape of the mansion's bottom floor. Remember to left-click and right-click with the Wooden Axe tool. Now type //set 5:1.
5. In the corner that will be at the end of the outside wall of the longest wing, place a stack of three Spruce Wood blocks on top of each other. Right beside this, place two stacked Wool blocks with one Spruce Wood block on top of them, as shown in the inset of the following screenshot:
6. With these six blocks selected, we will now stack them horizontally along the 37-block wall. The stack command works in the direction you face, so face directly down the length of the floor and type //stack 17. If you make a mistake, do //undo.
7. Go to the opposite end of the wall you just made and place a stack of three Spruce Wood blocks in the missing spot at the end. Then, just like before, put two blocks of White Wool on the side of the corner Spruce Wood pole with one Spruce Wood block on top.
8. Select these six blocks and, facing along the short end wall, type //stack 5.
9. Go to the end of this wall and complete it with the three Spruce Logs and two blocks of Wool with one Spruce block on top where the next wall will go.
10. Select these six blocks. Remember! Wooden Axe, left-click, right-click. Facing down the inside wall, type //stack 11.
11. Place another three Spruce Wood blocks upright in the corner and two Wool blocks with one Spruce block on top for the adjacent inner wall. Make a selection, face in the correct direction, and then type //stack 9.
12. Repeat this same process of placing the six blocks, selecting them, facing in the correct direction for the next wall, and typing //stack 5. Finally, type //stack 15 and your base should now be complete.
13. On the corner section, we're going to make some bay windows, so let's create the reinforcing structure for those. Inset by two blocks from the corner, place five of these reinforcement structures. They consist of one Spruce Wood upright and two upside-down Nether Brick steps, each aligned to the Spruce Wood uprights behind them.
14. Now we'll place the wall sections of the bay windows.
You should be able to create these by referring to the right-hand section of the following screenshot:
15. Now comes the use of the VoxelSniper GUI, so let's add some windows with it. Hit your V key to bring up the VoxelSniper GUI. We're going to "snipe" some windows into place.
16. The first section, Place, in the top left-hand side represents the block you wish to place. For this, we will select Glass.
17. The section directly below Place is the Replace panel. As the name suggests, this is the block you wish to replace. We wish to replace White Wool, so scroll through and locate the Wool block. On the right-hand side, under the Ink panel scroll box, select the White Wool block. Make sure the No-Physics checkbox is not selected.
18. In the right-hand panel, we will select the tool we wish to use. If it's not already selected, click on the Sized tab and choose Snipe. If you get lost, just follow the preceding screenshot.
19. Choose your Arrow tool and right-click on the White Wool blocks you wish to change to Glass. VoxelSniper works from a distance (hence the "Sniper" part of the name), so be careful when experimenting with this tool. If you make a mistake in VoxelSniper, use /u to undo. You can also do /u 5, /u 7, /u 22, and so on, if you wish to undo multiple actions.
20. The upcoming screenshots illustrate the sort of pattern we will implement along each of the walls. The VoxelSniper GUI retains the last settings used, so you can fill in all the Glass sections of the wall with Wool initially and then replace them using VoxelSniper once you are done. For now, just do this for the two longest outer walls. The following screenshot shows the 37 and 33 block length walls:
21. On the short wing end wall, we'll fill the whole area with White Wool, so let's type //set 35.
22. On the short side, make a 21 x 4 selection like the one shown in the following screenshot (top-left section), and stand directly on the block indicated by the player in that section. Do //copy and then move to the pole on the opposite side. Once you are on the corner column, as in the bottom-left section of the preceding screenshot, do //paste. To be sure that you are standing exactly on the right block, turn off flying (double-tap the Space bar), knock out the block below your feet, and make sure you fall down to the block below. Then jump up and replace the block.
23. Do the same for the other wing: select the wall section with the windows, repeat the process as you did for the previous wall, and then fill in the end wall with Wool blocks for now.
24. Add a wooden floor that is level with the three Wool blocks below the Spruce window frames. You can use the //set 5:1 command to fill in the large rectangular areas.
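For quick reference, here is a recap of the WorldEdit and VoxelSniper chat commands used in this recipe. The comments are annotations for the reader only, and the counts are the ones from the steps above; they will differ if your footprint is a different size.

```
//set 5:1      # fill the current selection with Spruce Wood Planks (block 5, data value 1)
//set 35       # fill the current selection with White Wool
//stack 17     # repeat the selection 17 times in the direction you are facing
//copy         # copy the selection, relative to where you are standing
//paste        # paste it, relative to your new position
//undo         # undo the last WorldEdit change (the selection itself is kept)
/u 5           # VoxelSniper: undo the last five VoxelSniper actions
```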

Nginx service

Packt
20 Oct 2015
15 min read
In this article by Clement Nedelcu, author of the book Nginx HTTP Server - Third Edition, we discuss the stages after having successfully built and installed Nginx. The default location for the output files is /usr/local/nginx.

Daemons and services
The next step is obviously to execute Nginx. However, before doing so, it's important to understand the nature of this application. There are two types of computer applications: those that require immediate user input, and thus run in the foreground, and those that do not, and thus run in the background. Nginx is of the latter type, often referred to as a daemon. Daemon names usually come with a trailing d; a couple of examples can be mentioned here: httpd, the HTTP server daemon, is the name given to Apache under several Linux distributions; named is the name server daemon; and crond is the task scheduler. As you will notice, though, this is not the case for Nginx. When started from the command line, a daemon immediately returns the prompt and, in most cases, does not even bother outputting data to the terminal. Consequently, when starting Nginx you will not see any text appear on the screen, and the prompt will return immediately. While this might seem startling, it is on the contrary a good sign: it means the daemon was started correctly and the configuration did not contain any errors.

User and group
It is of utmost importance to understand the process architecture of Nginx, and particularly the user and group its various processes run under. A very common source of trouble when setting up Nginx is invalid file access permissions: due to a user or group misconfiguration, you often end up getting 403 Forbidden HTTP errors because Nginx cannot access the requested files. There are two levels of processes with possibly different permission sets:
The Nginx master process: This should be started as root. In most Unix-like systems, processes started with the root account are allowed to open TCP sockets on any port, whereas other users can only open listening sockets on a port above 1024. If you do not start Nginx as root, standard ports such as 80 or 443 will not be accessible. Note that the user directive, which allows you to specify a different user and group for the worker processes, is not taken into consideration for the master process.
The Nginx worker processes: These are automatically spawned by the master process under the account you specified in the configuration file with the user directive. The configuration setting takes precedence over the configure switch you may have specified at compile time. If you did not specify either of those, the worker processes will be started as user nobody and group nobody (or nogroup, depending on your OS).

Nginx command-line switches
The Nginx binary accepts command-line arguments to perform various operations, among which is controlling the background processes. To get the full list of commands, you may invoke the help screen using the following commands:
[alex@example.com ~]$ cd /usr/local/nginx/sbin
[alex@example.com sbin]$ ./nginx -h
The next few sections will describe the purpose of these switches. Some allow you to control the daemon; some let you perform various operations on the application configuration.

Starting and stopping the daemon
You can start Nginx by running the Nginx binary without any switches.
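To tie the last two sections together, here is roughly what a first start looks like in practice. The process IDs and timestamps below are illustrative only, and the nobody account assumes that no user directive has been configured:

```
# Start as root so that Nginx can bind to port 80.
[root@example.com ~]# /usr/local/nginx/sbin/nginx

# The prompt returns immediately; check the processes instead.
[root@example.com ~]# ps -ef | grep [n]ginx
root      1234     1  0 10:00 ?  00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
nobody    1235  1234  0 10:00 ?  00:00:00 nginx: worker process
```

The master process belongs to root, while the worker process runs under the unprivileged account, which is exactly the split described above.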
If the daemon is already running, a message will show up indicating that a socket is already listening on the specified port:
[emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
[…]
[emerg]: still could not bind().
Beyond this point, you may control the daemon by stopping it, restarting it, or simply reloading its configuration. Controlling is done by sending signals to the process using the nginx -s command:
nginx -s stop: stops the daemon immediately (using the TERM signal)
nginx -s quit: stops the daemon gracefully (using the QUIT signal)
nginx -s reopen: reopens the log files
nginx -s reload: reloads the configuration
Note that when starting the daemon, stopping it, or performing any of the preceding operations, the configuration file is first parsed and verified. If the configuration is invalid, whatever command you have submitted will fail, even when trying to stop the daemon. In other words, in some cases you will not even be able to stop Nginx if the configuration file is invalid. An alternate way to terminate the process, in desperate cases only, is to use the kill or killall commands with root privileges:
[root@example.com ~]# killall nginx

Testing the configuration
As you can imagine, testing the validity of your configuration will become crucial if you constantly tweak your server setup. The slightest mistake in any of the configuration files can result in a loss of control over the service: you will then be unable to stop it via the regular init control commands and, obviously, it will refuse to start again. Consequently, the following command will be useful to you on many occasions; it allows you to check the syntax, validity, and integrity of your configuration:
[alex@example.com ~]$ /usr/local/nginx/sbin/nginx -t
The -t switch stands for test configuration. Nginx will parse the configuration anew and let you know whether it is valid or not. A valid configuration file does not necessarily mean Nginx will start, though, as there might be additional problems such as socket issues, invalid paths, or incorrect access permissions. Obviously, manipulating your configuration files while your server is in production is a dangerous thing to do and should be avoided when possible. The best practice, in this case, is to place your new configuration into a separate temporary file and run the test on that file. Nginx makes this possible by offering the -c switch:
[alex@example.com sbin]$ ./nginx -t -c /home/alex/test.conf
This command will parse /home/alex/test.conf and make sure it is a valid Nginx configuration file. When you are done, after making sure that your new file is valid, proceed to replacing your current configuration file and reload the server configuration:
[alex@example.com sbin]$ cp -i /home/alex/test.conf /usr/local/nginx/conf/nginx.conf
cp: overwrite 'nginx.conf'? yes
[alex@example.com sbin]$ ./nginx -s reload

Other switches
Another switch that might come in handy in many situations is -V. Not only does it tell you the current Nginx build version, but more importantly it also reminds you about the arguments that you used during the configuration step; in other words, the command switches that you passed to the configure script before compilation.
[alex@example.com sbin]$ ./nginx -V
nginx version: nginx/1.8.0 (Ubuntu)
built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04)
TLS SNI support enabled
configure arguments: --with-http_ssl_module
In this case, Nginx was configured with the --with-http_ssl_module switch only. Why is this so important?
Well, if you ever try to use a module that was not included with the configure script during the precompilation process, the directive enabling that module will result in a configuration error. Your first reaction will be to wonder where the syntax error comes from. Your second reaction will be to wonder whether you even built the module in the first place! Running nginx -V will answer this question. Additionally, the -g option lets you specify additional configuration directives, in case they were not included in the configuration file:
[alex@example.com sbin]$ ./nginx -g "timer_resolution 200ms;"

Adding Nginx as a system service
In this section, we will create a script that will transform the Nginx daemon into an actual system service. This will have two main outcomes: the daemon will be controllable using standard commands and, more importantly, it will automatically be launched on system startup and stopped on system shutdown.

System V scripts
Most Linux-based operating systems to date use a System V style init daemon. In other words, their startup process is managed by a daemon called init, which functions in a way that is inherited from the old System V Unix-based operating system. This daemon functions on the principle of runlevels, which represent the state of the computer. Here are the various runlevels and their meanings:
Runlevel 0: system is halted
Runlevel 1: single-user mode (rescue mode)
Runlevel 2: multiuser mode, without NFS support
Runlevel 3: full multiuser mode
Runlevel 4: not used
Runlevel 5: graphical interface mode
Runlevel 6: system reboot
You can manually initiate a runlevel transition: use the telinit 0 command to shut down your computer or telinit 6 to reboot it. For each runlevel transition, a set of services is executed. This is the key concept to understand here: when your computer is stopped, its runlevel is 0. When you turn it on, there will be a transition from runlevel 0 to the default startup runlevel. The default startup runlevel is defined by your own system configuration (in the /etc/inittab file) and the default value depends on the distribution you are using: Debian and Ubuntu use runlevel 2, Red Hat and Fedora use runlevel 3 or 5, CentOS and Gentoo use runlevel 3, and so on; the list is long. So, in summary, when you start your computer running CentOS, it performs a transition from runlevel 0 to runlevel 3. That transition consists of starting all services that are scheduled for runlevel 3. The question is how to schedule a service to be started at a specific runlevel. For each runlevel, there is a directory containing scripts to be executed. If you enter these directories (rc0.d, rc1.d, through rc6.d), you will not find actual files, but rather symbolic links referring to scripts located in the init.d directory. Service startup scripts will indeed be placed in init.d, and links will be created by tools that place them in the proper directories.

About init scripts
An init script, also known as a service startup script or even sysv script, is a shell script respecting a certain standard. The script controls a daemon application by responding to commands such as start, stop, and others, which are triggered at two levels. First, when the computer starts, if the service is scheduled to be started for the system runlevel, the init daemon will run the script with the start argument.
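To make the expected shape of such a script concrete, here is a deliberately minimal sketch of an init script for Nginx. It is not the complete script from the book's code bundle, which adds PID-file handling, proper exit codes, and distribution-specific helper functions; the path assumes the default /usr/local/nginx install prefix.

```sh
#!/bin/sh
# /etc/init.d/nginx - minimal sketch only, not the full script from the code bundle.

NGINX=/usr/local/nginx/sbin/nginx   # adjust to your actual install prefix

case "$1" in
  start)
    echo "Starting nginx"
    $NGINX
    ;;
  stop)
    echo "Stopping nginx"
    $NGINX -s quit
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  reload|force-reload)
    echo "Reloading nginx configuration"
    $NGINX -s reload
    ;;
  status)
    pidof nginx >/dev/null && echo "nginx is running" || echo "nginx is stopped"
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|reload|force-reload|status}"
    exit 1
    ;;
esac
```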
The other possibility is for you to manually execute the script by calling it from the shell:
[root@example.com ~]# service httpd start
Or, if your system does not come with the service command:
[root@example.com ~]# /etc/init.d/httpd start
The script must accept at least the start, stop, restart, force-reload, and status commands, as they will be used by the system to respectively start up, shut down, restart, forcefully reload the service, or inquire about its status. However, to enlarge your field of action as a system administrator, it is often interesting to provide further options, such as a reload argument to reload the service configuration, or a try-restart argument to stop and start the service again. Note that since service httpd start and /etc/init.d/httpd start essentially do the same thing, with the exception that the second command works on all operating systems, we will make no further mention of the service command and will exclusively use the /etc/init.d/ method.

Init script for Debian-based distributions
We will thus create a shell script to start and stop our Nginx daemon, and also to restart and reload it. The purpose here is not to discuss Linux shell script programming, so we will merely provide the source code of an existing init script, along with some comments to help you understand it. Due to differences in the format of init scripts from one distribution to another, we will look at two separate scripts here. The first one is meant for Debian-based distributions such as Debian, Ubuntu, Knoppix, and so forth. First, create a file called nginx with the text editor of your choice, and save it in the /etc/init.d/ directory (on some systems, /etc/init.d/ is actually a symbolic link to /etc/rc.d/init.d/). In the file you just created, insert the script provided in the code bundle supplied with this book. Make sure that you change the paths so that they correspond to your actual setup. You will need root permissions to save the script into the init.d directory. The complete init script for Debian-based distributions can be found in the code bundle.

Init script for Red Hat–based distributions
Due to the system tools, shell programming functions, and specific formatting that it requires, the preceding script is only compatible with Debian-based distributions. If your server is operated by a Red Hat–based distribution such as CentOS, Fedora, and many more, you will need an entirely different script. The complete init script for Red Hat–based distributions can be found in the code bundle.

Installing the script
Placing the file in the init.d directory does not complete our work. There are additional steps required to enable the service. First, make the script executable. So far, it is only a piece of text that the system refuses to run. Granting executable permissions to the script is done with the chmod command:
[root@example.com ~]# chmod +x /etc/init.d/nginx
Note that if you created the file as the root user, you will need to be logged in as root to change the file permissions. At this point, you should already be able to start the service using service nginx start or /etc/init.d/nginx start, as well as stopping, restarting, or reloading the service. The last step here will be to make it so that the script is automatically started at the proper runlevels. Unfortunately, doing this entirely depends on what operating system you are using.
We will cover the two most popular families: Debian, Ubuntu, or other Debian-based distributions, and Red Hat, Fedora, CentOS, or other Red Hat–derived systems.

Debian-based distributions
For a Debian-based distribution, a simple command will enable the init script for the system runlevel:
[root@example.com ~]# update-rc.d -f nginx defaults
This command will create links in the default system runlevel folders. For the reboot and shutdown runlevels, the script will be executed with the stop argument; for all other runlevels, the script will be executed with start. You can now restart your system and see your Nginx service being launched during the boot sequence.

Red Hat–based distributions
For the Red Hat–based systems family, the command differs, but you get an additional tool to manage system startup. Adding the service can be done via the following command:
[root@example.com ~]# chkconfig nginx on
Once that is done, you can verify the runlevels for the service:
[root@example.com ~]# chkconfig --list nginx
nginx 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Another tool that will be useful to you for managing system services is ntsysv. It lists all services scheduled to be executed on system startup and allows you to enable or disable them at will. The ntsysv tool requires root privileges to be executed. Note that prior to using ntsysv, you must first run the chkconfig nginx on command, otherwise Nginx will not appear in the list of services.

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed to you directly.

NGINX Plus
Since mid-2013, NGINX, Inc., the company behind the Nginx project, has also offered a paid subscription called NGINX Plus. The announcement came as a surprise for the open source community, but several companies quickly jumped on the bandwagon and reported amazing improvements in terms of performance and scalability after using NGINX Plus. The official announcement read:
"NGINX, Inc., the high performance web company, today announced the availability of NGINX Plus, a fully-supported version of the popular NGINX open source software complete with advanced features and offered with professional services. The product is developed and supported by the core engineering team at Nginx Inc., and is available immediately on a subscription basis. As business requirements continue to evolve rapidly, such as the shift to mobile and the explosion of dynamic content on the Web, CIOs are continuously looking for opportunities to increase application performance and development agility, while reducing dependencies on their infrastructure. NGINX Plus provides a flexible, scalable, uniformly applicable solution that was purpose built for these modern, distributed application architectures."
Considering the pricing plans ($1,500 per year per instance) and the additional features made available, this platform is clearly aimed at large corporations looking to integrate Nginx into their global architecture seamlessly and effortlessly. Professional support from the Nginx team is included, and discounts can be offered for multiple-instance subscriptions. This book covers the open source version of Nginx only and does not detail the advanced functionality offered by NGINX Plus. For more information about the paid subscription, take a look at http://www.nginx.com.
Summary
From this point on, Nginx is installed on your server and automatically starts with the system. Your web server is functional, though it does not yet provide the most basic functionality: serving a website. The first step towards hosting a website will be to prepare a suitable configuration file.

Resources for Article:
Further resources on this subject:
Getting Started with Nginx [article]
Fine-tune the NGINX Configuration [article]
Nginx proxy module [article]

Microsoft SQL Server 2012 Performance Tuning: Implementing Physical Database Structure

Packt
20 Jul 2012
8 min read
In this article we will cover:
Configuring a data file and log file on multiple physical disks
Using files and filegroups
Moving an existing large table to a separate physical disk
Moving non-clustered indexes to a separate physical disk
Configuring the tempdb database on a separate physical disk

Configuring data file and log file on multiple physical disks
If you know the exact difference between the ways in which data files and log files of a database are accessed, you can understand why you should place data files and log files on separate physical disks for better performance.
The data file of a database, which is normally a file with a .mdf or .ndf extension, is used to store the actual data in the database. The data is stored in pages that are 8 KB in size. When particular data is queried by the user, SQL Server reads the required data pages containing the requested data from the data file on disk into memory. In case SQL Server needs to make any modification to the existing data, it reads the required data pages into the buffer cache, updates those cached data pages in memory, writes the modifications to the log file when the transaction is committed, and then writes the updated data pages back to disk when the checkpoint operation is performed. SQL Server performs configurable checkpoint operations at regular intervals. In-memory modified data pages are called dirty pages; when a checkpoint is performed, these dirty pages are permanently written to disk.
The log file is used to record any change that is made to the database. It is intended for recovery of the database in case of disaster or failure. Because a log file is intended to record changes, it is not designed to be read randomly, as a data file is. Rather, it is designed to be written to and accessed in a sequential manner.
SQL Server is designed to handle and process multiple I/O requests simultaneously, if we have enough hardware resources. But even though SQL Server is capable of handling simultaneous I/O requests in parallel, it may face disk contention when reading large amounts of data from data files and writing a large number of transaction log records to log files in parallel for two different requests, if the data files and log files reside on the same physical disk. However, if the data file and log file are located on separate physical disks, SQL Server gracefully handles and processes such requests in parallel. Because simultaneous requests for reading data and writing transaction logs are commonly expected in an OLTP database environment, placing data files and log files on separate physical drives greatly improves the performance of the database.
Let's suppose that you are a DBA and, in your organization, you maintain and administer a production database called the AdventureWorks2012 database. The database was created/installed by an inexperienced team and has been residing in the default location for SQL Server. You are required to separate the data files and log files for this database and place them on different physical disks to achieve maximum I/O performance. How would you perform this task? The goal of this recipe is to teach you how to separate the data files and log files of an existing database to improve the I/O response time and database performance.
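Before diving into the recipe, it can be handy to check which databases on the instance currently keep their data and log files on the same drive. The following query is not part of the recipe; it is just a quick sketch against the sys.master_files catalog view, and it only compares drive letters, so mount points or UNC paths would need a different check:

```sql
-- Sketch: list databases whose data and log files share a drive letter.
SELECT d.name AS database_name,
       UPPER(LEFT(mf_data.physical_name, 1)) AS data_drive,
       UPPER(LEFT(mf_log.physical_name, 1))  AS log_drive
FROM sys.databases AS d
JOIN sys.master_files AS mf_data
     ON mf_data.database_id = d.database_id AND mf_data.type_desc = 'ROWS'
JOIN sys.master_files AS mf_log
     ON mf_log.database_id = d.database_id AND mf_log.type_desc = 'LOG'
WHERE UPPER(LEFT(mf_data.physical_name, 1)) = UPPER(LEFT(mf_log.physical_name, 1));
```

Any database returned by this query is a candidate for the separation technique described in this recipe.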
Getting ready
This recipe refers to the following physical disk volumes:
E drive: to store the data file
L drive: to store the log file
In this article, wherever it says "separate disk volume" or "separate drive", consider it a separate physical drive and not a logical partitioned drive.
The following are the prerequisites for completing this recipe:
An instance of SQL Server 2012 Developer or Enterprise Evaluation edition.
The sample AdventureWorks2012 database on the instance of SQL Server. For more details on how to install the AdventureWorks2012 database, please refer to the Preface of this book.
The E drive should be available on your machine.
The L drive should be available on your machine.

How to do it...
The following are the steps you need to perform for this recipe:
1. Start SQL Server Management Studio and connect to SQL Server.
2. In the query window, type and execute the following script to verify the existing path for the data files and log files of the AdventureWorks2012 database:
--Switch the current database
--context to AdventureWorks2012
USE AdventureWorks2012
GO
--Examine the current
--location of the database.
SELECT physical_name
FROM sys.database_files
GO
3. Assuming that the AdventureWorks2012 database resides in its default location, and depending upon your SQL Server installation path, you may see a result in the output of the previous query similar to the one given here:
4. Now, execute the following query to bring the database offline:
USE master
GO
--Bring database offline
ALTER DATABASE AdventureWorks2012
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
5. Once the database is offline, you can detach it without any problem. In Object Explorer, right-click on AdventureWorks2012, select Tasks, and then Detach…, as shown in the following screenshot:
6. This step brings up the Detach Database dialog box, as shown in the following screenshot. Press the OK button in this dialog box. This will detach the AdventureWorks2012 database from the SQL Server instance and it will no longer appear in Object Explorer:
7. Create the following two directories to place the data files (.mdf files) and log files (.ldf files), respectively, for the AdventureWorks2012 database on different physical disks:
E:\SQL_Data
L:\SQL_Log
8. Now, using Windows Explorer, move the AdventureWorks2012_Data.mdf and AdventureWorks2012_log.ldf database files manually from their original location to their respective new directories. The following paths should be the respective destinations:
E:\SQL_Data\AdventureWorks2012_Data.mdf
L:\SQL_Log\AdventureWorks2012_Log.ldf
9. After the data and log files are copied to their new locations, we will attach them and bring our AdventureWorks2012 database back online. To do this, in Object Explorer, right-click on the Databases node and select Attach…. You will see the following Attach Databases dialog box. In this dialog box, click on the Add…> button:
10. The previous step opens the Locate Database Files dialog box. In this dialog box, locate the .mdf data file E:\SQL_Data\AdventureWorks2012_Data.mdf and click on the OK button, as shown in the following screenshot:
11. After locating the .mdf data file, the Attach Databases dialog box should look similar to the following screenshot. Note that the log file (.ldf file) could not be located at this stage and there is a Not Found message against AdventureWorks2012_log.ldf, under the AdventureWorks2012 database details: section.
This happens because we have moved the log file to our new location, L:\SQL_Log, and SQL Server tries to find it in its default location:
12. To locate the log file, click on the … button in the Current File Path column for the AdventureWorks2012_log.ldf log file. This will bring up the Locate Database Files dialog box. Locate the file L:\SQL_Log\AdventureWorks2012_log.ldf and click on the OK button. Refer to the following screenshot:
13. To verify the new location of the AdventureWorks2012 database, run the following query in SSMS:
--Switch the current database
--context to AdventureWorks2012
USE AdventureWorks2012
GO
--Verify the new location of
--the database.
SELECT physical_name, name
FROM sys.database_files
GO
14. In the query result, examine the new locations of the data files and log files for the AdventureWorks2012 database; see the following screenshot:

How it works...
In this recipe, we first queried the sys.database_files system catalog view to verify the current location of the AdventureWorks2012 database. Because we wanted to move the .mdf and .ldf files to new locations, we had to bring the database offline. We brought the database offline with the ALTER DATABASE command. Note that, in the ALTER DATABASE command, we included the ROLLBACK IMMEDIATE option. This rolls back any transactions that are not completed, and current connections to the AdventureWorks2012 database are closed. After bringing the database offline, we detached the AdventureWorks2012 database from the instance of SQL Server.
You cannot move a database file to a new location while the database is online. If a database is to be moved, it must not be in use by SQL Server. In order to move a database, you can either stop the SQL Server service or bring the database offline. Bringing the database offline is the preferable option, because stopping the SQL Server service stops the functioning of the whole SQL Server instance. Alternatively, you can also select the Drop Connections checkbox in the Detach Database dialog box, which does not require bringing the database offline.
We then created two new directories, E:\SQL_Data and L:\SQL_Log, to hold the data and log files for AdventureWorks2012, and moved AdventureWorks2012_Data.mdf and AdventureWorks2012_Log.ldf there. We then attached the AdventureWorks2012 database by attaching the .mdf and .ldf files from their new locations. Finally, we verified the new location of the database by querying sys.database_files.
You can script your Attach Database and Detach Database actions by clicking on the Script button in the wizard. This allows you to save and re-use the script for future purposes.
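If you prefer to keep the whole operation in T-SQL rather than the wizard, the sequence below is a rough scripted equivalent, not the script generated by SSMS itself. The paths match the E:\SQL_Data and L:\SQL_Log directories used in this recipe, and the file move still has to happen in Windows Explorer (or from a command prompt) between the detach and the attach:

```sql
-- Take the database offline, rolling back open transactions.
USE master;
GO
ALTER DATABASE AdventureWorks2012
SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO

-- Detach the database from the instance.
EXEC sp_detach_db @dbname = N'AdventureWorks2012';
GO

-- Move the .mdf and .ldf files to E:\SQL_Data and L:\SQL_Log at this point,
-- then re-attach the database from its new locations.
CREATE DATABASE AdventureWorks2012
ON (FILENAME = N'E:\SQL_Data\AdventureWorks2012_Data.mdf'),
   (FILENAME = N'L:\SQL_Log\AdventureWorks2012_Log.ldf')
FOR ATTACH;
GO
```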