How-To Tutorials - Servers

95 Articles

Creating a VB.NET application with EnterpriseDB

Packt
27 Oct 2009
5 min read
Overview of the tutorial

You will begin by creating an ODBC data source for accessing data on the Postgres server. Using the User DSN you create, you will connect to the Postgres server data. You will then derive a dataset from a table and display it in a DataGridView on a form in a Windows application. We start with the Categories table that was migrated from MS SQL Server 2008. This table, with all of its columns, is shown in Postgres Studio in the next figure.

Creating the ODBC DSN

Navigate to Start | Control Panel | Administrative Tools | Data Sources (ODBC) to bring up the ODBC Data Source Administrator window. Click on Add.... In the Create New Data Source window, scroll down to EnterpriseDB 8.2 under the list heading Name, as shown. Click Finish. The EnterpriseDB ODBC Driver page gets displayed. Accept the default name for the Data Source (DSN) or, if you prefer, change the name; here the default is accepted. The Database, Server, User Name, Port, and Password should all be available to you [Read article 1]. If you click on the Datasource option button, a window with two pages is displayed. Make no changes to the pages and accept the defaults, but make sure you review them. Click OK and you will be back in the EnterpriseDB Driver window. If you click on the Global button, the Global Settings window gets displayed (not shown); these are logging options, as the page describes. Click Cancel to close the Global Settings window. Click on the Test button and verify that the connection was successful. Click on the Save button and save the DSN under the list heading User DSN. The DSN EnterpriseDB now appears in the list of DSNs, as shown here.

Create a Windows Forms application and establish a connection to Postgres

Open Visual Studio 2008 from its shortcut. Click File | New | Project... to open the New Project window. Choose a Windows Forms project targeting Framework 2.0 (besides Framework 2.0, you can also create projects for other framework versions in Visual Studio 2008). In the Server Explorer window, double-click the Connection icon. This brings up the Add Connection window. Click on the Change... button to display the Change Data Source window. Scroll up and select Microsoft ODBC Data Source. Click OK. Click on the drop-down handle for the option Use user or system data source name and choose the EnterpriseDB DSN you created earlier. Insert the User Name and Password and click on the Test Connection button. You should get a connection succeeded message. Click OK on the message screen as well as on the Add Connection window. The connection appears in the Server Explorer in Visual Studio 2008.

Displaying data from the table

Drag and drop a DataGridView from under Data in the Toolbox onto the form (shown with the SmartTasks handle clicked). Click on the Choose Data Source handle to display a drop-down menu, and click on Add Project Data Source at the bottom. This displays the Choose a Data Source Type page of the Data Source Configuration Wizard. Accept the default data source type and click Next. In the Choose Your Data Connection page of the wizard, choose ODBC.localhost.PGNorthwind from the drop-down list. Click Next on the page that gets displayed and accept the default to save the connection string to the application configuration file. Click Next. In the Choose Your Database Objects page, expand Tables and choose the categories table. The default dataset name can be changed; here the default is accepted.
Click Finish. The DataGridView on Form1 gets displayed with two columns and a row, but it can be extended to the right using the drag handles to reveal all four columns. Three other objects, PGNorthwindDataSet, CategoriesBindingSource, and CategoriesTableAdapter, are also added to the control tray, and the PGNorthwindDataSet.xsd file gets added to the project. Now build the project and run it. Form1 is displayed with the data from the PGNorthwind database. In the design view of the form, a few more tasks have been added: here you can Add Query... to filter the data displayed, Edit the details of the columns, and choose to add a column if you had chosen fewer columns from the original table. For example, Edit Columns brings up its editor, where you can make changes to the styles if you desire to do so. The next figure shows a slightly modified form produced by editing the columns and resizing the cell heights.

Summary

A step-by-step procedure was described to display the data stored in a table in the Postgres database in a Windows Forms application. The procedure to create an ODBC DSN was also described. Using this ODBC DSN, a connection was established to the Postgres server in Visual Studio 2008.
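As a closing reference, the wizard's "save the connection string to the application configuration file" step typically produces an entry like the following in the project's app.config. This is only a sketch: the connection-string name and the uid value are assumptions based on the defaults used in this tutorial, and they will match whatever you chose in the wizard and in the ODBC dialog.

<configuration>
  <connectionStrings>
    <!-- name and uid below are illustrative assumptions, not values taken from the wizard -->
    <add name="PGNorthwindConnectionString"
         connectionString="Dsn=EnterpriseDB;uid=enterprisedb"
         providerName="System.Data.Odbc" />
  </connectionStrings>
</configuration>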


Nginx HTTP Server FAQs

Packt
25 Mar 2011
4 min read
Q: What is Nginx and how is it pronounced?
A: Nginx is a lightweight HTTP server originating from Russia; it is pronounced "engine X".

Q: Where can one download Nginx and find resources related to it?
A: Although Nginx is a relatively new and growing project, there is already a good number of resources available on the World Wide Web (WWW) and an active community of administrators and developers. The official website, at www.nginx.net, is rather simple and does not provide much information or documentation other than links for downloading the latest versions. On the contrary, you will find a lot of interesting documentation and examples on the official wiki, wiki.nginx.org.

Q: Which different versions are currently available?
A: There are currently three version branches on the project:
Stable version: This version is usually recommended, as it is approved by both developers and users, but it is usually a little behind the development version. The current latest stable version is 0.7.66, released on June 07, 2010.
Development version: This is the latest version available for download. Although it is generally solid enough to be installed on production servers, you may run into the occasional bug. As such, the stable version is recommended, even though you do not get to use the latest features. The current latest development version is 0.8.40, released on June 07, 2010.
Legacy version: If for some reason you are interested in looking at the older versions, you will find two of them: a legacy version and a legacy stable version, respectively the 0.5.38 and 0.6.39 releases.

Q: Are the development versions stable enough to be used on production servers?
A: Cliff Wells, founder and maintainer of the nginx.org wiki website and community, believes so: "I generally use and recommend the latest development version. It's only bit me once!". Early adopters rarely report critical problems. It is up to you to select the version you will be using on your server. The Nginx developers have decided to maintain backwards compatibility in new versions. You can find more information on version changes, new additions, and bug fixes in the dedicated change log page on the official website.

Q: How can one upgrade Nginx without losing a single connection?
A: There are many situations where you need to replace the Nginx binary, for example, when you compile a new version and wish to put it in production or simply after having enabled new modules and rebuilt the application. What most administrators would do in this situation is stop the server, copy the new binary over the old one, and start Nginx again. While this is not considered to be a problem for most websites, there may be some cases where uptime is critical and connection losses should be avoided at all costs. Fortunately, Nginx embeds a mechanism allowing you to switch binaries with uninterrupted uptime: zero percent request loss is guaranteed if you follow these steps carefully:
1. Replace the old Nginx binary (by default, /usr/local/nginx/sbin/nginx) with the new one.
2. Find the pid of the Nginx master process, for example, with ps x | grep nginx | grep master or by looking at the value found in the pid file.
3. Send a USR2 (12) signal to the master process: kill -USR2 ***, replacing *** with the pid found in step 2. This will initiate the upgrade by renaming the old .pid file and running the new binary.
4. Send a WINCH (28) signal to the old master process: kill -WINCH ***, replacing *** with the pid found in step 2. This will engage a graceful shutdown of the old worker processes.
5. Make sure that all the old worker processes are terminated, and then send a QUIT signal to the old master process: kill -QUIT ***, replacing *** with the pid found in step 2.
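Putting those steps together, the upgrade can be scripted. The following is a minimal sketch that assumes the default installation prefix /usr/local/nginx, the default pid file location under logs/, and a freshly built binary sitting in objs/nginx; adjust the paths for your setup.

# 1. Replace the old binary with the newly built one
cp objs/nginx /usr/local/nginx/sbin/nginx

# 2. Grab the pid of the running master before the switch
OLDPID=$(cat /usr/local/nginx/logs/nginx.pid)

# 3. Start the new master and workers (the old pid file is renamed with an .oldbin suffix)
kill -USR2 "$OLDPID"

# 4. Gracefully shut down the old worker processes
kill -WINCH "$OLDPID"

# 5. Once the old workers have exited, shut down the old master
kill -QUIT "$OLDPID"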


Connecting to a database

Packt
28 Nov 2014
20 min read
In this article by Christopher Ritchie, the author of WildFly Configuration, Deployment, and Administration, Second Edition, you will learn to configure enterprise services and components, such as transactions, connection pools, and Enterprise JavaBeans.

To allow your application to connect to a database, you will need to configure your server by adding a datasource. Upon server startup, each datasource is prepopulated with a pool of database connections. Applications acquire a database connection from the pool by doing a JNDI lookup and then calling getConnection(). Take a look at the following code:

Connection result = null;
try {
    Context initialContext = new InitialContext();
    DataSource datasource = (DataSource) initialContext.lookup("java:/MySqlDS");
    result = datasource.getConnection();
} catch (Exception ex) {
    log("Cannot get connection: " + ex);
}

After the connection has been used, you should always call connection.close() as soon as possible. This frees the connection and allows it to be returned to the connection pool, ready for other applications or processes to use.

Releases prior to JBoss AS 7 required a datasource configuration file (*-ds.xml) to be deployed with the application. Ever since the release of JBoss AS 7, this approach has no longer been mandatory due to the modular nature of the application server. Out of the box, the application server ships with the H2 open source database engine (http://www.h2database.com), which, because of its small footprint and browser-based console, is ideal for testing purposes. However, a real-world application requires an industry-standard database, such as the Oracle database or MySQL. In the following section, we will show you how to configure a datasource for the MySQL database. Any database configuration requires a two-step procedure, which is as follows:
Installing the JDBC driver
Adding the datasource to your configuration
Let's look at each step in detail.

Installing the JDBC driver

In WildFly's modular server architecture, you have a couple of ways to install your JDBC driver: you can install it either as a module or as a deployment unit. The first and recommended approach is to install the driver as a module; the second is faster, but it has various limitations, which we will cover shortly.

The first step in installing a new module is to create the directory structure under the modules folder. The actual path for the module is JBOSS_HOME/modules/<module>/main. The main folder is where all the key module components are installed, namely, the driver and the module.xml file. So, next we need to add the following units:
JBOSS_HOME/modules/com/mysql/main/mysql-connector-java-5.1.30-bin.jar
JBOSS_HOME/modules/com/mysql/main/module.xml

The MySQL JDBC driver used in this example, also known as Connector/J, can be downloaded for free from the MySQL site (http://dev.mysql.com/downloads/connector/j/). At the time of writing, the latest version is 5.1.30. The last thing to do is to create the module.xml file. This file contains the actual module definition. It is important to make sure that the module name (com.mysql) corresponds to the module attribute defined in your datasource.
You must also state the path to the JDBC driver resource and finally add the module dependencies, as shown in the following code:

<module name="com.mysql">
    <resources>
        <resource-root path="mysql-connector-java-5.1.30-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>

You will notice that there is a directory structure already within the modules folder. All the system libraries are housed inside the system/layers/base directory. Your custom modules should be placed directly inside the modules folder and not with the system modules.

Adding a local datasource

Once the JDBC driver is installed, you need to configure the datasource within the application server's configuration file. In WildFly, you can configure two kinds of datasources, local datasources and xa-datasources, which are distinguishable by the element name in the configuration file. A local datasource does not support two-phase commit and uses a java.sql.Driver. On the other hand, an xa-datasource supports two-phase commit and uses a javax.sql.XADataSource.

Adding a datasource definition can be completed by adding the definition within the server configuration file or by using the management interfaces. The management interfaces are the recommended way, as they will accurately update the configuration for you, which means that you do not need to worry about getting the correct syntax. In this article, however, we are going to add the datasource by modifying the server configuration file directly. Although this is not the recommended approach, it will allow you to get used to the syntax and layout of the file; later in the article, we will also look at using the management tools.

Here is a sample MySQL datasource configuration that you can copy into your datasources subsystem section within the standalone.xml configuration file:

<datasources>
  <datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
      enabled="true" jta="true" use-java-context="true" use-ccm="true">
    <connection-url>
      jdbc:mysql://localhost:3306/MyDB
    </connection-url>
    <driver>mysql</driver>
    <pool />
    <security>
      <user-name>jboss</user-name>
      <password>jboss</password>
    </security>
    <statement/>
    <timeout>
      <idle-timeout-minutes>0</idle-timeout-minutes>
      <query-timeout>600</query-timeout>
    </timeout>
  </datasource>
  <drivers>
    <driver name="mysql" module="com.mysql"/>
  </drivers>
</datasources>

As you can see, the configuration file uses the same XML schema definition as the earlier *-ds.xml files, so it will not be difficult to migrate to WildFly from previous releases. In WildFly, it's mandatory that the datasource is bound into the java:/ or java:jboss/ JNDI namespace. Let's take a look at the various elements of this file:
connection-url: This element is used to define the connection path to the database.
driver: This element is used to define the JDBC driver class.
pool: This element is used to define the JDBC connection pool properties. In this case, we are going to leave the default values.
security: This element is used to configure the connection credentials.
statement: This element is added just as a placeholder for statement-caching options.
timeout: This element is optional and contains a set of other elements, such as query-timeout, which is a static configuration of the maximum seconds before a query times out. The included idle-timeout-minutes element indicates the maximum time a connection may be idle before being closed; setting it to 0 disables it, and the default is 15 minutes.

Configuring the connection pool

One key aspect of the datasource configuration is the pool element. You can use connection pooling without modifying any of the existing WildFly configuration, as, without modification, WildFly will use default settings. If you want to customize the pooling configuration, for example, to change the pool size or change the types of connections that are pooled, you will need to learn how to modify the configuration file. Here's an example of a pool configuration, which can be added to your datasource configuration:

<pool>
    <min-pool-size>5</min-pool-size>
    <max-pool-size>10</max-pool-size>
    <prefill>true</prefill>
    <use-strict-min>true</use-strict-min>
    <flush-strategy>FailingConnectionOnly</flush-strategy>
</pool>

The attributes included in the pool configuration are actually borrowed from earlier releases, so we include them here for your reference:
initial-pool-size: The initial number of connections a pool should hold (the default is 0).
min-pool-size: The minimum number of connections in the pool (the default is 0).
max-pool-size: The maximum number of connections in the pool (the default is 20).
prefill: This attempts to prefill the connection pool to the minimum number of connections.
use-strict-min: This determines whether idle connections below min-pool-size should be closed.
allow-multiple-users: This determines whether multiple users can access the datasource through the getConnection method. This has changed slightly in WildFly: the line <allow-multiple-users>true</allow-multiple-users> is required, whereas in JBoss AS 7 the empty element <allow-multiple-users/> was used.
capacity: This specifies the capacity policies for the pool, either incrementer or decrementer.
connection-listener: Here, you can specify an org.jboss.jca.adapters.jdbc.spi.listener.ConnectionListener that allows you to listen for connection callbacks, such as activation and passivation.
flush-strategy: This specifies how the pool should be flushed in the event of an error (the default is FailingConnectionOnly).

Configuring the statement cache

For each connection within a connection pool, the WildFly server is able to create a statement cache. When a prepared statement or callable statement is used, WildFly will cache the statement so that it can be reused. In order to activate the statement cache, you have to specify a value greater than 0 within the prepared-statement-cache-size element. Take a look at the following code:

<statement>
    <track-statements>true</track-statements>
    <prepared-statement-cache-size>10</prepared-statement-cache-size>
    <share-prepared-statements/>
</statement>

Notice that we have also set track-statements to true. This will enable automatic closing of statements and ResultSets, which is important if you want to use prepared statement caching and/or want to prevent cursor leaks. The last element, share-prepared-statements, can only be used when the prepared statement cache is enabled. This property determines whether two requests in the same transaction should return the same statement (the default is false).
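The same pool settings can also be changed at runtime through the management CLI rather than by editing standalone.xml. The following is a minimal sketch, assuming a running standalone server and that the datasource is registered under the name MySqlDS_Pool, as in the earlier example; some attribute changes only take effect after a reload.

# run these from the management CLI (started with ./jboss-cli.sh --connect)
/subsystem=datasources/data-source=MySqlDS_Pool:write-attribute(name=min-pool-size, value=5)
/subsystem=datasources/data-source=MySqlDS_Pool:write-attribute(name=max-pool-size, value=10)
# reload the server so the new values are applied
:reload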
Adding an xa-datasource

Adding an xa-datasource requires some modification to the datasource configuration: the xa-datasource is configured within its own xa-datasource element inside the datasources section, and you will also need to specify the xa-datasource class within the driver element. In the following code, we add a configuration for our MySQL JDBC driver, which will be used to set up an xa-datasource:

<datasources>
  <xa-datasource jndi-name="java:/XAMySqlDS" pool-name="MySqlDS_Pool"
      enabled="true" use-java-context="true" use-ccm="true">
    <xa-datasource-property name="URL">
      jdbc:mysql://localhost:3306/MyDB
    </xa-datasource-property>
    <xa-datasource-property name="User">jboss</xa-datasource-property>
    <xa-datasource-property name="Password">jboss</xa-datasource-property>
    <driver>mysql-xa</driver>
  </xa-datasource>
  <drivers>
    <driver name="mysql-xa" module="com.mysql">
      <xa-datasource-class>
        com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
      </xa-datasource-class>
    </driver>
  </drivers>
</datasources>

Datasource versus xa-datasource

You should use an xa-datasource in cases where a single transaction spans multiple datasources, for example, if a method consumes a Java Message Service (JMS) message and updates a Java Persistence API (JPA) entity.

Installing the driver as a deployment unit

In the WildFly application server, every library is a module. Thus, simply deploying the JDBC driver to the application server will trigger its installation. If the JDBC driver consists of more than a single JAR file, you will not be able to install the driver as a deployment unit; in this case, you will have to install the driver as a core module. To install the database driver as a deployment unit, simply copy the mysql-connector-java-5.1.30-bin.jar driver into the JBOSS_HOME/standalone/deployments folder of your installation. Once you have deployed your JDBC driver, you still need to add the datasource to your server configuration file. The simplest way to do this is to paste the following datasource definition into the configuration file:

<datasource jndi-name="java:/MySqlDS" pool-name="MySqlDS_Pool"
    enabled="true" jta="true" use-java-context="true" use-ccm="true">
  <connection-url>
    jdbc:mysql://localhost:3306/MyDB
  </connection-url>
  <driver>mysql-connector-java-5.1.30-bin.jar</driver>
  <pool />
  <security>
    <user-name>jboss</user-name>
    <password>jboss</password>
  </security>
</datasource>

Alternatively, you can use the command-line interface (CLI) or the web administration console to achieve the same result.

What about domain deployment? In this article, we are discussing the configuration of standalone servers. The services can also be configured in domain servers. Domain servers, however, don't have a specified folder scanned for deployment; rather, the management interfaces are used to inject resources into the domain.

Choosing the right driver deployment strategy

At this point, you might wonder about the best practice for deploying the JDBC driver. Installing the driver as a deployment unit is a handy shortcut; however, it can limit its usage. Firstly, it requires a JDBC 4-compliant driver. Deploying a non-JDBC-4-compliant driver is possible, but it requires a simple patching procedure: create a META-INF/services structure containing a java.sql.Driver file whose content is the driver class name.
For example, let's suppose you have to patch a MySQL driver: the content of the file will be com.mysql.jdbc.Driver. Once you have created your structure, you can package your JDBC driver with any zipping utility or with the jar command:

jar -uf <your-jdbc-driver.jar> META-INF/services/java.sql.Driver

Most current JDBC drivers are compliant with JDBC 4, although, curiously, not all are recognized as such by the application server. The following list describes some of the most used drivers and their JDBC compliance:
MySQL mysql-connector-java-5.1.30-bin.jar: JDBC 4 compliant (though not recognized as compliant by WildFly); contains java.sql.Driver.
PostgreSQL postgresql-9.3-1101.jdbc4.jar: JDBC 4 compliant (though not recognized as compliant by WildFly); contains java.sql.Driver.
Oracle ojdbc6.jar/ojdbc5.jar: JDBC 4 compliant; contains java.sql.Driver.
Oracle ojdbc4.jar: not JDBC 4 compliant; does not contain java.sql.Driver.

As you can see, the most notable exception in the list is the older Oracle ojdbc4.jar, which is not compliant with JDBC 4 and does not contain the driver information in META-INF/services/java.sql.Driver.

The second issue with driver deployment is related to the specific case of xa-datasources. Installing the driver as a deployment means that the application server by itself cannot deduce the information about the xa-datasource class used in the driver. Since this information is not contained inside META-INF/services, you are forced to specify the xa-datasource class for each xa-datasource you are going to create. When you install a driver as a module, the xa-datasource class information can be shared by all the installed datasources:

<driver name="mysql-xa" module="com.mysql">
  <xa-datasource-class>
    com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
  </xa-datasource-class>
</driver>

So, if you are not too limited by these issues, installing the driver as a deployment is a handy shortcut that can be used in your development environment. For a production environment, it is recommended that you install the driver as a static module.

Configuring a datasource programmatically

After installing your driver, you may want to limit the amount of application configuration in the server file. This can be done by configuring your datasource programmatically, an option that requires zero modification to your configuration file and therefore means greater application portability. The support for configuring a datasource programmatically is one of the cool features of Java EE, and it can be achieved by using the @DataSourceDefinition annotation, as follows:

@DataSourceDefinition(name = "java:/OracleDS",
    className = "oracle.jdbc.OracleDriver",
    portNumber = 1521,
    serverName = "192.168.1.1",
    databaseName = "OracleSID",
    user = "scott",
    password = "tiger",
    properties = {"createDatabase=create"})
@Singleton
public class DataSourceEJB {
    @Resource(lookup = "java:/OracleDS")
    private DataSource ds;
}

In this example, we defined a datasource for an Oracle database. It's important to note that, when configuring a datasource programmatically, you actually bypass JCA, which proxies requests between the client and the connection pool. The obvious advantage of this approach is that you can move your application from one application server to another without the need to reconfigure its datasources. On the other hand, by modifying the datasource within the configuration file, you will be able to utilize the full benefits of the application server, many of which are required for enterprise applications.
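As mentioned above, the management CLI can also create the local datasource without editing standalone.xml by hand. The following is a minimal sketch, assuming a running standalone server and the module-based mysql driver installed earlier; the names and credentials simply mirror the XML example.

# from the management CLI (started with ./jboss-cli.sh --connect)
data-source add --name=MySqlDS_Pool \
    --jndi-name=java:/MySqlDS \
    --driver-name=mysql \
    --connection-url=jdbc:mysql://localhost:3306/MyDB \
    --user-name=jboss \
    --password=jboss

# verify that the new datasource can reach the database
/subsystem=datasources/data-source=MySqlDS_Pool:test-connection-in-pool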
Configuring the Enterprise JavaBeans container

The Enterprise JavaBeans (EJB) container is a fundamental part of the Java Enterprise architecture. The EJB container provides the environment used to host and manage the EJB components deployed in it. The container is responsible for providing a standard set of services, including caching, concurrency, persistence, security, transaction management, and locking services. The container also provides distributed access and lookup functions for hosted components, and it intercepts all method invocations on hosted components to enforce declarative security and transaction contexts. You will be able to deploy the full set of EJB components within WildFly:

Stateless session bean (SLSB): SLSBs are objects whose instances have no conversational state. This means that all bean instances are equivalent when they are not servicing a client.
Stateful session bean (SFSB): SFSBs support conversational services with tightly coupled clients. A stateful session bean accomplishes a task for a particular client. It maintains state for the duration of a client session; after session completion, the state is not retained.
Message-driven bean (MDB): MDBs are a kind of enterprise bean that is able to asynchronously process messages sent by any JMS producer.
Singleton EJB: This is essentially similar to a stateless session bean; however, it uses a single instance to serve the client requests, so you are guaranteed to use the same instance across invocations. Singletons can use a set of events with a richer life cycle and a stricter locking policy to control concurrent access to the instance.
No-interface EJB: This is just another view of the standard session bean, except that local clients do not require a separate interface, that is, all public methods of the bean class are automatically exposed to the caller. Interfaces should only be used in EJB 3.x if you have multiple implementations.
Asynchronous EJB: These are able to process client requests asynchronously just like MDBs, except that they expose a typed interface and follow a more complex approach to processing client requests, which are composed of fire-and-forget asynchronous void methods, invoked by the client, and retrieve-result-later asynchronous methods having a Future<?> return type.

EJB components that don't keep conversational state (SLSB and MDB) can optionally be configured to emit timed notifications.

Configuring the EJB components

Now that we have briefly outlined the basic types of EJB, we will look at the specific details of the application server configuration. This comprises the following components:
The SLSB configuration
The SFSB configuration
The MDB configuration
The Timer service configuration
Let's see them all in detail.

Configuring the stateless session beans

EJBs are configured within the ejb3 subsystem. By default, no stateless session bean instances exist in WildFly at startup time. As individual beans are invoked, the EJB container initializes new SLSB instances. These instances are then kept in a pool that will be used to service future EJB method calls. The EJB remains active for the duration of the client's method call. After the method call is complete, the EJB instance is returned to the pool.
Because the EJB container unbinds stateless session beans from clients after each method call, the actual bean class instance that a client uses can be different from invocation to invocation. If all instances of an EJB class are active and the pool's maximum pool size has been reached, new clients requesting the EJB class will be blocked until an active EJB completes a method call. Depending on how you have configured your stateless pool, an acquisition timeout can be triggered if you are not able to acquire an instance from the pool within a maximum time.

You can configure your session pool either through your main configuration file or programmatically. Let's look at both approaches, starting with the main configuration file. In order to configure your pool, you can operate on two parameters: the maximum size of the pool (max-pool-size) and the instance acquisition timeout (instance-acquisition-timeout). Let's see an example:

<subsystem ...>
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/>
    </stateless>
    ...
  </session-bean>
  ...
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
  ...
</subsystem>

In this example, we have configured the SLSB pool with a strict upper limit of 25 elements. The strict maximum pool is the only available pool instance implementation; it allows a fixed number of concurrent requests to run at one time. If there are more requests running than the pool's strict maximum size, those requests will be blocked until an instance becomes available. Within the pool configuration, we have also set an instance-acquisition-timeout value of 5 minutes, which will come into play if your requests outnumber the pool size.

You can configure as many pools as you like. The pool used by the EJB container is indicated by the pool-name attribute on the bean-instance-pool-ref element. For example, here we have added one more pool configuration, large-pool, and set it as the EJB container's pool implementation:

<subsystem ...>
  <session-bean>
    <stateless>
      <bean-instance-pool-ref pool-name="large-pool"/>
    </stateless>
  </session-bean>
  <pools>
    <bean-instance-pools>
      <strict-max-pool name="large-pool" max-pool-size="100"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
      <strict-max-pool name="slsb-strict-max-pool" max-pool-size="25"
          instance-acquisition-timeout="5"
          instance-acquisition-timeout-unit="MINUTES"/>
    </bean-instance-pools>
  </pools>
</subsystem>

Using the CLI to configure the stateless pool size

We have detailed the steps necessary to configure the SLSB pool size through the main configuration file. However, the suggested best practice is to use the CLI to alter the server model.
Here's how you can add a new pool named large-pool to your EJB 3 subsystem:

/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:add(max-pool-size=100)

Now, you can set this pool as the default to be used by the EJB container, as follows:

/subsystem=ejb3:write-attribute(name=default-slsb-instance-pool, value=large-pool)

Finally, you can, at any time, change the pool size property by operating on the max-pool-size attribute, as follows:

/subsystem=ejb3/strict-max-bean-instance-pool=large-pool:write-attribute(name="max-pool-size", value=50)

Summary

In this article, we continued the analysis of the application server configuration by looking at Java's enterprise services. We first learned how to configure datasources, which can be used to add database connectivity to your applications. Installing a datasource in WildFly 8 requires two simple steps: installing the JDBC driver and adding the datasource to the server configuration. We then looked at the Enterprise JavaBeans subsystem, which allows you to configure and tune your EJB container, and covered the basic EJB component configuration of SLSBs.

Resources for Article:
Further resources on this subject:
Dart with JavaScript [article]
Creating Java EE Applications [article]
OpenShift for Java Developers [article]


Active Directory migration

Packt
28 Mar 2013
6 min read
Getting ready

The following prerequisites have to be met before we can introduce the first Windows Server 2012 Domain Controller into an existing Active Directory domain:
In order to add a Windows Server 2012 Domain Controller, the Forest Functional Level (FFL) must be at least Windows Server 2003.
ADPREP is part of the domain controller promotion process and the schema will get upgraded during this process, so the account used must have Schema Admins and Enterprise Admins privileges to install the first Windows Server 2012 Domain Controller.
If there is a firewall between the new server and the existing domain controllers, make sure all the RPC high ports are open between these servers. The domain controller installation and replication can be restricted to a static RPC port or a range of RPC ports by modifying the registry on the domain controllers.
The new Windows Server 2012 server's primary DNS IP address must be the IP address of an existing domain controller.
The new server must be able to reach the existing Active Directory domain and domain controllers by NetBIOS name and Fully Qualified Domain Name (FQDN).
If the new domain controller will be in a new site or in a new subnet, make sure to update Active Directory Sites and Services with this information.

In Windows Server 2012, domain controllers can be deployed remotely by using Server Manager. The following recipe provides step-by-step instructions on how to deploy a domain controller in an existing Active Directory environment.

How to do it...

Install and configure Windows Server 2012.
Join the new Windows Server 2012 server to the existing Active Directory domain.
Open Server Manager.
Navigate to the All Servers group in the left-hand side pane.
From the Server Name box, right-click on the appropriate server and select the Add Roles and Features option. You can also select Add Roles and Features from the Manage menu in the command bar. If the correct server is not listed here, you can manually add it from the Manage tab on the top right-hand side by selecting Add Server.
Click on Next on the Welcome window.
In the Select Installation Type window, select Role based or Feature based installation. Click on Next.
In the Select destination server window, select the Select a server from the server pool option and the correct server from the Server Pool box. Click on Next.
On the Select server roles window, select Active Directory Domain Services. You will see a pop-up window to confirm the installation of the Group Policy Management Tool. It is not required to install the administrative tools on a domain controller; however, this tool is required for Group Policy Object management and administration. Click on Next.
Click on Next in the Select features window.
Click on Next on the Active Directory Domain Services window.
In the Confirm Installation Selections window, select the Restart the destination server automatically if required option. In the pop-up window, click on Yes to confirm the restart option and click on Install. This will begin the installation process. You will see the progress on the installation window itself; this window can be closed without interrupting the installation, and you can get status updates from the notification section in the command bar.

The Post-deployment Configuration step needs to be completed after the Active Directory Domain Services role installation. This process will promote the new server to a domain controller.
From the notification window, select the Promote this server to a domain controller hyperlink.
From the Deployment Configuration window, you should be able to:
Install a new forest
Install a new child domain
Add an additional domain controller to an existing domain
Specify alternative credentials for the domain controller promotion, and so on
Since our goal is to install an additional domain controller in an existing domain, select the Add a domain controller to an existing domain option. Click on Next.
In the Domain Controller Options window, you will see the following options:
Domain Name System (DNS) server
Global Catalog (GC)
Read only Domain Controller (RODC)
Site name
Directory Services Restore Mode (DSRM) password
Select the Domain Name System (DNS) server and Global Catalog (GC) checkboxes and provide the Directory Services Restore Mode (DSRM) password. Click on Next.
Click on Next on the DNS Options window.
In the Additional Options window, you will see the Install from media and Replicate from options. Accept the default options unless you have technical reasons to modify them. Click on Next.
In the Paths window, you can specify the AD Database, Log, and SYSVOL locations. Select the appropriate locations and then click on Next. Review the Microsoft Infrastructure Planning and Design (IPD) guides for best-practice recommendations; for performance improvements, it is recommended to place the database, logs, and so on, on separate drives.
Click on Next on the Preparation Options window. During this process, the Active Directory schema and domain preparation will happen in the background.
You should be able to review the selected options on the next screen. You can export these settings and configurations to a PowerShell script by clicking on the View Script option in the bottom-right corner of the screen. This script can be used for future domain controller deployments. Click on Next to continue with the installation.
The prerequisite checking process will happen in the background and the result will appear in the Prerequisites Check window. This is a new enhancement in Windows Server 2012. Review the result and click on Install.
The progress of the domain controller promotion will be displayed on the Installation window, and a warning message will be displayed on the destination server before it restarts.
You can review the %systemroot%\debug\dcpromo.log and %SystemRoot%\debug\netsetup.log log files to get more information about DCPROMO and domain join-related issues.

Summary

Thus we learned how to perform an Active Directory migration: its prerequisites, the schema upgrade procedure, verification of the schema version, and the installation of a Windows Server 2012 Domain Controller in an existing Windows Server 2008 or Windows Server 2008 R2 domain.

Resources for Article:
Further resources on this subject:
Migrating from MS SQL Server 2008 to EnterpriseDB [Article]
Moving a Database from SQL Server 2005 to SQL Server 2008 in Three Steps [Article]
Authoring an EnterpriseDB report with Report Builder 2.0 [Article]


Mastering CentOS 7 Linux Server

Packt
30 Dec 2015
19 min read
In this article, written by Bhaskarjyoti Roy, author of the book Mastering CentOS 7 Linux Server, we will introduce some advanced user and group management scenarios, along with examples of how to handle advanced options such as password aging, managing sudoers, and so on, on a day-to-day basis. Here, we are assuming that we have already successfully installed CentOS 7 along with root and user credentials, as we do in the traditional format. Also, the command examples in this chapter assume you are logged in as, or have switched to, the root user.

The following topics will be covered:
User and group management from the GUI and the command line
Quotas
Password aging
Sudoers

Managing users and groups from the GUI and the command line

We can add a user to the system using useradd from the command line with a simple command, as follows:

useradd testuser

This creates a user entry in the /etc/passwd file and automatically creates the home directory for the user in /home. The /etc/passwd entry looks like this:

testuser:x:1001:1001::/home/testuser:/bin/bash

But, as we all know, the user is in a locked state and cannot log in to the system unless we add a password for the user using the command:

passwd testuser

This will, in turn, modify the /etc/shadow file, at the same time unlock the user, and the user will be able to log in to the system. By default, the preceding set of commands will create both a user and a group for testuser on the system. What if we want a certain set of users to be part of a common group? We will use the -g option along with the useradd command to define the group for the user, but we have to make sure that the group already exists. So, to create users such as testuser1, testuser2, and testuser3 and make them part of a common group called testgroup, we will first create the group and then create the users using the -g or -G switch:

# To create the group:
groupadd testgroup
# To create a user with the above group as a supplementary group, then set a password to unlock the user:
useradd testuser1 -G testgroup
passwd testuser1
# To create a user with the above group as its primary group:
useradd testuser2 -g 1002
passwd testuser2

Here, we have used both -g and -G. The difference between them is: with -G, we create the user with its own default group and assign it to the common testgroup as well, but with -g, we create the user with testgroup as its primary group only. In both cases, we can use either the gid or the group name obtained from the /etc/group file.

There are a couple more options that we can use for advanced user creation; for example, for system users with a uid less than 500, we have to use the -r option, which will create a user on the system with a uid less than 500. We can also use -u to define a specific uid, which must be unique and greater than 499. Common options that we can use with the useradd command are:
-c: This option is used for comments, generally to define the user's real name, such as -c "John Doe".
-d: This option is used to define home-dir; by default, the home directory is created in /home, for example -d /var/<username>.
-g: This option is used for the group name or group number of the user's default group. The group must already have been created earlier.
-G: This option is used for additional (supplementary) group names or group numbers, separated by commas, of which the user is a member. Again, these groups must also have been created earlier.
-r: This option is used to create a system account with a UID less than 500 and without a home directory.
-u: This option defines the user ID for the user, which must be unique and greater than 499.

There are a few quick options that we use with the passwd command as well. These are:
-l: This option locks the password for the user's account.
-u: This option unlocks the password for the user's account.
-e: This option expires the password for the user.
-x: This option defines the maximum days for the password lifetime.
-n: This option defines the minimum days for the password lifetime.

Quotas

In order to control the disk space used in the Linux filesystem, we must use quota, which enables us to control disk space usage and thus helps us resolve low disk space issues to a great extent. For this, we have to enable user and group quotas on the Linux system. In CentOS 7, user and group quotas are not enabled by default, so we have to enable them first. To check whether quota is enabled or not, we issue the following command:

mount | grep ' / '

The output shows that the root filesystem is mounted without quota, as indicated by the noquota flag in the mount options. Now, we have to enable quota on the root (/) filesystem, and to do that, we first edit the file /etc/default/grub and add the following to GRUB_CMDLINE_LINUX:

rootflags=usrquota,grpquota

The GRUB_CMDLINE_LINUX line should read as follows:

GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto vconsole.keymap=us rhgb quiet rootflags=usrquota,grpquota"

Since the changes have to be reflected in the boot configuration, we should first back up the GRUB configuration using the following command:

cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.original

Now, we rebuild GRUB with the changes we just made using the command:

grub2-mkconfig -o /boot/grub2/grub.cfg

Next, reboot the system. Once it's up, log in and verify that quota is enabled using the command we used before:

mount | grep ' / '

It should now show us that quota is enabled, with output like the following:

/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota)

Now, since quota is enabled, we will install the quota tools in order to operate quota for different users, groups, and so on:

yum -y install quota

Once quota is installed, we check the current quota for users using the following command:

repquota -as

The preceding command reports user quotas in a human-readable format. There are two ways we can limit quota for users and groups: one is setting soft and hard limits for the size of disk space used, and the other is limiting the user or group by the number of files they can create. In both cases, soft and hard limits are used. A soft limit only warns the user when the soft limit is reached, while the hard limit is the limit that they cannot bypass.
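The soft and hard limits described above can also be set non-interactively, which is convenient in scripts. The following is a minimal sketch using setquota, which ships with the quota package installed above; the user, the limit values, and the filesystem are illustrative, and the block limits are given in kilobytes.

# 500 MB soft / 600 MB hard block limits, no inode limits, on the root filesystem
setquota -u testuser1 512000 614400 0 0 /

# verify the new limits
repquota -as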
We will use the following command to modify a user quota:

edquota -u username

Now, we will use the following command to modify a group quota:

edquota -g groupname

If you have other partitions mounted separately, you have to modify /etc/fstab to enable quota on those filesystems by adding usrquota and grpquota after the defaults option for each specific partition; in our example, we enabled quota for the /var partition in this way. Once you have finished enabling quota, remount the filesystem and run the following commands:

# To remount /var:
mount -o remount /var
# To enable quota:
quotacheck -avugm
quotaon -avug

Quota is something all system admins use to handle the disk space consumed on a server by users or groups and to limit overuse of the space. It thus helps them manage disk space usage on the system. In this regard, it should be noted that you should plan before your installation and create partitions accordingly, so that disk space is used properly. Multiple separate partitions, such as /var and /home, are always suggested, as these are generally the partitions that consume the most space on a Linux system. So, if we keep them on separate partitions, they will not eat up the root (/) filesystem space, and this is more failsafe than using an entire filesystem mounted only as root.

Password aging

It is a good policy to have password aging so that users are forced to change their password at a certain interval. This, in turn, helps to keep the system secure as well. We can use chage to configure a password to expire the first time the user logs in to the system.

Note: This process will not work if the user logs in to the system using SSH.

This method of using chage ensures that the user is forced to change the password right away. If we use only chage <username>, it will display the current password aging values for the specified user and will allow them to be changed interactively. The following steps need to be performed to accomplish password aging:

Lock the user. If the user doesn't exist, we will use the useradd command to create the user; however, we will not assign any password to the user so that it remains locked. But, if the user already exists on the system, we will use the usermod command to lock the user:

usermod -L <username>

Force an immediate password change using the following command:

chage -d 0 <username>

Unlock the account. This can be achieved in two ways: one is to assign an initial password and the other is to assign a null password. We will take the first approach, as the second, though possible, is not a good practice in terms of security. Therefore, here is what we do to assign an initial password:

Use the python command to start the command-line Python interpreter, then run:

import crypt; print crypt.crypt("Q!W@E#R$","Bing0000/")

Here, we have used the Q!W@E#R$ password with a salt made of the alphanumeric characters Bing0000 followed by a / character. The output is the encrypted password, similar to 'BiagqBsi6gl1o'. Press Ctrl + D to exit the Python interpreter. At the shell, enter the following command with the encrypted output of the Python interpreter:

usermod -p "<encrypted-password>" <username>

So, in our case, if the username is testuser, we will use the following command:

usermod -p "BiagqBsi6gl1o" testuser

Now, upon initial login using the Q!W@E#R$ password, the user will be prompted for a new password.
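Beyond forcing an immediate change, chage can also apply an ongoing aging policy to an existing account. The following is a small sketch; the testuser account is the one created earlier, and the day values are example numbers rather than values mandated by the text.

# maximum 90 days between changes, minimum 7 days, warn 14 days before expiry
chage -M 90 -m 7 -W 14 testuser

# list the aging settings that now apply to the account
chage -l testuser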
Setting the password policy

This is a set of rules, defined in a few configuration files, which have to be followed when a system user's password is set. It's an important factor in security, because many security breaches have started with the hacking of user passwords. This is the reason why most organizations set a password policy for their users, and all usernames and passwords must comply with it. A password policy is usually defined by the following:
Password aging
Password length
Password complexity
Limit login failures
Limit prior password reuse

Configuring password aging and password length

Password aging and password length are defined in /etc/login.defs. Aging basically means the maximum number of days a password may be used, the minimum number of days allowed between password changes, and the number of warnings given before the password expires. Length refers to the number of characters required for creating the password. To configure password aging and length, we edit the /etc/login.defs file and set the different PASS values according to the policy set by the organization.

Note: The password aging controls defined here do not affect existing users; they only affect newly created users. So, we must set these policies when setting up the system or the server at the beginning.

The values we modify are:
PASS_MAX_DAYS: The maximum number of days a password can be used.
PASS_MIN_DAYS: The minimum number of days allowed between password changes.
PASS_MIN_LEN: The minimum acceptable password length.
PASS_WARN_AGE: The number of days of warning given before a password expires.

Configuring password complexity and limiting reused password usage

By editing the /etc/pam.d/system-auth file, we can configure the password complexity and the number of reused passwords to be denied. Password complexity refers to the complexity of the characters used in the password, and reused password denial refers to rejecting a desired number of passwords the user has used in the past. By setting the complexity, we force the usage of the desired number of capital characters, lowercase characters, numbers, and symbols in a password. The password will be denied by the system until and unless the complexity set by the rules is met. We do this using the following terms:
Force capital characters in passwords: ucredit=-X, where X is the number of capital characters required in the password.
Force lowercase characters in passwords: lcredit=-X, where X is the number of lowercase characters required in the password.
Force numbers in passwords: dcredit=-X, where X is the number of digits required in the password.
Force the use of symbols in passwords: ocredit=-X, where X is the number of symbols required in the password.
For example:

password requisite pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2

Deny reused passwords: remember=X, where X is the number of past passwords to be denied. For example:

password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5

Configuring login failures

We set the number of login failures allowed for a user in the /etc/pam.d/password-auth, /etc/pam.d/system-auth, and /etc/pam.d/login files. When a user's failed login attempts exceed the number defined here, the account is locked and only a system administrator can unlock it.
To configure this, make the following additions to the files; the deny=X parameter sets the number of failed login attempts allowed. Add both of these lines to the /etc/pam.d/password-auth and /etc/pam.d/system-auth files, and only the first line to the /etc/pam.d/login file:

auth        required    pam_tally2.so file=/var/log/tallylog deny=3 no_magic_root unlock_time=300
account     required    pam_tally2.so

To see the failures, use the following command:

pam_tally2 --user=<User Name>

To reset the failure attempts and enable the user to log in again, use the following command:

pam_tally2 --user=<User Name> --reset

Sudoers

Separation of user privileges is one of the main features of Linux operating systems. Normal users operate in limited-privilege sessions to limit the scope of their influence on the entire system. One special user that we already know about, root, has super-user privileges, and this account doesn't have the restrictions that are placed on normal users. Users can execute commands with super-user or root privileges in a number of different ways. There are mainly three ways to obtain root privileges on a system:
Log in to the system as root.
Log in to the system as any user and then use the su - command. This will ask you for the root password and, once authenticated, will give you a root shell session. You can exit this root shell using Ctrl + D or the exit command; once exited, you will come back to your normal user shell.
Run commands with root privileges using sudo, without spawning a root shell or logging in as root. The sudo command works as follows:

sudo <command to execute>

Unlike su, sudo will request the password of the user calling the command, not the root password. Sudo doesn't work by default and needs to be set up before it functions correctly. In the following section, we will see how to configure sudo and modify the /etc/sudoers file so that it works the way we want it to.

visudo

Sudo is configured through the /etc/sudoers file, and visudo is the command that enables us to edit that file.

Note: The /etc/sudoers file should not be edited with a normal text editor, to avoid potential race conditions in updating the file with other processes. Instead, the visudo command should be used.

The visudo command opens a text editor normally, but validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations. By default, visudo opens the /etc/sudoers file in the vi editor, but we can configure it to use the nano text editor instead. For that, we have to make sure nano is already installed, or we can install it using:

yum install nano -y

Now, we can switch the default editor to nano by editing the ~/.bashrc file:

export EDITOR=/usr/bin/nano

Then, source the file using:

. ~/.bashrc

Now, we can use visudo with nano to edit the /etc/sudoers file. So, let's open the /etc/sudoers file using visudo and learn a few things. We can define different kinds of aliases for different sets of commands, software, services, users, groups, and so on. For example:

Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

and many more.
We can use these aliases to assign a set of command execution rights to a user or a group. For example, if we want to assign the NETWORKING set of commands to the group netadmin, we define:
%netadmin ALL = NETWORKING
If, instead, we want to allow members of the wheel group to run all commands, we use the following rule:
%wheel  ALL=(ALL)  ALL
If we want a specific user, john, to get access to all commands, we use the following rule:
john  ALL=(ALL)  ALL
We can create different groups of users, with overlapping membership:
User_Alias      GROUPONE = abby, brent, carl
User_Alias      GROUPTWO = brent, doris, eric
User_Alias      GROUPTHREE = doris, felicia, grant
Alias names must start with a capital letter. We can then allow members of GROUPTWO to update the yum database and run all of the commands assigned to the SOFTWARE alias defined earlier by creating a rule like this:
GROUPTWO    ALL = SOFTWARE
If we do not specify a user/group to run as, sudo defaults to the root user. We can allow members of GROUPTHREE to shut down and reboot the machine by creating a command alias and using it in a rule for GROUPTHREE:
Cmnd_Alias      POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart
GROUPTHREE  ALL = POWER
We create a command alias called POWER that contains the commands to power off and reboot the machine, and we then allow the members of GROUPTHREE to execute those commands. We can also create Runas aliases, which replace the portion of the rule that specifies the user to execute the command as:
Runas_Alias     WEB = www-data, apache
GROUPONE    ALL = (WEB) ALL
This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user. Just keep in mind that later rules override earlier rules when there is a conflict between the two. There are a number of ways to achieve finer control over how sudo handles a command. Here are some examples:
The updatedb command associated with the mlocate package is relatively harmless. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this:
GROUPONE    ALL = NOPASSWD: /usr/bin/updatedb
NOPASSWD is a tag that means no password will be requested. It has a companion tag called PASSWD, which is the default behavior. A tag applies to the rest of the rule unless overruled by its twin tag later on the same line. For instance, we can have a line like this:
GROUPTWO    ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill
In this case, a user can run the updatedb command without a password as the root user, but entering their own password will be required for running the kill command. Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs. For example, some programs, such as less, can spawn other commands by typing this from within their interface:
!command_to_run
This executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous. To restrict this, we could use a line like this:
username    ALL = NOEXEC: /usr/bin/less
We should now have a clear understanding of what sudo is and how to grant access rights by modifying /etc/sudoers with visudo; a short consolidated example follows below. There is much more to cover: you can check the default /etc/sudoers file, which contains a good number of examples, using the visudo command, or read the sudoers manual as well. One point to remember is that root privileges should not be handed out to regular users casually.
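To tie these pieces together, here is a minimal sketch of how such rules might look once assembled in /etc/sudoers. The alias names, user names, and command paths are illustrative assumptions rather than values from a real system; always enter them through visudo so that the syntax is validated on save:
# Hypothetical aliases and rules - adjust the names and paths to your environment
User_Alias      NETADMINS = alice, bob
Cmnd_Alias      NETTOOLS  = /sbin/ifconfig, /sbin/route, /bin/ping
Runas_Alias     WEBUSERS  = apache
# Members of NETADMINS may run the network tools as root without entering a password
NETADMINS       ALL = NOPASSWD: NETTOOLS
# alice may also page through files as the apache user, but less may not spawn other commands
alice           ALL = (WEBUSERS) NOEXEC: /usr/bin/less
Because the file is saved through visudo, a mistake such as a stray comma in an alias list is reported before the broken configuration is installed.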
It is important to understand what these commands do when you execute them with root privileges. Do not take the responsibility lightly: learn the best way to use these tools for your use case, and lock down any functionality that is not needed.
Reference
Now, let's take a look at the major reference used throughout the chapter: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/index.html
Summary
In this article we learned about advanced user management and how to manage users through the command line, along with password aging, quota, the /etc/sudoers file, and how to modify it safely using visudo. User and password management is a regular task that a system administrator performs on servers, and it plays a very important role in the overall security of the system.
Resources for Article:
Further resources on this subject:
SELinux - Highly Secured Web Hosting for Python-based Web Applications [article]
A Peek Under the Hood – Facts, Types, and Providers [article]
Puppet Language and Style [article]

Squid Proxy Server 3: getting started

Packt
06 Apr 2011
12 min read
What is a proxy server? A proxy server is a computer system sitting between the client requesting a web document and the target server (another computer system) serving the document. In its simplest form, a proxy server facilitates communication between client and target server without modifying requests or replies. When we initiate a request for a resource from the target server, the proxy server hijacks our connection and represents itself as a client to the target server, requesting the resource on our behalf. If a reply is received, the proxy server returns it to us, giving a feel that we have communicated with the target server. In advanced forms, a proxy server can filter requests based on various rules and may allow communication only when requests can be validated against the available rules. The rules are generally based on an IP address of a client or target server, protocol, content type of web documents, web content type, and so on. As seen in the preceding image, clients can't make direct requests to the web servers. To facilitate communication between clients and web servers, we have connected them using a proxy server which is acting as a medium of communication for clients and web servers. Sometimes, a proxy server can modify requests or replies, or can even store the replies from the target server locally for fulfilling the same request from the same or other clients at a later stage. Storing the replies locally for use at a later time is known as caching. Caching is a popular technique used by proxy servers to save bandwidth, empowering web servers, and improving the end user's browsing experience. Proxy servers are mostly deployed to perform the following: Reduce bandwidth usage Enhance the user's browsing experience by reducing page load time which, in turn, is achieved by caching web documents Enforce network access policies Monitoring user traffic or reporting Internet usage for individual users or groups Enhance user privacy by not exposing a user's machine directly to Internet Distribute load among different web servers to reduce load on a single server Empower a poorly performing web server Filter requests or replies using an integrated virus/malware detection system Load balance network traffic across multiple Internet connections Relay traffic around within a local area network In simple terms, a proxy server is an agent between a client and target server that has a list of rules against which it validates every request or reply, and then allows or denies access accordingly. What is a reverse proxy? Reverse proxying is a technique of storing the replies or resources from a web server locally so that the subsequent requests to the same resource can be satisfied from the local copy on the proxy server, sometimes without even actually contacting the web server. The proxy server or web cache checks if the locally stored copy of the web document is still valid before serving the cached copy. The life of the locally stored web document is calculated from the additional HTTP headers received from the web server. Using HTTP headers, web servers can control whether a given document/response should be cached by a proxy server or not. Web caching is mostly used: To reduce bandwidth usage. A large number of static web documents like CSS and JavaScript files, images, videos, and so on can be cached as they don't change frequently and constitutes the major part of a response from a web server. 
By ISPs to reduce average page load time to enhance browsing experience for their customers on Dial-Up or broadband. To take a load off a very busy web server by serving static pages/documents from a proxy server's cache. How to download Squid Squid is available in several forms (compressed source archives, source code from a version control system, binary packages such as RPM, DEB, and so on) from Squid's official website, various Squid mirrors worldwide, and software repositories of almost all the popular operating systems. Squid is also shipped with many Linux/Unix distributions. There are various versions and releases of Squid available for download from Squid's official website. To get the most out of a Squid installation its best to check out the latest source code from a Version Control System (VCS) so that we get the latest features and fixes. But be warned, the latest source code from a VCS is generally leading edge and may not be stable or may not even work properly. Though code from a VCS is good for learning or testing Squid's new features, you are strongly advised not to use code from a VCS for production deployments. If we want to play safe, we should probably download the latest stable version or stable version from the older releases. Stable versions are generally tested before they are released and are supposed to work out of the box. Stable versions can directly be used in production deployments. Time for action – identifying the right version A list of available versions of Squid is maintained here. For production environments, we should use versions listed under the Stable Versions section only. If we want to test new Squid features in our environment or if we intend to provide feedback to the Squid community about the new version, then we should be using one of the Beta Versions. As we can see in the preceding screenshot, the website contains the First Production Release Date and Latest Release Date for the stable versions. If we click on any of the versions, we are directed to a page containing a list of all the releases in that particular version. Let's have a look at the page for version 3.1: For every release, along with a release date, there are links for downloading compressed source archives. Different versions of Squid may have different features. For example, all the features available in Squid version 2.7 may or may not be available in newer versions such as Squid 3.x. Some features may have been deprecated or have become redundant over time and they are generally removed. On the other hand, Squid 3.x may have several new features or existing features in an improved and revised manner. Therefore, we should always aim for the latest version, but depending on the environment, we may go for stable or beta version. Also, if we need specific features that are not available in the latest version, we may choose from the available releases in a different branch. What just happened? We had a brief look at the pages containing the different versions and releases of Squid, on Squid's official website. We also learned which versions and releases that we should download and use for different types of usage. Methods of obtaining Squid After identifying the version of Squid that we should be using for compiling and installation, let's have a look at the ways in which we can obtain Squid release 3.1.10. Using source archives Compressed source archives are the most popular way of getting Squid. 
To download the source archive, please visit Squid download page, http://www.squid-cache.org/Download/. This web page has links for downloading the different versions and releases of Squid, either from the official website or available mirrors worldwide. We can use either HTTP or FTP for getting the Squid source archive. Time for action – downloading Squid Now we are going to download Squid 3.1.10 from Squid's official website: Let's go to the web page. Now we need to click on the link to Version 3.1, as shown in the following screenshot: We'll be taken to a page displaying the various releases in version 3.1. The link with the display text tar.gz in the Download column is a link to the compressed source archive for Squid release 3.1.10, as shown in the following screenshot: To download Squid 3.1.10 using the web browser, just click on the link. Alternatively, we can use wget to download the source archive from the command line as follows: wget http://www.squid-cache.org/Versions/v3/3.1/squid-3.1.10.tar.gz What just happened? We successfully retrieved Squid version 3.1.10 from Squid's official website. The process of retrieving other stable or beta versions is very similar. Obtaining the latest source code from Bazaar VCS Advanced users may be interested in getting the very latest source code from the Squid code repository, using Bazaar. We can safely skip this section if we are not familiar with VCS in general. Bazaar is a popular version control system used to track project history and facilitate collaboration. From version 3.x onwards, Squid source code has been migrated to Bazaar. Therefore, we should ensure that we have Bazaar installed on our system in order to checkout the source code from repository. To find out more about Bazaar or for Bazaar installation and configuration manuals, please visit Bazaar's official website. Once we have setup Bazaar, we should head to the Squid code repository mirrored on Launchpad. From here we can browse all the versions and branches of Squid. Let's get ourselves familiar with the page layout: In the previous screenshot, Series: trunk represents the development branch, which contains code that is still in development and is not ready for production use. The branches with the status Mature are stable and can be used right away in production environments. Time for action – using Bazaar to obtain source code Now that we are familiar with the various branches, versions, and releases. Let's proceed to checking out the source code with Bazaar. To download code from any branch, the syntax for the command is as follows: bzr branch lp:squid[/branch[/version]] branch and version are optional parameters in the previous code. So, if we want to get branch 3.1, then the command will be as follows: bzr branch lp:squid/3.1 The previous command will fetch source code from Launchpad and may take a considerable amount of time, depending on the Internet connection. If we are willing to download source code for Squid version 3.1.10, then the command will be as follows: bzr branch lp:squid/3.1/3.1.10 In the previous code, 3.1 is the branch name and 3.1.10 is the specific version of Squid that we want to checkout. What just happened? We learned to fetch the source code for any Squid branch or release using Bazaar from Squid's source code hosted on Launchpad. Have a go hero – fetching the source code Using the command syntax that we learned in the previous section, fetch the source code for Squid version 3.0.stable25 from Launchpad. 
Solution: bzr branch lp:squid/3.0/3.0.stable25 Explanation: If we browse to the particular version on Launchpad, the version number used in the command becomes obvious. Using binary packages Squid binary packages are pre-compiled and ready to install software bundles. Binary packages are available in the software repositories of almost all Linux/Unix-based operating systems. Depending on the operating system, only stable and sometimes well tested beta versions make it to the software repositories, so they are ready for production use. Installing Squid Squid can be installed using the source code we obtained in the previous section, using a package manager which, in turn, uses the binary package available for our operating system. Let's have a detailed look at the ways in which we can install Squid. Installing Squid from source code Installing Squid from source code is a three step process: Select the features and operating system-specific settings. Compile the source code to generate the executables. Place the generated executables and other required files in their designated locations for Squid to function properly. We can perform some of the above steps using automated tools that make the compilation and installation process relatively easy. Compiling Squid Compiling Squid is a process of compiling several files containing C/C++ source code and generating executables. Compiling Squid is really easy and can be done in a few steps. For compiling Squid, we need an ANSI C/C++ compliant compiler. If we already have a GNU C/C++ Compiler (GNU Compiler Collection (GCC) and g++, which are available on almost every Linux/Unix-based operating system by default), we are ready to begin the actual compilation. Why compile? Compiling Squid is a bit of a painful task compared to installing Squid from the binary package. However, we recommend compiling Squid from the source instead of using pre-compiled binaries. Let's walk through a few advantages of compiling Squid from the source: While compiling we can enable extra features, which may not be enabled in the pre-compiled binary package. When compiling, we can also disable extra features that are not needed for a particular environment. For example, we may not need Authentication helpers or ICMP support. configure probes the system for several features and enables or disables them accordingly, while pre-compiled binary packages will have the features detected for the system the source was compiled on. Using configure, we can specify an alternate location for installing Squid. We can even install Squid without root or super user privileges, which may not be possible with pre-compiled binary package. Though compiling Squid from source has a lot of advantages over installing from the binary package, the binary package has its own advantages. For example, when we are in damage control mode or a crisis situation and we need to get the proxy server up and running really quickly, using a binary package for installation will provide a quicker installation. Uncompressing the source archive If we obtained the Squid in a compressed archive format, we must extract it before we can proceed any further. If we obtained Squid from Launchpad using Bazaar, we don't need to perform this step. tar -xvzf squid-3.1.10.tar.gz tar is a popular command which is used to extract compressed archives of various types. On the other hand, it can also be used to compress many files into a single archive. The preceding command will extract the archive to a directory named squid-3.1.10.
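With the archive extracted, the remaining build steps follow the conventional configure, make, and make install sequence covered in detail later. A minimal sketch, assuming a custom install prefix of /opt/squid and no extra feature flags (both are illustrative choices, not requirements):
cd squid-3.1.10
./configure --prefix=/opt/squid    # probe the system and set the install location
make                               # compile the sources
make install                       # place binaries and support files under /opt/squid
Running ./configure --help lists the feature switches that can be enabled or disabled, which is where the advantages of compiling from source discussed above come into play; if the chosen prefix is writable by your user, the install step does not even require root privileges.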

Finding useful information

Packt
16 Oct 2015
22 min read
In this article written by Benjamin Cane, author of the book Red Hat Enterprise Linux Troubleshooting Guide the author goes on to explain how before starting to explore troubleshooting commands, we should first cover locations of useful information. Useful information is a bit of an ubiquitous term, pretty much every file, directory, or command can provide useful information. What he really plans to cover are places where it is possible to find information for almost any issue. (For more resources related to this topic, see here.) Log files Log files are often the first place to start looking for troubleshooting information. Whenever a service or server is experiencing an issue, checking the log files for errors can often answer many questions quickly. The default location By default, RHEL and most Linux distributions keep their log files in /var/log/, which is actually part of the Filesystem Hierarchy Standard (FHS) maintained by the Linux Foundation. However, while /var/log/ might be the default location not all log files are located there(http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). While /var/log/httpd/ is the default location for Apache logs, this location can be changed with Apache's configuration files. This is especially common when Apache was installed outside of the standard RHEL package. Like Apache, most services allow for custom log locations. It is not uncommon to find custom directories or file systems outside of /var/log created specifically for log files. Common log files The following table is a short list of common log files and a description of what you can find within them. Do keep in mind that this list is specific to Red Hat Enterprise Linux 7, and while other Linux distributions might follow similar conventions, they are not guaranteed. Log file Description /var/log/messages By default, this log file contains all syslog messages (except e-mail) of INFO or higher priority. /var/log/secure This log file contains authentication related message items such as: SSH logins User creations Sudo violations and privilege escalation /var/log/cron This log file contains a history of crond executions as well as start and end times of cron.daily, cron.weekly, and other executions. /var/log/maillog This log file is the default log location of mail events. If using postfix, this is the default location for all postfix-related messages. /var/log/httpd/ This log directory is the default location for Apache logs. While this is the default location, it is not a guaranteed location for all Apache logs. /var/log/mysql.log This log file is the default log file for mysqld. Much like the httpd logs, this is default and can be changed easily. /var/log/sa/ This directory contains the results of the sa commands that run every 10 minutes by default. For many issues, one of the first log files to review is the /var/log/messages log. On RHEL systems, this log file receives all system logs of INFO priority or higher. In general, this means that any significant event sent to syslog would be captured in this log file. The following is a sample of some of the log messages that can be found in /var/log/messages: Dec 24 18:03:51 localhost systemd: Starting Network Manager Script Dispatcher Service... 
Dec 24 18:03:51 localhost dbus-daemon: dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost dbus[620]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Dec 24 18:03:51 localhost systemd: Started Network Manager Script Dispatcher Service. Dec 24 18:06:06 localhost kernel: e1000: enp0s3 NIC Link is Down Dec 24 18:06:06 localhost kernel: e1000: enp0s8 NIC Link is Down Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (deferring action for 4 seconds) Dec 24 18:06:06 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (deferring action for 4 seconds) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s3): link disconnected (calling deferred action) Dec 24 18:06:10 localhost NetworkManager[750]: <info> (enp0s8): link disconnected (calling deferred action) Dec 24 18:06:12 localhost kernel: e1000: enp0s3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost kernel: e1000: enp0s8 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s3): link connected Dec 24 18:06:12 localhost NetworkManager[750]: <info> (enp0s8): link connected Dec 24 18:06:39 localhost kernel: atkbd serio0: Spurious NAK on isa0060/serio0. Some program might be trying to access hardware directly. Dec 24 18:07:10 localhost systemd: Starting Session 53 of user root. Dec 24 18:07:10 localhost systemd: Started Session 53 of user root. Dec 24 18:07:10 localhost systemd-logind: New session 53 of user root. As we can see, there are more than a few log messages within this sample that could be useful while troubleshooting issues. Finding logs that are not in the default location Many times log files are not in /var/log/, which can be either because someone modified the log location to some place apart from the default, or simply because the service in question defaults to another location. In general, there are three ways to find log files not in /var/log/. Checking syslog configuration If you know a service is using syslog for its logging, the best place to check to find which log file its messages are being written to is the rsyslog configuration files. The rsyslog service has two locations for configuration. The first is the /etc/rsyslog.d directory. The /etc/rsyslog.d directory is an include directory for custom rsyslog configurations. The second is the /etc/rsyslog.conf configuration file. This is the main configuration file for rsyslog and contains many of the default syslog configurations. The following is a sample of the default contents of /etc/rsyslog.conf: #### RULES #### # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron By reviewing the contents of this file, it is fairly easy to identify which log files contain the information required, if not, at least, the possible location of syslog managed log files. Checking the application's configuration Not every application utilizes syslog; for those that don't, one of the easiest ways to find the application's log file is to read the application's configuration files. 
A quick and useful method for finding log file locations from configuration files is to use the grep command to search the file for the word log: $ grep log /etc/samba/smb.conf # files are rotated when they reach the size specified with "max log size". # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 The grep command is a very useful command that can be used to search files or directories for specific strings or patterns. The simplest command can be seen in the preceding snippet where the grep command is used to search the /etc/samba/smb.conf file for any instance of the pattern "log". After reviewing the output of the preceding grep command, we can see that the configured log location for samba is /var/log/samba/log.%m. It is important to note that %m, in this example, is actually replaced with a "machine name" when creating the file. This is actually a variable within the samba configuration file. These variables are unique to each application but this method for making dynamic configuration values is a common practice. Other examples The following are examples of using the grep command to search for the word "log" in the Apache and MySQL configuration files: $ grep log /etc/httpd/conf/httpd.conf # ErrorLog: The location of the error log file. # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. ErrorLog "logs/error_log" $ grep log /etc/my.cnf # log_bin log-error=/var/log/mysqld.log In both instances, this method was able to identify the configuration parameter for the service's log file. With the previous three examples, it is easy to see how effective searching through configuration files can be. Using the find command The find command, is another useful method for finding log files. The find command is used to search a directory structure for specified files. A quick way of finding log files is to simply use the find command to search for any files that end in ".log": # find /opt/appxyz/ -type f -name "*.log" /opt/appxyz/logs/daily/7-1-15/alert.log /opt/appxyz/logs/daily/7-2-15/alert.log /opt/appxyz/logs/daily/7-3-15/alert.log /opt/appxyz/logs/daily/7-4-15/alert.log /opt/appxyz/logs/daily/7-5-15/alert.log The preceding is generally considered a last resort solution, and is mostly used when the previous methods do not produce results. When executing the find command, it is considered a best practice to be very specific about which directory to search. When being executed against very large directories, the performance of the server can be degraded. Configuration files As discussed previously, configuration files for an application or service can be excellent sources of information. While configuration files won't provide you with specific errors such as log files, they can provide you with critical information (for example, enabled/disabled features, output directories, and log file locations). Default system configuration directory In general, system, and service configuration files are located within the /etc/ directory on most Linux distributions. However, this does not mean that every configuration file is located within the /etc/ directory. In fact, it is not uncommon for applications to include a configuration directory within the application's home directory. So how do you know when to look in the /etc/ versus an application directory for configuration files? 
A general rule of thumb is, if the package is part of the RHEL distribution, it is safe to assume that the configuration is within the /etc/ directory. Anything else may or may not be present in the /etc/ directory. For these situations, you simply have to look for them. Finding configuration files In most scenarios, it is possible to find system configuration files within the /etc/ directory with a simple directory listing using the ls command: $ ls -la /etc/ | grep my -rw-r--r--. 1 root root 570 Nov 17 2014 my.cnf drwxr-xr-x. 2 root root 64 Jan 9 2015 my.cnf.d The preceding code snippet uses ls to perform a directory listing and redirects that output to grep in order to search the output for the string "my". We can see from the output that there is a my.cnf configuration file and a my.cnf.d configuration directory. The MySQL processes use these for its configuration. We were able to find these by assuming that anything related to MySQL would have the string "my" in it. Using the rpm command If the configuration files were deployed as part of a RPM package, it is possible to use the rpm command to identify configuration files. To do this, simply execute the rpm command with the –q (query) flag, and the –c (configfiles) flag, followed by the name of the package: $ rpm -q -c httpd /etc/httpd/conf.d/autoindex.conf /etc/httpd/conf.d/userdir.conf /etc/httpd/conf.d/welcome.conf /etc/httpd/conf.modules.d/00-base.conf /etc/httpd/conf.modules.d/00-dav.conf /etc/httpd/conf.modules.d/00-lua.conf /etc/httpd/conf.modules.d/00-mpm.conf /etc/httpd/conf.modules.d/00-proxy.conf /etc/httpd/conf.modules.d/00-systemd.conf /etc/httpd/conf.modules.d/01-cgi.conf /etc/httpd/conf/httpd.conf /etc/httpd/conf/magic /etc/logrotate.d/httpd /etc/sysconfig/htcacheclean /etc/sysconfig/httpd The rpm command is used to manage RPM packages and is a very useful command when troubleshooting. We will cover this command further as we explore commands for troubleshooting. Using the find command Much like finding log files, to find configuration files on a system, it is possible to utilize the find command. When searching for log files, the find command was used to search for all files where the name ends in ".log". In the following example, the find command is being used to search for all files where the name begins with "http". This find command should return at least a few results, which will provide configuration files related to the HTTPD (Apache) service: # find /etc -type f -name "http*" /etc/httpd/conf/httpd.conf /etc/sysconfig/httpd /etc/logrotate.d/httpd The preceding example searches the /etc directory; however, this could also be used to search any application home directory for user configuration files. Similar to searching for log files, using the find command to search for configuration files is generally considered a last resort step and should not be the first method used. The proc filesystem An extremely useful source of information is the proc filesystem. This is a special filesystem that is maintained by the Linux kernel. The proc filesystem can be used to find useful information about running processes, as well as other system information. 
For example, if we wanted to identify the filesystems supported by a system, we could simply read the /proc/filesystems file: $ cat /proc/filesystems nodev sysfs nodev rootfs nodev bdev nodev proc nodev cgroup nodev cpuset nodev tmpfs nodev devtmpfs nodev debugfs nodev securityfs nodev sockfs nodev pipefs nodev anon_inodefs nodev configfs nodev devpts nodev ramfs nodev hugetlbfs nodev autofs nodev pstore nodev mqueue nodev selinuxfs xfs nodev rpc_pipefs nodev nfsd This filesystem is extremely useful and contains quite a bit of information about a running system. The proc filesystem will be used throughout the troubleshooting steps. It is used in various ways while troubleshooting everything from specific processes, to read-only filesystems. Troubleshooting commands This section will cover frequently used troubleshooting commands that can be used to gather information from the system or a running service. While it is not feasible to cover every possible command, the commands used do cover fundamental troubleshooting steps for Linux systems. Command-line basics The troubleshooting steps used are primarily command-line based. While it is possible to perform many of these things from a graphical desktop environment, the more advanced items are command-line specific. As such, the reader has at least a basic understanding of Linux. To be more specific, we assumes that the reader has logged into a server via SSH and is familiar with basic commands such as cd, cp, mv, rm, and ls. For those who might not have much familiarity, I wanted to quickly cover some basic command-line usage that will be required. Command flags Many readers are probably familiar with the following command: $ ls -la total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c Most should recognize that this is the ls command and it is used to perform a directory listing. What might not be familiar is what exactly the –la part of the command is or does. To understand this better, let's look at the ls command by itself: $ ls app.c application app.py bomber.py index.html lookbusy-1.4 lookbusy-1.4.tar.gz lotsofiles The previous execution of the ls command looks very different from the previous. The reason for this is because the latter is the default output for ls. The –la portion of the command is what is commonly referred to as command flags or options. The command flags allow a user to change the default behavior of the command providing it with specific options. In fact, the –la flags are two separate options, –l and –a; they can even be specified separately: $ ls -l -a total 588 drwx------. 5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c We can see from the preceding snippet that the output of ls –la is exactly the same as ls –l –a. For common commands, such as the ls command, it does not matter if the flags are grouped or separated, they will be parsed in the same way. Will show both grouped and ungrouped. If grouping or ungrouping is performed for any specific reason it will be called out; otherwise, the grouping or ungrouping used for visual appeal and memorization. In addition to grouping and ungrouping, we will also show flags in their long format. In the previous examples, we showed the flag -a, this is known as a short flag. This same option can also be provided in the long format --all: $ ls -l --all total 588 drwx------. 
5 vagrant vagrant 4096 Jul 4 21:26 . drwxr-xr-x. 3 root root 20 Jul 22 2014 .. -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c The –a and the --all flags are essentially the same option; it can simply be represented in both short and long form. One important thing to remember is that not every short flag has a long form and vice versa. Each command has its own syntax, some commands only support the short form, others only support the long form, but many support both. In most cases, the long and short flags will both be documented within the commands man page. Piping command output Another common command-line practice that will be used several times is piping output. Specifically, examples such as the following: $ ls -l --all | grep app -rw-rw-r--. 1 vagrant vagrant 153104 Jun 10 17:03 app.c -rwxrwxr-x. 1 vagrant vagrant 29390 May 18 00:47 application -rw-rw-r--. 1 vagrant vagrant 1198 Jun 10 17:03 app.py In the preceding example, the output of the ls -l --all command is piped to the grep command. By placing | or the pipe character between the two commands, the output of the first command is "piped" to the input for the second command. The example preceding the ls command will be executed; with that, the grep command will then search that output for any instance of the pattern "app". Piping output to grep will actually be used quite often, as it is a simple way to trim the output into a maintainable size. Many times the examples will also contain multiple levels of piping: $ ls -la | grep app | awk '{print $4,$9}' vagrant app.c vagrant application vagrant app.py In the preceding code the output of ls -la is piped to the input of grep; however, this time, the output of grep is also piped to the input of awk. While many commands can be piped to, not every command supports this. In general, commands that accept user input from files or command-line also accept piped input. As with the flags, a command's man page can be used to identify whether the command accepts piped input or not. Gathering general information When managing the same servers for a long time, you start to remember key information about those servers. Such as the amount of physical memory, the size and layout of their filesystems, and what processes should be running. However, when you are not familiar with the server in question it is always a good idea to gather this type of information. The commands in this section are commands that can be used to gather this type of general information. w – show who is logged on and what they are doing Early in my systems administration career, I had a mentor who used to tell me I always run w when I log into a server. This simple tip has actually been very useful over and over again in my career. The w command is simple; when executed it will output information such as system uptime, load average, and who is logged in: # w 04:07:37 up 14:26, 2 users, load average: 0.00, 0.01, 0.05 USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash root pts/0 20:47 1.00s 0.21s 0.19s -bash This information can be extremely useful when working with unfamiliar systems. The output can be useful even when you are familiar with the system. With this command, you can see: When this system was last rebooted:04:07:37 up 14:26:This information can be extremely useful; whether it is an alert for a service like Apache being down, or a user calling in because they were locked out of the system. 
When these issues are caused by an unexpected reboot, the reported issue does not often include this information. By running the w command, it is easy to see the time elapsed since the last reboot. The load average of the system:load average: 0.00, 0.01, 0.05:The load average is a very important measurement of system health. To summarize it, the load average is the average number of processes in a wait state over a period of time. The three numbers in the output of w represent different times.The numbers are ordered from left to right as 1 minute, 5 minutes, and 15 minutes. Who is logged in and what they are running: USER TTY LOGIN@ IDLE JCPU PCPU WHAT root tty1 Wed13 11:24m 0.13s 0.13s -bash The final piece of information that the w command provides is users that are currently logged in and what command they are executing. This is essentially the same output as the who command, which includes the user logged in, when they logged in, how long they have been idle, and what command their shell is running. The last item in that list is extremely important. Oftentimes, when working with big teams, it is common for more than one person to respond to an issue or ticket. By running the w command immediately after login, you will see what other users are doing, preventing you from overriding any troubleshooting or corrective steps the other person has taken. rpm – RPM package manager The rpm command is used to manage Red Hat package manager (RPM). With this command, you can install and remove RPM packages, as well as search for packages that are already installed. We saw earlier how the rpm command can be used to look for configuration files. The following are several additional ways we can use the rpm command to find critical information. Listing all packages installed Often when troubleshooting services, a critical step is identifying the version of the service and how it was installed. To list all RPM packages installed on a system, simply execute the rpm command with -q (query) and -a (all): # rpm -q -a kpatch-0.0-1.el7.noarch virt-what-1.13-5.el7.x86_64 filesystem-3.2-18.el7.x86_64 gssproxy-0.3.0-9.el7.x86_64 hicolor-icon-theme-0.12-7.el7.noarch The rpm command is a very diverse command with many flags. In the preceding example the -q and -a flags are used. The -q flag tells the rpm command that the action being taken is a query; you can think of this as being put into a "search mode". The -a or --all flag tells the rpm command to list all packages. A useful feature is to add the --last flag to the preceding command, as this causes the rpm command to list the packages by install time with the latest being first. Listing all files deployed by a package Another useful rpm function is to show all of the files deployed by a specific package: # rpm -q --filesbypkg kpatch-0.0-1.el7.noarch kpatch /usr/bin/kpatch kpatch /usr/lib/systemd/system/kpatch.service In the preceding example, we again use the -q flag to specify that we are running a query, along with the --filesbypkg flag. The --filesbypkg flag will cause the rpm command to list all of the files deployed by the specified package. This example can be very useful when trying to identify a service's configuration file location. Using package verification In this third example, we are going to use an extremely useful feature of rpm, verify. The rpm command has the ability to verify whether or not the files deployed by a specified package have been altered from their original contents. 
To do this, we will use the -V (verify) flag: # rpm -V httpd S.5....T. c /etc/httpd/conf/httpd.conf In the preceding example, we simply run the rpm command with the -V flag followed by a package name. As the -q flag is used for querying, the -V flag is for verifying. With this command, we can see that only the /etc/httpd/conf/httpd.conf file was listed; this is because rpm will only output files that have been altered. In the first column of this output, we can see which verification checks the file failed. While this column is a bit cryptic at first, the rpm man page has a useful table (as shown in the following list) explaining what each character means: S: This means that the file size differs M: This means that the mode differs (includes permissions and file type) 5: This means that the digest (formerly MD5 sum) differs D: This means indicates the device major/minor number mismatch L: This means indicates the readLink(2) path mismatch U: This means that the user ownership differs G: This means that the group ownership differs T: This means that mTime differs P: This means that caPabilities differs Using this list we can see that the httpd.conf's file size, MD5 sum, and mtime (Modify Time) are not what was deployed by httpd.rpm. This means that it is highly likely that the httpd.conf file has been modified after installation. While the rpm command might not seem like a troubleshooting command at first, the preceding examples show just how powerful of a troubleshooting tool it can be. With these examples, it is simple to identify important files and whether or not those files have been modified from the deployed version. Summary Overall we learned that log files, configuration files, and the /proc filesystem are key sources of information during troubleshooting. We also covered the basic use of many fundamental troubleshooting commands. You also might have noticed that quite a few commands are also used in day-to-day life for nontroubleshooting purposes. While these commands might not explain the issue themselves, they can help gather information about the issue, which leads to a more accurate and quick resolution. Familiarity with these fundamental commands is critical to your success during troubleshooting. Resources for Article: Further resources on this subject: Linux Shell Scripting[article] Embedded Linux and Its Elements[article] Installing Red Hat CloudForms on Red Hat OpenStack [article]

Troubleshooting WebSphere Security-related Issues

Packt
11 Aug 2011
8 min read
  IBM WebSphere Application Server v7.0 Security Secure your IBM WebSphere applications with Java EE and JAAS security standards using this book and eBook Troubleshooting general security configuration exceptions The selected cases in this subsection concerns the situations when various aspects of configuring security are carried out and, as a result, error conditions occur. Identifying problems with the Deployment Manager—node agent communication blues Several of the problems that may take place due to either wrong or incomplete security configuration are found in the communication of the administrative layers of the WebSphere environment, i.e., between the deployment manager and the node agent(s). A couple of the most common situations are shown below, along with recommendations as to how to correct the condition. Receiving the message HMGR0149E: node agent rejected The message HMGR0149E is the result of the Deployment Manager rejecting a request to connect from the node agent. This type of error and the display of this message normally takes place when security changes in the Deployment Manager were not synchronized with the node in question. An example of log file clip where this message is found can be seen in the following screenshot: One way to fix this problem is by using the syncNode.sh command. The syntax for this command is: syncNode.sh dmgr_host [dmgr_port] [-conntype <type>] [-stopservers] [-restart] [-quiet] [-nowait] [-logfile <filename>] [-replacelog] [-trace] [-username <username>] [-password <password>] [-localusername <localusername>] [-localpassword <localpassword>] [-profileName <profile>] syncNode.sh [-help] Furthermore, a very simple procedure to correct this problem is given next: Stop the affected node agent(s). Execute, on the node agent OS host, the syncNode.sh command. Monitor the SystemOut.log file for both dmgr and nodeagent processes. Start the node agent. For additional information on messages from the high availability manager, refer to the WAS ND7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/ v7r0/topic/com.ibm.websphere.messages.doc/com.ibm. ws.hamanager.nls.HAManagerMessages.html Receiving the message ADMS0005E: node agent unable to synchronize This message, ADMS0005E, is the result of the node agent attempting to synchronize configuration with the Deployment Manager. It is likely caused when changes in security-related configuration occurred and the node agent were not available. The following screenshot shows an example of this type of error. One way to solve the issue is to shut down the node agent, and then, manually execute the command syncNode.sh from the node OS host using a user ID and password that has administrative privileges on the Deployment Manager. For syntax or usage information about this command, kindly refer to the previous example. In case this action does not solve the problem, follow the next procedure: Stop the node agent(s) Using the ISC, disable global security Restart the Deployment Manager Start the node agent(s) Perform a full synchronization using the ISC Using the ISC, enable global security Synchronize changes with all nodes Stop the node agent(s) Restart the Deployment Manager to activate global security Start the node agent(s) For additional information on messages about the administrative synchronization, refer to the WAS ND7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/ v7r0/topic/com.ibm.websphere.messages.doc/com.ibm. 
ws.management.resources.sync.html Troubleshooting runtime security exceptions To close the section on troubleshooting, this subsection presents several cases of error or exception conditions that occur due to security configuration of various WAS ND7 environment components. Such components can be all within WAS or some components could be external, for example, the IHS/WebSphere Plug-in. Troubleshooting HTTPS communication between WebSphere Plug-in and Application Server When setting up the HTTPS communication between the WebSphere Plug-in and the WebSphere Application Server there may be instances in which exceptions and errors may occur during the configuration phase. Some of the most common are listed next. Receiving the message SSL0227E: SSL handshake fails The message SSL0227E is a common one when the main IHS process is attempting to retrieve the SSL certificate indicated by the property SSLServerCert located in the httpd.conf file. What this message is stating is that the intended SSL certificate cannot be found by its label from the key ring indicated by the directive KeyFile in the same configuration file. An example of this type of message is shown in the following screenshot. In order to correct this error, there are two possibilities that can be explored. On the one hand, one needs to insure that the directive KeyFile is pointing to the correct key ring file. That is, that the key ring file actually stores the target SSL certificate to be used with this IHS server. On the other hand, there may be a typographic error in the value of the property SSLServerCert. In other words, the label that is mapped to the target SSL certificate was misspelled in the httpd.conf file. In both cases, the command gsk7capicmd can be used to list the content of the key ring file. The syntax for listing the contents of a key ring file is: <IHS_ROOT_Directory>/bin/gsk7capicmd -cert -list all -db <Path_To_ kdb_File> -pw <kdb_File_Password> For additional information on messages about handshaking issues, refer to the IHS v7 Information Center link: http://publib.boulder.ibm.com/infocenter/wasinfo/v7r0/ topic/com.ibm.websphere.ihs.doc/info/ihs/ihs/rihs_ troubhandmsg.html Receiving ws_config_parser errors while loading the plugin configuration file If the configParserParse message of the ws_config_parser component is observed in the errors log file of the IBM HTTP Server; the following screenshot is an example of a possible output that may be found in the error logs. There may be a couple of reasons for this type of message to appear in the logs. One reason for this type of message is that it occurs at the time in which the IHS process is being brought down. The WebSphere Plug-in module is in its cycle to reparse the plugin-cfg.xml file while the IHS process is shutting down, therefore the ws_config_parser component does not have enough resources to perform the parsing of the configuration file and throws this message, possibly multiple times in a row. In order to ensure that this is the correct interpretation of the message, it is necessary to find an indicator, such as a 'shutting down' type of message like the one shown in the next screenshot: The other reason why this message may appear in the logs is very likely that the process owner of the IHS process does not have the correct privileges to read the plugin-cfg.xml file. 
In this case, ensure that the definition for the property User in the httpd.conf file has enough privileges to read the plug-in configuration file defined for the property WebSpherePluginConfig of the httpd.conf file. For additional information on messages about WebSphere Plug-in issues, refer to the article Error message definitions for WebSphere Application Server's webserver plugin component. Receiving the message GSK_ERROR_BAD_CERT: No suitable certificate found The message GSK_ERROR_BAD_CERT appears in log files when the WebSphere Plug-in is attempting to establish an SSL connection with the back-end WebSphere Application Server and it does not have a way to validate the SSL certificate sent by the WebSphere Application Server. An example of this type of message is shown in the next screenshot: One way to solve this problem is by adding to the IHS key ring file the signer certificate from the WebSphere Application Server. When doing this, care must be taken to correctly select the WebSphere trust store. In other words, the correct scope for your target Application Server needs to be identified so that the appropriate trust store can be accessed. For instance, if it was desired to obtain the root certificate (aka, signer certificate) used by the Chap7AppServer Application Server, one needs to identify the scope for that application server. Therefore, one should start with the following breadcrumb in the ISC (Deployment Manager console): Security | SSL certificate and key management | Manage endpoint security configurations. The following screenshot illustrates a portion of the resulting page: Once the appropriate scope is identified, continue by completing the breadcrumb: Security | SSL certificate and key management | Manage endpoint security configurations | Chap7AppServer | Key stores and certificates | NodeDefaultTrustStore | Signer certificates. The following screenshot shows a portion of a resulting page. You are now in position to extract the Application Server signer SSL certificate. Once this certificate is extracted, it needs to be imported into the IHS key ring file as a root certificate.
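The import itself can be done with the same GSKit tooling used earlier to list the key ring contents. The following is a minimal sketch, assuming the signer was extracted from the ISC in ASCII format to a file named was_signer.arm and that the key ring referenced by the KeyFile directive is keyring.kdb; the paths and the label are illustrative, and the flags should be checked against the GSKit version shipped with your IHS:
<IHS_ROOT_Directory>/bin/gsk7capicmd -cert -add -db <Path_To_kdb_File> -pw <kdb_File_Password> -label "WAS signer" -file /tmp/was_signer.arm -format ascii
<IHS_ROOT_Directory>/bin/gsk7capicmd -cert -list all -db <Path_To_kdb_File> -pw <kdb_File_Password>
The second command simply confirms that the new signer appears in the key ring; once it does, the WebSphere Plug-in can validate the certificate presented by the application server during the SSL handshake.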

The Core HTTP Module in Nginx

Packt
08 Jul 2011
8 min read
Nginx 1 Web Server Implementation Cookbook Over 100 recipes to master using the Nginx HTTP server and reverse proxy   Setting up the number of worker processes correctly Nginx like any other UNIX-based server software, works by spawning multiple processes and allows the configuration of various parameters around them as well. One of the basic configurations is the number of worker processes spawned! It is by far one of the first things that one has to configure in Nginx. How to do it... This particular configuration can be found at the top of the sample configuration file nginx.conf: user www www; worker_processes 5; error_log logs/error.log; pid logs/nginx.pid; worker_rlimit_nofile 8192; events { worker_connections 4096; } In the preceding configuration, we can see how the various process configurations work. You first set the UNIX user under which the process runs, then you can set the number of worker processes that Nginx needs to spawn, after that we have some file locations where the errors are logged and the PIDs (process IDs) are saved. How it works... By default, worker_processes is set at 2. It is a crucial setting in a high performance environment as Nginx uses it for the following reasons: It uses SMP, which allows you to efficiently use multi-cores or multi-processors systems very efficiently and have a definite performance gain. It increases the number of processes decreases latency as workers get blocked on disk I/O. It limits the number of connections per process when any of the various supported event types are used. A worker process cannot have more connections than specified by the worker_connections directive. There's more... It is recommended that you set worker_processes as the number of cores available on your server. If you know the values of worker_processes and worker_connections, one can easily calculate the maximum number of connections that Nginx can handle in the current setup. Maximum clients = worker_processes * worker_connections   Increasing the size of uploaded files Usually when you are running a site where the user uploads a lot of files, you will see that when they upload a file which is more than 1MB in size you get an Nginx error stating, "Request entity too Large" (413), as shown in the following screenshot. We will look at how Nginx can be configured to handle larger uploads. How to do it... This is controlled by one simple part of the Nginx configuration. You can simply paste this in the server part of the Nginx configuration: client_max_body_size 100M; # M stands for megabytes This preceding configuration will allow you to upload a 100 megabyte file. Anything more than that, and you will receive a 413. You can set this to any value which is less than the available disk space to Nginx, which is primarily because Nginx downloads the file to a temporary location before forwarding it to the backend application. There's more... Nginx also lets us control other factors related to people uploading files on the web application, like timeouts in case the client has a slow connection. A slow client can keep one of your application threads busy and thus potentially slow down your application. This is a problem that is experienced on all the heavy multimedia user-driven sites, where the consumer uploads all kinds of rich data such as images, documents, videos, and so on. So it is sensible to set low timeouts. 
client_body_timeout 60; # parameter in seconds
client_body_buffer_size 8k;
client_header_timeout 60; # parameter in seconds
client_header_buffer_size 1k;

Here, the first two settings help you control the timeout when the request body is not received at one read-step (basically, if the server is queried and no response comes back). Similarly, you can set the timeout for the HTTP header as well. The following table lists out the various directives and limits you can set around client uploading.

Using dynamic SSI for simple sites

With the advent of modern feature-full web servers, most of them have Server-Side Includes (SSI) built in. Nginx provides easy SSI support which can let you do pretty much all basic web stuff.

How to do it...

Let's take a simple example and start understanding what one can achieve with it.

Add the following code to the nginx.conf file:

server {
    .....
    location / {
        ssi on;
        root /var/www/www.example1.com;
    }
}

Add the following code to the index.html file:

<html>
<body>
<!--# block name="header_default" -->
the header testing
<!--# endblock -->
<!--# include file="header.html" stub="header_default" -->
<!--# echo var="name" default="no" -->
<!--# include file="footer.html" -->
</body>
</html>

Add the following code to the header.html file:

<h2>Simple header</h2>

Add the following code to the footer.html file:

<h2>Simple footer</h2>

How it works...

This is a simple example where we can see that you can include partials in the larger page and, in addition to that, create blocks within the page as well. The <block> directive allows you to create silent blocks that can be included later, while the <include> directive can be used to include HTML partials from other files, or even URL end points. The <echo> directive is used to output certain variables from within the Nginx context.

There's more...

You can utilize this feature for all kinds of interesting setups where:

You are serving different blocks of HTML for different browser types
You want to optimize and speed up certain common blocks of the sites
You want to build a simple site with template inheritance without installing any other scripting language

Adding content before and after a particular page

Today, in most of the sites that we visit, the webpage structure is formally divided into a set of boxes. Usually, all sites have a static header and a footer block. Here, in the following page, you can see the YUI builder generating the basic framework of such a page. In such a scenario, Nginx has a really useful way of adding content before and after it serves a certain page. This will potentially allow you to separate the various blocks and optimize their performance individually as well.

Let's have a look at an example page. Here we want to insert the header block before the content, and then append the footer block.

How to do it…

The sample configuration for this particular page would look like this:

server {
    listen 80;
    server_name www.example1.com;
    location / {
        add_before_body /red_block;
        add_after_body /blue_block;
        ...
    }
    location /red_block/ {
        ...
    }
    location /blue_block/ {
        ....
    }
}

This can act as a performance enhancer by allowing you to load CSS based upon the browser only. There can be cases where you want to introduce something into the header or the footer on short notice, without modifying your backend application. This provides an easy fix for those situations. This module is not installed by default and it is necessary to enable it when building Nginx:
./configure --with-http_addition_module

Enabling auto indexing of a directory

Nginx has an inbuilt auto-indexing module. Any request where the index file is not found will route to this module. This is similar to the directory listing that Apache displays.

How to do it...

Here is the example of one such Nginx directory listing. It is pretty useful when you want to share some files over your local network. To start auto indexing on any directory, all you need to do is carry out the following example and place it in the server section of the Nginx configuration file:

server {
    listen 80;
    server_name www.example1.com;
    location / {
        root /var/www/test;
        autoindex on;
    }
}

How it works...

This will simply enable auto indexing when the user types in http://www.example1.com. You can also control some other things in the listings in this way:

autoindex_exact_size off;

This will turn off the exact file size listing and will only show the estimated sizes. This can be useful when you are worried about file privacy issues.

autoindex_localtime on;

This will represent the timestamps on the files as your local server time (it is GMT by default). This image displays a sample index auto-generated by Nginx using the preceding configuration. You can see the filenames, timestamp, and the file sizes as the three data columns.
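Putting the preceding directives together, the following is a minimal sketch of a listing-only virtual host restricted to a local network; the hostname, document root, and address range are assumptions you would replace with your own values:

server {
    listen 80;
    server_name files.example1.com;   # assumed hostname for the file share
    location / {
        root /var/www/share;          # assumed directory to be listed
        autoindex on;                 # enable the directory listing
        autoindex_exact_size off;     # show human-friendly, estimated sizes
        autoindex_localtime on;       # timestamps in server-local time instead of GMT
        allow 192.168.0.0/24;         # assumed LAN range allowed to browse
        deny all;                     # everyone else receives 403
    }
}

The allow and deny directives come from Nginx's standard access module, so a casual local file share like this is not left exposed to the wider Internet.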


Copying a Database from SQL Server 2005 to SQL Server 2008 using the Copy Database Wizard

Packt
24 Oct 2009
3 min read
(For more resources on Microsoft, see here.)

Using the Copy Database Wizard you will be creating an SQL Server Integration Services package which will be executed by an SQL Server Agent job. It is therefore necessary to set up the SQL Server Agent to work with a proxy, which you need to create, that can execute the package. Since the proxy needs a credential to work outside the SQL Server 2008 boundary, you need to create a Credential and a Principal who has the permissions. Creating a credential has been described elsewhere. The main steps in migration using this route are:

Create a Credential
Create an SQL Server Agent Proxy to work with SSIS Package execution
Create the job using the Copy Database Wizard

Creating the Proxy

In the SQL Server 2008 Management Studio expand the SQL Server Agent node and then expand the Proxies node. You can create proxies for various actions that you may undertake. In the present case the Copy Database Wizard creates an Integration Services package and therefore a proxy is needed for this. Right-click the SSIS Package Execution folder as shown in the next figure. Click on New Proxy.... This opens the New Proxy Account window as shown. Here Proxy name is the one you provide, which will be needed in the Copy Database Wizard. Credential name is the one you created earlier, which uses a database login name and password. Description is optional information to keep track of the proxy. As seen in the previous figure, you can create different proxies to deal with different activities. In the present case a proxy will be created for Integration Services Package execution as shown in the next figure. The name CopyPubx has been created as shown.

Now click on the ellipsis button along the Credential name and this brings up the Select Credential window as shown. Now click on the Browse... button. This brings up the Browse for Objects window displaying the credential you created earlier. Place a checkmark as shown and click on the OK button. The [mysorian] credential is entered into the Select Credential window. Click on the OK button on the Select Credential window. The credential name gets entered into the New Proxy Account's Credential name. The optional description can be anything suitable as shown. Place a checkmark on the SQL Server Integration Services Package as shown and click on Principals. Since the present proxy is going to be used by the sysadmin, there is no need to add it specifically. Click on the OK button to close this New Proxy Account window. You can now expand the SSIS Package Execution node of the Proxies and verify that CopyPubx has been added. There are two other proxies created in the same way in this folder.

Since the SQL Server Agent is needed for this process to succeed, make sure the SQL Server Agent is running. If it has not started yet, you can start this service from the Control Panel.
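If you prefer scripting these steps rather than clicking through Management Studio, the credential and proxy can also be created with T-SQL. The following is only a sketch: the Windows account and password are placeholders, and you should confirm how the SSIS subsystem is named on your instance by querying msdb.dbo.syssubsystems before granting access.

-- Create the credential used by the proxy (identity and secret are placeholders)
CREATE CREDENTIAL [mysorian]
    WITH IDENTITY = N'DOMAIN\WindowsUser', SECRET = N'YourWindowsPassword';
GO

-- Create the proxy and tie it to the credential
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'CopyPubx',
    @credential_name = N'mysorian',
    @enabled = 1;
GO

-- Look up the subsystem names available on this instance
SELECT subsystem_id, subsystem FROM msdb.dbo.syssubsystems;

-- Grant the proxy access to the Integration Services subsystem
-- ('SSIS' is an assumption; use the name returned by the query above)
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'CopyPubx',
    @subsystem_name = N'SSIS';
GO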

Advanced User Management

Packt
30 Dec 2015
20 min read
This article, written by Bhaskarjyoti Roy, author of the book Mastering CentOS 7 Linux Server, introduces some advanced user and group management scenarios, along with examples of how to handle advanced-level options such as password aging, managing sudoers, and so on, on a day-to-day basis. Here, we are assuming that we have already successfully installed CentOS 7 along with root and user credentials as we do in the traditional format. Also, the command examples in this article assume you are logged in as, or have switched to, the root user.

(For more resources related to this topic, see here.)

The following topics will be covered:

User and group management from GUI and command line
Quotas
Password aging
Sudoers

Managing users and groups from GUI and command line

We can add a user to the system using useradd from the command line with a simple command, as follows:

useradd testuser

This creates a user entry in the /etc/passwd file and automatically creates the home directory for the user in /home. The /etc/passwd entry looks like this:

testuser:x:1001:1001::/home/testuser:/bin/bash

But, as we all know, the user is in a locked state and cannot log in to the system unless we add a password for the user using the command:

passwd testuser

This will, in turn, modify the /etc/shadow file, at the same time unlock the user, and the user will be able to log in to the system. By default, the preceding set of commands will create both a user and a group for testuser on the system. What if we want a certain set of users to be a part of a common group? We will use the -g option along with the useradd command to define the group for the user, but we have to make sure that the group already exists. So, to create users such as testuser1, testuser2, and testuser3 and make them part of a common group called testgroup, we will first create the group and then create the users using the -g or -G switch. So we will do this:

# To create the group:
groupadd testgroup

# To create the user with the above group and provide the password and unlock the user at the same time:
useradd testuser1 -G testgroup
passwd testuser1

useradd testuser2 -g 1002
passwd testuser2

Here, we have used both -g and -G. The difference between them is: with -G, we create the user with its default group and assign the user to the common testgroup as well, but with -g, we create the user as part of the testgroup only. In both cases, we can use either the gid or the group name obtained from the /etc/group file. There are a couple more options that we can use for advanced-level user creation; for example, for system users with a uid less than 500, we have to use the -r option, which will create a user on the system but with a uid less than 500. We can also use -u to define a specific uid, which must be unique and greater than 499. Common options that we can use with the useradd command are:

-c: This option is used for comments, generally to define the user's real name, such as -c "John Doe".
-d: This option is used to define home-dir; by default, the home directory is created in /home, such as -d /var/<user name>.
-g: This option is used for the group name or the group number for the user's default group. The group must already have been created earlier.
-G: This option is used for additional group names or group numbers, separated by commas, of which the user is a member. Again, these groups must also have been created earlier.
-r: This option is used to create a system account with a UID less than 500 and without a home directory.
-u: This option is the user ID for the user. It must be unique and greater than 499. There are few quick options that we use with the passwd command as well. These are: -l: This option is to lock the password for the user's account -u: This option is to unlock the password for the user's account -e: This option is to expire the password for the user -x: This option is to define the maximum days for the password lifetime -n: This option is to define the minimum days for the password lifetime Quotas In order to control the disk space used in the Linux filesystem, we must use quota, which enables us to control the disk space and thus helps us resolve low disk space issues to a great extent. For this, we have to enable user and group quota on the Linux system. In CentOS 7, the user and group quota are not enabled by default so we have to enable them first. To check whether quota is enabled, or not, we issue the following command: mount | grep ' / ' The image shows that the root filesystem is enabled without quota as mentioned by the noquota in the output. Now, we have to enable quota on the root (/) filesystem, and to do that, we have to first edit the file /etc/default/grub and add the following to the GRUB_CMDLINE_LINUX: rootflags=usrquota,grpquota The GRUB_CMDLINE_LINUX line should read as follows: GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/swap vconsole.font=latarcyrheb-sun16 rd.lvm.lv=centos/root crashkernel=auto  vconsole.keymap=us rhgb quiet rootflags=usrquota,grpquota" The /etc/default/grub should like the following screenshot: Since we have to reflect the changes we just made, we should backup the grub configuration using the following command: cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.original Now, we have to rebuild the grub with the changes we just made using the command: grub2-mkconfig -o /boot/grub2/grub.cfg Next, reboot the system. Once it's up, login and verify that the quota is enabled using the command we used before: mount | grep ' / ' It should now show us that the quota is enabled and will show us an output as follows: /dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,usrquota,grpquota) Now, since quota is enabled, we will further install quota using the following to operate quota for different users and groups, and so on: yum -y install quota Once quota is installed, we check the current quota for users using the following command: repquota -as The preceding command will report user quotas in a human readable format. From the preceding screenshot, there are two ways we can limit quota for users and groups, one is setting soft and hard limits for the size of disk space used or another is limiting the user or group by limiting the number of files they can create. In both cases, soft and hard limits are used. A soft limit is something that warns the user when the soft limit is reached and the hard limit is the limit that they cannot bypass. 
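Before looking at the interactive editor, note that limits can also be applied non-interactively with setquota, which is handy in provisioning scripts. A minimal sketch, assuming the testuser account and a quota-enabled root filesystem (the limits shown are arbitrary examples):

# Give testuser a 500 MB soft / 600 MB hard block limit and no inode limit on /
# (block limits are in 1 KB blocks; 0 means "no limit")
setquota -u testuser 512000 614400 0 0 /

# Report the result in a human-readable form
repquota -as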
We will use the following command to modify a user quota:

edquota -u username

Now, we will use the following command to modify the group quota:

edquota -g groupname

If you have other partitions mounted separately, you have to modify /etc/fstab to enable quota on the filesystem by adding usrquota and grpquota after the defaults for that specific partition, as in the following screenshot, where we have enabled the quota for the /var partition. Once you are finished enabling quota, remount the filesystem and run the following commands:

To remount /var:
mount -o remount /var

To enable quota:
quotacheck -avugm
quotaon -avug

Quota is something all system admins use to handle disk space consumed on a server by users or groups and to limit overuse of that space. It thus helps them manage the disk space usage on the system. In this regard, it should be noted that you should plan before your installation and create partitions accordingly, so that the disk space is used properly. Multiple separate partitions, such as /var and /home, are always suggested, as generally these are the partitions which consume the most space on a Linux system. So, if we keep them on separate partitions, they will not eat up the root ('/') filesystem space, and this is more failsafe than using an entire filesystem mounted only as root.

Password aging

It is a good policy to have password aging so that users are forced to change their password at a certain interval. This, in turn, helps to keep the system secure as well. We can use chage to configure the password to expire the first time the user logs in to the system.

Note: This process will not work if the user logs in to the system using SSH.

This method of using chage will ensure that the user is forced to change the password right away. If we use only chage <username>, it will display the current password aging values for the specified user and will allow them to be changed interactively. The following steps need to be performed to accomplish password aging:

Lock the user. If the user doesn't exist, we will use the useradd command to create the user. However, we will not assign any password to the user so that it remains locked. But, if the user already exists on the system, we will use the usermod command to lock the user:

usermod -L <username>

Force an immediate password change using the following command:

chage -d 0 <username>

Unlock the account. This can be achieved in two ways. One is to assign an initial password and the other is to assign a null password. We will take the first approach, as the second one, though possible, is not a good practice in terms of security. Therefore, here is what we do to assign an initial password:

Use the python command to start the command-line Python interpreter:

import crypt; print crypt.crypt("Q!W@E#R$","Bing0000/")

Here, we have used the Q!W@E#R$ password with a salt combination of the alphanumeric characters Bing0000 followed by a (/) character. The output is the encrypted password, similar to 'BiagqBsi6gl1o'.

Press Ctrl + D to exit the Python interpreter. At the shell, enter the following command with the encrypted output of the Python interpreter:

usermod -p "<encrypted-password>" <username>

So, here, in our case, if the username is testuser, we will use the following command:

usermod -p "BiagqBsi6gl1o" testuser

Now, upon initial login using the "Q!W@E#R$" password, the user will be prompted for a new password.
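Putting those steps together, a short provisioning sketch might look like the following. It assumes a new account called newhire and uses chpasswd to set the initial password instead of the Python crypt approach shown above; the password, group, and aging values are placeholders for your own policy:

# Create the account in the shared group and give it a temporary password
useradd -G testgroup newhire
echo 'newhire:Q!W@E#R$' | chpasswd   # chpasswd hashes the password for us

# Expire the password immediately so the first login forces a change
chage -d 0 newhire

# Ongoing policy: change at least every 90 days, at most once a week, warn 14 days ahead
chage -M 90 -m 7 -W 14 newhire

# Review the resulting aging settings
chage -l newhire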
Setting the password policy This is a set of rules defined in some files, which have to be followed when a system user is setting up. It's an important factor in security because one of the many security breach histories was started with hacking user passwords. This is the reason why most organizations set a password policy for their users. All usernames and passwords must comply with this. A password policy usually is defined by the following: Password aging Password length Password complexity Limit login failures Limit prior password reuse Configuring password aging and password length Password aging and password length are defined in /etc/login.defs. Aging basically means the maximum number of days a password might be used, minimum number of days allowed between password changes, and number of warnings before the password expires. Length refers to the number of characters required for creating the password. To configure password aging and length, we should edit the /etc/login.defs file and set different PASS values according to the policy set by the organization. Note: The password aging controls defined here does not affect existing users; it only affects the newly created users. So, we must set these policies when setting up the system or the server at the beginning. The values we modify are: PASS_MAX_DAYS: The maximum number of days a password can be used PASS_MIN_DAYS: The minimum number of days allowed between password changes PASS_MIN_LEN: The minimum acceptable password length PASS_WARN_AGE: The number of days warning to be given before a password expires Let's take a look at a sample configuration of the login.defs file: Configuring password complexity and limiting reused password usage By editing the /etc/pam.d/system-auth file, we can configure the password complexity and the number of reused passwords to be denied. A password complexity refers to the complexity of the characters used in the password, and the reused password deny refers to denying the desired number of passwords the user used in the past. By setting the complexity, we force the usage of the desired number of capital characters, lowercase characters, numbers, and symbols in a password. The password will be denied by the system until and unless the complexity set by the rules are met. We do this using the following terms: Force capital characters in passwords: ucredit=-X, where X is the number of capital characters required in the password Force lower case characters in passwords: lcredit=-X, where X is the number of lower case characters required in the password Force numbers in passwords: dcredit=-X, where X is the number numbers required in the password Force the use of symbols in passwords: ocredit=-X, where X is the number of symbols required in the password For example: password requisite pam_cracklib.so try_first_pass retry=3 type= ucredit=-2 lcredit=-2 dcredit=-2 ocredit=-2 Deny reused passwords: remember=X, where X is the number of past passwords to be denied For example: password sufficient pam_unix.so sha512 shadow nullok try_first_pass use_authtok remember=5 Let's now take a look at a sample configuration of /etc/pam.d/system-auth: Configuring login failures We set the number of login failures allowed by a user in the /etc/pam.d/password-auth, /etc/pam.d/system-auth, and /etc/pam.d/login files. When a user's failed login attempts are higher than the number defined here, the account is locked and only a system administrator can unlock the account. 
To configure this, make the following additions to the files. The deny=X parameter configures this, where X is the number of failed login attempts allowed. Add these two lines to the /etc/pam.d/password-auth and /etc/pam.d/system-auth files, and only the first line to the /etc/pam.d/login file:

auth        required    pam_tally2.so file=/var/log/tallylog deny=3 no_magic_root unlock_time=300
account     required    pam_tally2.so

The following screenshot is a sample /etc/pam.d/system-auth file. The following is a sample /etc/pam.d/login file.

To see failures, use the following command:

pam_tally2 --user=<User Name>

To reset the failure attempts and to enable the user to log in again, use the following command:

pam_tally2 --user=<User Name> --reset

Sudoers

Separation of user privilege is one of the main features of Linux operating systems. Normal users operate in limited-privilege sessions to limit the scope of their influence on the entire system. One special user that we already know about exists on Linux: root, which has super-user privileges. This account doesn't have any of the restrictions that apply to normal users. Users can execute commands with super-user or root privileges in a number of different ways. There are mainly three different ways to obtain root privileges on a system:

Log in to the system as root.
Log in to the system as any user and then use the su - command. This will ask you for the root password and, once authenticated, will give you a root shell session. We can disconnect this root shell using Ctrl + D or using the command exit. Once exited, we will come back to our normal user shell.
Run commands with root privileges using sudo, without spawning a root shell or logging in as root. This sudo command works as follows: sudo <command to execute>. Unlike su, sudo will request the password of the user calling the command, not the root password.

Sudo doesn't work by default and needs to be set up before it functions correctly. In the following section, we will see how to configure sudo and modify the /etc/sudoers file so that it works the way we want it to.

visudo

Sudo is modified or implemented using the /etc/sudoers file, and visudo is the command that enables us to edit the file.

Note: This file should not be edited using a normal text editor, to avoid potential race conditions in updating the file with other processes. Instead, the visudo command should be used.

The visudo command opens a text editor normally, but then validates the syntax of the file upon saving. This prevents configuration errors from blocking sudo operations. By default, visudo opens the /etc/sudoers file in the Vi editor, but we can configure it to use the nano text editor instead. For that, we have to make sure nano is already installed, or we can install nano using:

yum install nano -y

Now, we can change it to use nano by editing the ~/.bashrc file:

export EDITOR=/usr/bin/nano

Then, source the file using:

. ~/.bashrc

Now, we can use visudo with nano to edit the /etc/sudoers file. So, let's open the /etc/sudoers file using visudo and learn a few things. We can use different kinds of aliases for different sets of commands, software, services, users, groups, and so on. For example:

Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

and many more ...
We can use these aliases to assign a set of command execution rights to a user or a group. For example, if we want to assign the NETWORKING set of commands to the group netadmin we will define: %netadmin ALL = NETWORKING Otherwise, if we want to allow the wheel group users to run all the commands, we use the following command: %wheel  ALL=(ALL)  ALL If we want a specific user, john, to get access to all commands we use the following command: john  ALL=(ALL)  ALL We can create different groups of users, with overlapping membership: User_Alias      GROUPONE = abby, brent, carl User_Alias      GROUPTWO = brent, doris, eric, User_Alias      GROUPTHREE = doris, felicia, grant Group names must start with a capital letter. We can then allow members of GROUPTWO to update the yum database and all the commands assigned to the preceding software by creating a rule like this: GROUPTWO    ALL = SOFTWARE If we do not specify a user/group to run, sudo defaults to the root user. We can allow members of GROUPTHREE to shutdown and reboot the machine by creating a command alias and using that in a rule for GROUPTHREE: Cmnd_Alias      POWER = /sbin/shutdown, /sbin/halt, /sbin/reboot, /sbin/restart GROUPTHREE  ALL = POWER We create a command alias called POWER that contains commands to power off and reboot the machine. We then allow the members of GROUPTHREE to execute these commands. We can also create Run as aliases, which can replace the portion of the rule that specifies to the user to execute the command as: Runas_Alias     WEB = www-data, apache GROUPONE    ALL = (WEB) ALL This will allow anyone who is a member of GROUPONE to execute commands as the www-data user or the apache user. Just keep in mind that later, rules will override previous rules when there is a conflict between the two. There are a number of ways that you can achieve more control over how sudo handles a command. Here are some examples: The updatedb command associated with the mlocate package is relatively harmless. If we want to allow users to execute it with root privileges without having to type a password, we can make a rule like this: GROUPONE    ALL = NOPASSWD: /usr/bin/updatedb NOPASSWD is a tag that means no password will be requested. It has a companion command called PASSWD, which is the default behavior. A tag is relevant for the rest of the rule unless overruled by its twin tag later down the line. For instance, we can have a line like this: GROUPTWO    ALL = NOPASSWD: /usr/bin/updatedb, PASSWD: /bin/kill In this case, a user can run the updatedb command without a password as the root user, but entering the root password will be required for running the kill command. Another helpful tag is NOEXEC, which can be used to prevent some dangerous behavior in certain programs. For example, some programs, such as less, can spawn other commands by typing this from within their interface: !command_to_run This basically executes any command the user gives it with the same permissions that less is running under, which can be quite dangerous. To restrict this, we could use a line like this: username    ALL = NOEXEC: /usr/bin/less We should now have clear understanding of what sudo is and how do we modify and provide access rights using visudo. There are many more things left here. You can check the default /etc/sudoers file, which has a good number of examples, using the visudo command, or you can read the sudoers manual as well. One point to remember is that root privileges are not given to regular users often. 
It is important for us to understand what these commands do when you execute them with root privileges. Do not take the responsibility lightly. Learn the best way to use these tools for your use case, and lock down any functionality that is not needed.

Reference

Now, let's take a look at the major reference used throughout this article: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/index.html

Summary

In all, we learned about some advanced user management and how to manage users through the command line, along with password aging, quotas, exposure to /etc/sudoers, and how to modify it using visudo. User and password management is a regular task that a system administrator performs on servers, and it has a very important role in the overall security of the system.

Resources for Article:

Further resources on this subject:

SELinux - Highly Secured Web Hosting for Python-based Web Applications [article]
A Peek Under the Hood – Facts, Types, and Providers [article]
Puppet Language and Style [article]


VB.NET Application with SQL Anywhere 10 database: Part 1

Packt
24 Oct 2009
4 min read
SQL Anywhere 10 SQL Anywhere 10 is the latest version of Sybase's feature rich SQL Anywhere database technology. It is highly scalable from the small foot-print UltraLite database all the way to its enterprise server with gigabytes of data. It is a comprehensive database package with built-in support for a wide range of applications, including session based synchronization; data exchange with both relational and non-relational data bases; secure store and forward messaging; messaging with FTP and email; and asynchronous access to mobile web services. You may download an evaluation version of the software and take it for a test drive. Sybase Central is a graphical database management interface to the database and its various supporting applications. The integration features are used in this article to create a Windows application retrieving data from the SQL Anywhere 10’s demonstration database, a database which is a part of the default installation of the developer edition. Overview of SQL Anywhere 10 From Sybase Central you can connect to the demo database quite easily by clicking on the Connections menu item and choosing Connect with SQL Anywhere 10. Figure 1 shows the SQL Anywhere management interface, Sybase Central. Using this interface you may also create an ODBC DSN by following the trail; Tools --> SQL Anywhere 10 --> open ODBC Administrator. Figure 1   It is very easy to connect to the database using the ODBC driver which is provided with the default installation of this product. The Figure 2 shows the User DSN installed with the default installation in the ODBC Data Source Administrator window. Figure 2 The Username is DBA and the Password is sql (case sensitive) for the demo database, demo.db. Please refer to the article, "Migrating from Oracle 10G XE to SQL Anywhere 10" which describes connecting to the demo database in detail. Figure 3 shows the demo database and its objects. Figure 3 VB.NET Windows Application We will create an ASP.NET 2.0 Windows application called SqlAny. We will create forms which display retrieved data from a table on the database as well as from a stored procedure after accepting a parameter passed to the stored procedure interactively. The Figure 4 shows the details of the project in the Solution Explorer as well as the Object Browser. Figure 4 Accessing SQL Anywhere Explorer SQL Anywhere Explorer is a component of SQL Anywhere that lets you connect to SQL Anywhere and UltraLite  databases from Visual Studio .NET. From the View menu of Visual Studio, you can access the SQL Anywhere Explorer as shown in Figure 5 - SQL Anywhere 10 is integrated with Visual Studio (both 1.1 and 2.0 versions). Figure 5   Alternatively, you can access SQL Anywhere Explorer from the Tools menu item as shown in Figure 6. In this case the Sybase Central management interface opens in a separate window. Interactive SQL is another of SQL Anywhere 10's tools for working with SQL queries on this database. Figure 6   When you click on SQL Anywhere Explorer from the View menu, you will be lead to the following window shown in Figure 7 which allows you to establish a data connection. Figure 7 Click on the drop-down, Add Connection, which opens the window shown in Figure 8 where you will be given a choice of two connections that you may connect to, SQL Anywhere or UltraLite. These are both databases. Both can run on mobile devices, but UltraLite has a smaller footprint. 
Figure 8 By choosing to connect to SQL Anywhere you invoke the authentication window for making the connection, as shown in Figure 9. The Username is DBA and the Password is sql. After entering these values you can get to the ODBC DSN mentioned earlier, from the drop-down. You may also test the connectivity which you see as being a success, for the entered values of Username, Password, and ODBC DSN. Figure 9   Visual Studio makes a data connection as shown in Figure 10. The nodes for Tables, Views, and Procedures are all expanded in this figure showing all the objects that can be accessed on this database. Since we logged in as DBA, all permissions are in place. Figure 10 Before the connection is made, SQL Anywhere starts up as shown in Figure 11. This message console gets minimized and stays up in the system tray of the desktop. This can be restored and closed by activating the icon in the tray.   Figure 11
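Ahead of wiring up data-bound controls, it can help to confirm connectivity from code. The following VB.NET sketch opens the demo database over ODBC and reads a few rows; the DSN name shown ("SQL Anywhere 10 Demo") and the Products table are assumptions based on a default developer-edition install, so substitute the DSN listed in your ODBC Data Source Administrator if yours differs:

Imports System.Data.Odbc

Module ConnectivityCheck
    Sub Main()
        ' DSN, user, and password assumed from the default demo.db setup
        Using cn As New OdbcConnection("DSN=SQL Anywhere 10 Demo;UID=DBA;PWD=sql")
            cn.Open()
            Using cmd As New OdbcCommand("SELECT ID, Name FROM Products", cn)
                Using rdr As OdbcDataReader = cmd.ExecuteReader()
                    While rdr.Read()
                        Console.WriteLine("{0}: {1}", rdr("ID"), rdr("Name"))
                    End While
                End Using
            End Using
        End Using
    End Sub
End Module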


Working with SBS Services as a User: Part 2

Packt
26 Oct 2009
10 min read
Managing files One service that SBS 2008 provides for users is a secure place to store files. Both web sites and file shares are provided by default to assist with this. Enabling collaboration on documents, where multiple people will want to read or update a file is best delivered using the CompanyWeb site. The CompanyWeb site is the internal web site and it is built on Windows SharePoint Services technologies. In this section, I will explore: File management aspects of CompanyWeb Searching across the network for information User file recovery Internal Web Site Access SBS 2008 provides an intranet for sharing information. This site is called the CompanyWeb and can be accessed internally by visiting http://companyweb. To access it remotely, click on the Internal Web Site button that will open up the URL https://remote.yourdomain.co.uk:987. It is important that you note the full URL with :987 on the end, otherwise you will not see your CompanyWeb. CompanyWeb, in its simplest form, is a little like a file share, but has considerably more functionality such as the ability to store more than just files, be accessible over the Internet and your local network, host applications, and much more. For file management, it enables flow control such as document check-in and check-out for locking of updates and an approval process for those updates. It can also inform users when changes have taken place, so that they do not need to check on the web site as it will tell them. Finally, it can enable multiple people to work on a document and it will arbitrate the updates so the owner can see all the comments and changes. While we are looking at CompanyWeb from a file management perspective, it is worth pointing out that any Windows SharePoint Services site also has the capability to run surveys, provide groups, web-based calendars, run web-based applications that are built on top of the SharePoint services, host blog and wiki pages, and perform as your fax center. In looking at file management, I will briefly explain how to: Upload a document via the web interface Add a document via email attachment Edit a document stored in CompanyWeb Check Out/In a document Recover a deleted document Uploading documents Navigate to http://CompanyWeb in your browser and then to the Shared Documents section. You can create other document libraries by clicking on Site Actions in the righthand corner of the screen and then selecting Create. From here, you can upload documents in three different ways. You can upload single or multiple documents from the Upload menu. If you chose this option, you will be prompted to Browse for a single file and then click on OK to upload the file. If you chose Upload Multiple Documents from the menu or the Upload Document screen, you will be presented with the multiple upload tool. Navigate to the folder with the files you wish to upload, check the items, and click OK to start the upload. The final mechanism to load documents is to choose to Open with Windows Explorer from the Actions menu. This will open an Explorer window that you can then copy and paste into as if you had two local folders open on your computer. Uploading using email I know this might sound a little strange, but the process of emailing documents backwards and forwards between people, for ideas and changes, can make "keeping up to date" very confusing for everyone. 
Using CompanyWeb in this way enables each user to update their copy of the document and then merge them all together so the differences can be accepted or rejected by the owner. To upload a document via email, create a new email in Outlook and attach a document as per normal. Then, go to the Insert tab and click on the small arrow on the bottom right of the Include section. In the task pane that opens on the righthand side, change the Attachment Options to Shared attachments and type http://CompanyWeb into the box labeled Create Document Workspace at:. This will create the additional text in the mail and include a link to the site that was created under CompanyWeb. This site is secured so that only the people on the To line and the person who sent it have access. Send the email, and the attachment will be loaded to the special site. Each user can open the attachment as per normal, save it to their hard disk, and edit the document. The user can make as many changes as they like and finally, save the updates to the CompanyWeb site. If their changes are to an earlier version, they will be asked to either overwrite or merge the changes. The following sample shows the writing from Molly and Lizzy in two different colors so that the document owner can read and consider all the changes and then accept all or some of them.   Opening documents and Checking Out and In Once you have documents stored on the CompanyWeb site, you can open them by simply clicking on the links. You will be prompted if you want to open a Read Only copy or Edit the document. Click OK once you have selected the right option. This simple mechanism is fine where there is no control, but you might want to ensure that no one else can modify the document while you are doing so. In the previous section, I showed the conflict resolution process, but this can be avoided by individuals checking documents in and out. When a document is checked out, you can only view the document unless you are the person who checked it out, in which case you can edit it. To check a document out, hover over the document and click on the downward arrow that appears on the right of the filename. A menu will appear and you can select Check Out from that menu. You can then edit the document while others cannot. Once you are finished, you need to check the document back in. This can be done from Word or back on the web site on the same drop-down menu where you checked it out. Recovering a deleted document in CompanyWeb If you delete a document in CompanyWeb, there is a recycle bin to recover documents from. On almost all lefthand navigation panes is the Recycle Bin link. Click this and you will be asked to select the documents to recover and then click on Restore Selection. Searching for information You can search for any file, email, calendar appointment, or document stored on your hard disk with SBS 2008 and Windows Vista or Windows XP and Windows Search. Just as with the email search facility, you can also search for any file, or the contents of any file on both the CompanyWeb site and on your computer. To search on CompanyWeb, type the key words that you are interested in into the search box in the top right corner and then click on the magnifying glass. This will then display you a varied set of results as you can see in the following example. If you are using Vista, you can type a search into the Start menu or select Search from the Start menu and again type the key words you are looking for in the top right corner. 
The Windows search will search your files, emails, calendar and contacts, and browser history to find a list of matches for you. You can get the latest version of Desktop Search for Windows Vista and Windows XP by following http://davidoverton.com/r.ashx?1K. User file recovery We have already covered how you recover deleted emails and documents in CompanyWeb, but users need something a little more sophisticated with file recovery on their desktop. Generally, when an administrator is asked to recover a file for a user, it is either because they have just deleted it and it is not in the recycle bin or they still have the file, but it has become corrupt or they wish to undo changes made over the last day or two. When you turn on folder redirection or when you are using Windows Vista, users get the ability to roll back time to a version of the file or folder that was copied over the previous few days. This means that not only can we undelete files from the recycle bin, but we can revert back to an earlier copy of a file that has not been deleted from 3-7 days previous without needing to access the backups. If the file has been deleted, we can look into the folder from an earlier time snap-shot as opposed to just the still existing files. To access this facility, right-click on the folder for which you want to get an earlier version and select Properties. Now, move to the Previous Versions tab. You can now Open the folder to view, as is shown on the right below, Copy the folder to a new location, or Revert the folder to the selected version, overwriting the current files. Remote access Now that the client computers are configured to work with SBS 2008, you need to check that the remote access tools are working. These are: Remote Web Workplace Outlook Web Access Internal Web Site Access Connecting to a PC on the SBS 2008 LAN Connecting via a Virtual Private Network (VPN) Remote Web Workplace, remote email, and intranet access The Remote Web Workplace is the primary location to use to access computers and services inside your SBS 2008 network when you are not yourself connected to it. To access the site, open your browser and go to https://remote.yourdomain.co.uk/remote. If you forget the /remote from the URL, you will get a 403 – Forbidden: Access is denied error. You will be presented with a sign-in screen where you enter your user name and password. Once you are through the login screen, you will see options for the provided three sections and a number of links. Customizing Remote Web Workplace You can customize the information that is present on the Welcome screen of the Remote Web Workplace, including the links shown, the background bitmaps, and company icons. Two of the links shown on the Welcome Page have a URL that starts with https://sites, which will not work from the Internet, so these will need to be changed. To do this, go to the Shares Folders and Web Sites tab and select Web Sites. Click on the View site properties button in the righthand task pane and navigate to the Home page links section. From here, you can choose what is displayed on the front page, removing options if desired. To alter the URLs of the links, click on the Manage links… button.

Nginx Web Services: Configuration and Implementation

Packt
08 Jul 2011
6 min read
Nginx 1 Web Server Implementation Cookbook

Installing new modules and compiling Nginx

Today, most software is designed to be modular and extensible. Nginx, with its great community, has an amazing set of modules out there that let it do some pretty interesting things. Although most operating system distributions have Nginx binaries in their repositories, it is a necessary skill to be able to compile new, bleeding-edge modules and try them out. Now we will outline how one can go about compiling and installing Nginx with its numerous third-party modules.

How to do it...

1. The first step is to get the latest Nginx distribution, so that you are in sync with the security and performance patches (http://sysoev.ru/nginx/nginx-0.7.67.tar.gz). Do note that you will require sudo or root access to do some of the installation steps going ahead.
2. Un-tar the Nginx source code. This is simple; you will need to enter the following command: tar -xvzf nginx-0.7.67.tar.gz
3. Go into the directory and configure it. This is essential, as here you can enable and disable the core modules that already come with Nginx. Following is a sample configure command: ./configure --with-debug --with-http_ssl_module --with-http_realip_module --with-http_perl_module --with-http_stub_status_module
4. You can find out more about the other modules and configuration flags using: ./configure --help If you get an error, then you will need to install the build dependencies, depending on your system. For example, if you are running a Debian-based system, you can enter the following command: apt-get build-dep nginx This will install all the required build dependencies, like the PCRE and TLS libraries.
5. After this, you can simply go ahead and build it: sudo make install
6. This was the plain vanilla installation! If you want to install some new modules, we take the example of the HTTP subscribe-publish module. Download your module (http://pushmodule.slact.net/downloads/nginx_http_push_module-0.692.tar.gz).
7. Un-tar it at a certain location: /path/to/module.
8. Reconfigure the Nginx installation: ./configure ..... --add-module=/path/to/module
9. The important part is to point the --add-module flag to the right module path. The rest is handled by the Nginx configuration script.
10. You can continue to build and install Nginx as shown in step 5: sudo make install

If you have followed steps 1 to 10, it will be really easy for you to install any Nginx module.

There's more...

If you want to check that the module is installed correctly, you can enter the following command:

nginx -V

A sample output is something as shown in the following screenshot. This basically gives you the compilation flags that were used to install this particular binary of Nginx, indirectly listing the various modules that were compiled into it.

Running Nginx in debug mode

Nginx is a fairly stable piece of software which has been running in production for over a decade and has built a very strong developer community around it. But, like all software, there are issues and bugs which crop up under the most critical of situations. When that happens, it's usually best to reload Nginx with higher levels of error logging and, if possible, in the debug mode.

How to do it...

If you want the debug mode, then you will need to compile Nginx with the debug flag (--with-debug). In most cases, most of the distributions have packages where Nginx is precompiled with the debug flag.
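If you are building from source yourself, a quick way to confirm that the binary you ended up with actually has debugging support is to check its compile flags; a small sketch (the messages are just illustrative):

# nginx -V prints its configure arguments on stderr
nginx -V 2>&1 | grep -- '--with-debug' \
    && echo "debug support compiled in" \
    || echo "rebuild with ./configure --with-debug"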
Here are the various levels of debugging that you can utilize:

error_log LOGFILE [debug | info | notice | warn | error | crit | debug_core | debug_alloc | debug_mutex | debug_event | debug_http | debug_imap];

If you do not set the error log location, it will log to a compiled-in default log location. This logging is in addition to the normal error logging that you can do per site. Here is what the various specific debug flags do:

There's more...

Nginx allows us to log errors for specific IP addresses. Here is a sample configuration that will log errors from 192.168.1.1 and the IP range of 192.168.10.0/24:

error_log logs/error.log;
events {
    debug_connection 192.168.1.1;
    debug_connection 192.168.10.0/24;
}

This is extremely useful when you want to debug in the production environment, as logging for all cases has unnecessary performance overheads. This feature allows you to not set a global debug on the error_log, while still being able to see the debug output for specific matched IP blocks based on the user's IP address.

Easy reloading of Nginx using the CLI

Depending on the system that you have, it will offer one clean way of reloading your Nginx setup:

Debian based: /etc/init.d/nginx reload
Fedora based: service nginx reload
FreeBSD/BSD: service nginx reload
Windows: nginx -s reload

All the preceding commands reload Nginx; they send a HUP signal to the main Nginx process. You can send quite a few control signals to the Nginx master process, as outlined in the following table. These let you manage some of the basic administrative tasks:

How to do it...

Let me run you through the simple steps of how you can reload Nginx from the command line.

Open a terminal on your system. Most UNIX-based systems already have fairly powerful terminals, while you can use PuTTY on Windows systems.

Type in ps auxww | grep nginx. This will output something as shown in the following screenshot. If nothing comes up, then it means that Nginx is not running on your system.

If you get the preceding output, then you can see the master process and the two worker processes (there may be more, depending on your worker_processes configuration). The important number is 3322, which is basically the PID of the master process.

To reload Nginx, you can issue the command kill -HUP <PID of the nginx master process>. In this case, the PID of the master process is 3322. This will basically read the configurations again, gracefully close your current connections, and start new worker processes.

You can issue another ps auxww | grep nginx to see the new PIDs of the worker processes (4582, 4583). If the worker PIDs do not change, it means that you may have had a problem while reloading the configuration files. Go ahead and check the Nginx error log.

This is very useful while writing scripts which control the Nginx configuration. A good example is when you are deploying code on production; you will temporarily point the site to a static landing page.
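As a small illustration of such scripting, the following sketch validates the configuration before signalling the master process, so a typo never takes the site down; the PID file path is an assumption taken from the earlier sample nginx.conf and default install prefix:

#!/bin/sh
# Test the configuration first; nginx -t exits non-zero on errors
if nginx -t; then
    # Ask the master process to re-read its configuration gracefully
    kill -HUP "$(cat /usr/local/nginx/logs/nginx.pid)"
else
    echo "configuration test failed, not reloading" >&2
    exit 1
fi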


Securing data at the cell level (Intermediate)

Packt
01 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

The following prerequisites are essential to continue with the recipe:

SQL Server 2012 Management Studio (SSMS).
The AdventureWorks2012 database. We can obtain the necessary database files and database product samples from the SQL Server Database Product Samples landing page (http://msftdbprodsamples.codeplex.com/releases/view/55330). These sample databases cannot be installed on any version of SQL Server other than SQL Server 2012 RTM or higher. Ensure you install the databases to your specified 2012 version instance.

For this article I have created a new OLAP database using the AdventureWorksDM.xmla file. Also, ensure that the user who is granting permissions is a member of the Analysis Services server role or a member of an Analysis Services database role that has Administrator permissions.

How to do it...

The following steps are continued from the previous recipe, but I believe it is necessary to reiterate them from the beginning. Hence, this recipe's steps are listed as follows:

Start SQL Server Management Studio and connect to the SQL Server 2012 Analysis Services instance. Expand the Databases folder.
Choose the AdventureWorksDM database (created within the Getting ready section as previously mentioned) and expand the Roles folder. If you are reading this recipe directly without the previous recipes, you can create the necessary roles as per the Creating security roles (Intermediate) recipe.
Right-click on the role (here I have selected the DBIA_Processor role) to choose Role Properties.
Click on Cell Data on the Select a page option to present a relevant permissions list. In some cases, if you observe that there is no option available in the Cube drop-down list in the Cell Data option, ensure you check that the relevant cube is set with the appropriate Access and Local Cube/Drillthrough options by choosing the Cubes option on the left-hand side of Select a page. Refer to the following screenshot:

Now let us continue with the Cell Data options:

Click on Cell Data in the Select a page option to present a relevant permissions list.
Select the appropriate cube from the drop-down list; here I have selected the Adventure Works DW2012 cube.
Choose the Enable read permissions option and then click on the Edit MDX button. You will be presented with the MDX Builder screen. Then, choose the presented Metadata measure value to grant this permission.
Similarly, for the Enable read-contingent permissions option, follow the previous step.
Finally, click on the Enable read/write permissions option.
As a final check, we can click on either the Check button or the OK button, which will check whether valid syntax is parsed from the MDX expressions previously mentioned. If there are any syntax errors, you can fix them by choosing the relevant Edit MDX button to correct them.

This completes the steps to secure the data at the cell level using a defined role in the Analysis Services database.
So, the appropriate database role has the required permissions to the derived cell but not to the cells from which the derived cell obtain its values. Irrespective of the database role, whether the members have read or write permissions on some or all the cells within a cube, the members of the database role have no permissions to view any cube data. Once the denied permissions on certain dimensions are effective, the cell level security cannot expand the rights of the database role members to include cell members from that dimension. The blank expression within the relevant box will have no effect in spite of clicking on Enable read/write permissions. Summary Many databases insufficiently implement security through row- and column-level restrictions. Column-level security is only sufficient when the data schema is static, well known, and aligned with security concerns. Row-level security breaks down when a single record conveys multiple levels of information. The ability to control access at the cell level based on security labels, intrinsically within the relational engine, is an unprecedented capability. It has the potential to markedly improve the management of sensitive information in many sectors, and to enhance the ability to leverage data quickly and flexibly for operational needs. This article showed us just how to secure the data at the cell level. Resources for Article: Further resources on this subject: Getting Started with Microsoft SQL Server 2008 R2 [Article] Microsoft SQL Server 2008 High Availability: Installing Database Mirroring [Article] SQL Server and PowerShell Basic Tasks [Article]