
How-To Tutorials - Servers

95 Articles

Microsoft SQL Server 2008 R2: Managing the Core Database Engine

Packt
29 Jun 2011
9 min read
Microsoft SQL Server 2008 R2 Administration Cookbook: over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook.

Implementing Central Management feature enhancements

Central Management Server (CMS) is a feature that enables the DBA to administer multiple servers by designating a set of server groups, which maintain the connection information for one or more instances of SQL Server. In this recipe, we will go through the implementation strategy for the CMS feature in your SQL Server data platform. Before we continue, it is essential to understand the core functionality of this feature, which draws on several features of the SQL Server core database engine:

- A SQL Server system repository
- Management of the SQL Server farm from one location
- Easy reporting on system state, configuration, and performance

Getting ready

As the feature characteristics described previously suggest, there are three important components in the system: storage, execution, and reporting. Storage holds the system repository, execution manages the SQL Server instances, and reporting presents the statistical information.

The CMS feature implementation is a dual process (setup and configuration) that makes use of SQL Server components. The prerequisite is to install SQL Server 2008 R2 and Reporting Services. To support the extensive features of the core database engine, it is recommended that the CMS, as a single point of resource for managing multiple instances of SQL Server, run on Enterprise Edition; for cost purposes, however, you can still make use of the Standard Edition. CMS was introduced in SQL Server 2008, so earlier versions of SQL Server cannot be designated as a Central Management Server. The main characteristic of CMS is the ability to execute T-SQL and Policy-Based Management policies at the same time against the designated server groups.

How to do it...

To implement the Central Management Server feature in an existing SQL Server data platform, complete the following steps. SQL Server Management Studio is the main tool used to designate the CMS feature.

1. Open SQL Server Management Studio. On the View menu, click Registered Servers.
2. In Registered Servers, expand Database Engine, right-click Central Management Servers, point to New, and then click Central Management Servers.
3. In the New Server Registration dialog box, register the instance of SQL Server that will be the central management server.
4. In Registered Servers, right-click the central management server, point to New, and then click New Server Group. Type a group name and description, and then click OK.
5. In Registered Servers, right-click the central management server group, and then click New Server Registration.
6. In the New Server Registration dialog box, register one or more instances of SQL Server that will be members of this server group, as shown in the following screenshot.

After you have registered a server, the Central Management Server will be able to execute queries against all servers in the group at the same time. T-SQL statements and Policy-Based Management policies can be executed simultaneously against the server groups; CMS maintains the connection information for one or more instances of SQL Server. It is essential to consider the effective permissions on the servers in the server groups, which might vary; hence, it is best to use Windows Authentication in this case.
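For administrators who prefer scripting the steps above, the same groups and registrations can also be created programmatically. The following is a minimal, hypothetical sketch using the msdb registration stored procedures; the procedure and parameter names follow SQL Server Books Online for 2008 R2, and the group and instance names are illustrative, so verify everything against your own CMS instance before use.

```sql
-- Hypothetical sketch: create a CMS server group and register a member
-- server through msdb, mirroring the Registered Servers UI steps above.
DECLARE @parent_id INT, @group_id INT, @server_id INT;

-- Locate the root Database Engine group maintained by the CMS.
SELECT @parent_id = server_group_id
FROM msdb.dbo.sysmanagement_shared_server_groups
WHERE parent_id IS NULL AND server_type = 0;   -- 0 = Database Engine

-- Add a 'Production' group under the root.
EXEC msdb.dbo.sp_sysmanagement_add_shared_server_group
     @name = N'Production',
     @description = N'Production instances managed through CMS',
     @parent_id = @parent_id,
     @server_type = 0,
     @server_group_id = @group_id OUTPUT;

-- Register an instance (illustrative name) as a member of the new group.
EXEC msdb.dbo.sp_sysmanagement_add_shared_registered_server
     @name = N'SQL2K8R2 Production 1',
     @server_group_id = @group_id,
     @server_name = N'DBIA-SSQA\SQL2K8R2',
     @description = N'',
     @server_type = 0,
     @server_id = @server_id OUTPUT;
```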
To let this single point of resource execute one query against multiple servers, access the registration properties of a group of servers; for a specific server group there are options for a new query, Object Explorer, evaluating policies, and importing policies.

The CMS feature is a powerful manageability feature and is easy to set up. However, it is potentially dangerous to apply a policy that is not suitable for a given set of server groups. It is therefore best to give the CMS query editor a distinctive color that easily catches the eye, to help avoid any unintentional execution of queries or policies. To enable such a representation, follow these steps:

1. Open SQL Server Management Studio. On the Tools menu, click Options.
2. In Options, select Editor Tab and Status Bar.
3. Select the Status Bar Layout and Colors section on the right-hand side of the Options window. In the following screenshot, we have selected Pink, which highlights that extra care is needed before executing any queries.

Now, let us walk through creating a CMS server group, evaluating policies, and executing queries on multiple servers:

1. Open SQL Server Management Studio. On the View menu, click Registered Servers, expand Database Engine, right-click Central Management Servers, point to New, and then click Central Management Servers.
2. In the New Server Registration dialog box, register the instance of SQL Server that will be the Central Management Server.
3. In Registered Servers, right-click the Central Management Server, point to New, and click New Server Group. Type a group name and description, for instance UAT or Production, and click OK; see the following screenshot.
4. To register servers to these individual server groups, in Registered Servers right-click the Central Management Server group and then click New Server Registration.
5. In the New Server Registration dialog box, register one or more instances that belong to the relevant server group, such as Development or Production. As a best practice, it is ideal to create individual groups within CMS to enable single-query execution against multiple servers, as in the following screenshot.

SQL Server 2008 R2 provides a Policy-Based Management framework consisting of multiple sets of policy files that you can import as best-practice policies. You can then evaluate the policies against a target set that includes instances, instance objects, databases, or database objects. Policies can be evaluated manually, on a schedule, or in response to an event. To evaluate a policy, complete the following steps:

1. In SQL Server Management Studio, click the View menu, select Registered Servers, and expand Central Management Server.
2. Right-click the Production server group and click Evaluate Policies. The Evaluate Policies screen will be presented as shown in the next screenshot.
3. Click Source to choose the source.
The relevant best-practice policies are stored as XML files in the %ProgramFiles%\Microsoft SQL Server\100\Tools\Policies\DatabaseEngine\1033 directory, which is shown in the next screenshot. For this recipe, we have chosen a policy from the SQL Server 2008 R2 Best Practices Analyzer, downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=0fd439d7-4bff-4df7-a52f-9a1be8725591 and installed on the server.

Let us evaluate the Backup and Data File Location policy against the production instances. This policy checks whether the database and its backups are on separate backup devices: if they are on the same backup device, and the device that contains the database fails, your backups will be unavailable. Putting the data and backups on separate devices also optimizes I/O performance, both for production use of the database and for writing the backups, as shown in the following screenshot. (This is one of the Maintenance best-practice policies from the Microsoft SQL Server 2008 R2 Best Practices Analyzer.)

The ease of executing a single query against multiple servers at the same time is one of my favorite manageability features. The results returned by such a query can be combined into a single results pane, or shown in separate results panes, and can include additional columns for the server name and the login used by the query on each server. To accomplish this, let us execute a query to obtain the ProductVersion, ProductLevel, Edition, and EngineEdition values for the instances registered in the Production server group:

1. In SQL Server Management Studio, on the View menu, click Registered Servers.
2. Expand a Central Management Server, right-click a server group, point to Connect, and then click New Query.
3. In the query editor, execute the following T-SQL statement:

   SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
          SERVERPROPERTY('ProductLevel') AS ProductLevel,
          SERVERPROPERTY('Edition') AS Edition,
          SERVERPROPERTY('EngineEdition') AS EngineEdition;
   GO

The results are displayed as in the preceding screenshot; the pink bar at the bottom indicates that this is the Production group. To change the multi-server results options, in Management Studio, on the Tools menu, click Options. Expand Query Results, expand SQL Server, and then click Multi-server Results. On the Multi-server Results page, specify the option settings that you want, and then click OK.

We have now completed the CMS feature implementation in the SQL Server data platform.

How it works...

The SQL Server instance that is designated as a Central Management Server maintains server groups, which maintain the connection information for one or more instances of SQL Server. The CMS and all subsidiary servers on the network are registered using Windows Authentication only, whereas local server groups can be registered using either Windows Authentication or SQL Server Authentication. The storage for centralized management server groups, connection information, and authentication details is available in the msdb system database:

- dbo.sysmanagement_shared_registered_servers_internal and dbo.sysmanagement_shared_server_groups_internal (system tables)
- dbo.sysmanagement_shared_registered_servers and dbo.sysmanagement_shared_server_groups (system views)

The user and permission information on the registered servers may vary, depending on the permissions of the user making the Windows Authentication connection.
For instance, in our recipe, on the dbia-ssqa\SQL2K8 instance the login is a member of the SYSADMIN fixed server role, while on the DBIA-SSQA\SSQA instance the login is a member of the DBCREATOR fixed server role. In this case, we may not be able to run certain SYSADMIN-privileged statements on the DBIA-SSQA\SSQA instance.

There's more...

Using SQL Server 2008 R2 Management Studio with CMS, we can also manage instances of earlier SQL Server versions, such as 2005 (90), 2000 (80), and 7.0 (70), which is truly central management of instances.
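To see at a glance what the CMS currently knows about, the two msdb system views listed in the How it works... section can be joined directly. A small sketch, with column names as documented for the 2008 R2 views:

```sql
-- List every CMS server group together with its registered member servers.
SELECT g.name        AS server_group,
       s.name        AS registered_name,
       s.server_name AS instance_name,
       s.description AS description
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
  ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.name;
```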


Creating an Analysis Services Cube with Visual Studio 2008 - Part 1

Packt
22 Oct 2009
4 min read
Reviewing Jayaram's other OLAP-related articles may greatly help in understanding this article.

Installing SQL Server Analysis Services

SQL Server Analysis Services (SSAS) is installed during the SQL Server 2008 (February 2008 CTP) installation. At that time, the service accounts for the various components are also set up, as shown in the screenshot taken during a typical installation. For full details, review the following article on SQL Server 2008 installation. Analysis Services is also configured during the same installation, as shown, displaying the default directories for data, log files, backups, and so on.

Starting and Stopping the SSAS

Analysis Services can be started and stopped from Services on the local machine, which can be accessed from All Programs | Control Panel | Administrative Tools | Services.

Connecting to SSAS from SQL Server Management Studio

File | Connect Object Explorer... in SQL Server Management Studio opens the UI shown below, where you can provide login information and connect to Analysis Services. Out of the box, SSAS comes with a simple folder structure, as shown; it does not come with any databases. You may create a new database (not to be confused with databases on the Database Engine of SQL Server 2008). The database you create here will have the same folder structure as the Analysis Services project folder structure you start with in VS 2008.

Creating an Analysis Services Project in VS 2008

File | New | Project... opens the New Project window. Choose Business Intelligence Projects and, from the Visual Studio installed templates, choose Analysis Services Project. Change the default project name to Nwind2008. This creates an empty project with a number of folders, as shown. Although this is an empty structure, you can deploy it to the server: right-click with Nwind2008 highlighted and choose Deploy. After the processing completes successfully, you will see that Nwind2008 is now available on the localhost.

Adding a Data Source

Right-click the Data Sources folder to display the New Data Source... menu item, as shown. Click New Data Source... to open the Data Source Wizard, shown in the next figure. Read the instructions on this page carefully; in particular, note that in order to change features of the data source you need to call up the next wizard, the Data Source View Wizard. Click Next. This opens the "Select how to define the connection" page of the wizard, shown in the next figure. Here you can create a data source based on an existing connection, based on a new connection, or based on another object. The window displays the existing connections under Data Connections and, for each highlighted connection, shows its properties on the right-hand side. The New... button allows you to create a new connection, guiding you through further interactive windows. Click Hodentek2.Northwind, that is, <Server Name>.<Database Name> (this will be different in your case), to choose this data source, and click Next. This opens the Impersonation page of the Data Source Wizard, shown with all text boxes empty but with the Windows user name and password as the default choice.

The Northwind database is not installed with SQL Server 2008. You may follow the procedure described in these articles to bring the Northwind database to SQL Server 2008:
Copying a Database from SQL Server 2005 to SQL Server 2008 using the Copy Database Wizard, and Moving a Database from SQL Server 2005 to SQL Server 2008 in Three Steps.

Change the impersonation option to Use the service account and click Next. This displays the Completing the Wizard page, where the data source name and its connection information are shown. Click Finish. This adds the chosen data source to the Data Sources folder, as shown. Northwind.ds is the file that is added; any changes to the source can be made by double-clicking this file and applying the changes.


Core SQL Server 2008 R2 Technologies: Deploying Master Data Services

Packt
27 May 2011
8 min read
Microsoft SQL Server 2008 R2 Administration Cookbook: over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system.

Installing and configuring a Master Data Services solution

The administrative control of data is a primary task in any line-of-business management; as long as the data flow meets enterprise-level business needs, it serves its purpose. The essence of Master Data Management (MDM) is realized when data is used for operational and analytical processes; those management processes must be able to clearly define the business concepts, identify the different ways the data sets represent commonly understood concepts, and integrate the data into a consistent view that is available across the organization.

SQL Server 2008 R2 introduces Master Data Services (MDS), which is designed to provide hierarchies that can be customized to group and summarize master data. This kind of hierarchical representation helps change reporting structures and incorporate new aspects of the business rules while reducing data duplication. Used well, MDS helps us maintain a central database of master data that is managed for reporting and analysis. In this recipe, we will go through the important steps to install and configure an MDS solution on an existing data platform. The core components of MDS are the MDS Configuration Manager, MDS Manager, and the MDS web service.

Getting ready

Certain pre-installation tasks are required to install Master Data Services (MDS), and the setup must meet minimum requirements. The installation requirements are divided into MDS setup, MDM web application services, and the MDS database.

The MDS setup prerequisites are as follows:

- MDS setup is possible only on the 64-bit versions of the SQL Server 2008 R2 Datacenter, Enterprise, and Developer editions.
- The supported operating systems are the Enterprise editions of Windows Server 2008 R2 and Windows Server 2008, and the Ultimate editions of Windows 7 and Windows Vista; the Windows 7 and Vista Professional editions also support this feature.
- Microsoft .NET Framework 3.5 Service Pack 1 is required; to download it, refer to http://go.microsoft.com/fwlink/?LinkId=128220.
- The user account used to install MDS must be a member of the Administrators group on the local server.

The MDM web application and web services prerequisites are as follows:

- MDM is a web application hosted by IIS.
- On a Windows Server 2008 or Windows Server 2008 R2 machine, use Server Manager to install the Web Server (IIS) role, and refer to http://msdn.microsoft.com/en-us/library/ee633744.aspx for the required role services.

The MDS database prerequisites are as follows:

- MDS requires a database to support the MDM web application and web services.
- In this recipe, the machine that hosts the MDS database is running an instance of the SQL Server 2008 R2 database engine.

How to do it...

To implement the Master Data Management solution, MDS must be installed; it has two main components, an MDS database component and an MDS web component. Ensure that all of the prerequisites mentioned in the earlier sections are in place. By default, MDS is not installed as part of the regular SQL Server 2008 R2 installation.
We need to install MDS separately, as follows:

1. On the SQL Server 2008 R2 installation media, navigate to the MasterDataServices\X64\1033_ENU folder and double-click the masterdataservices.msi file, which presents a welcome screen.
2. As with the SQL Server installation screens, the MDS setup requires only default information on the remaining, self-explanatory steps: License Agreement, Registration Information, Feature Selection, and Ready to Install, which complete the Master Data Services installation.

Now that we have installed the Master Data Services components on the server, we need to configure MDS to make it available for use. The MDM configuration is a two-fold phase: Databases and Web Configuration.

1. To launch the MDS Configuration Manager, go to Start | All Programs | SQL Server 2008 R2 | Master Data Services | Configuration Manager. The tabs on the left-hand side represent Databases and Web Configuration. The Databases page configures and stores the MDS configuration; Web Configuration covers the MDS web application and the web service for application integration.
2. Click on the Databases page to create an MDS database to configure and store MDS objects. The database that is used or created must be on SQL Server 2008 R2. To ensure that the supplied configuration is working, click the Test Connection button. Click Next to create a new Master Data Services database.
3. On the Create Database screen, provide the database name and collation (choose the SQL Server default collation or a Windows collation). The collation is a single selection: when we choose the SQL Server default collation, the Windows collation and the other options (binary/case-sensitive) are disabled. Collation is an important step to plan for when Unicode data is stored and managed within the database, so it is essential to plan proper collation settings during the SQL Server installation.
4. The remaining two screens, Service Account and Administrator Account, are self-explanatory. The user name specified on the Service Account screen is added as a member of the dbo role for the DBIASSQA_MDS_DB database. The user name specified on the Administrator Account screen will be the site administrator (super user) and will also have permission to log on and add users to MDM applications. The account must be a domain account, which is required for the application pools to connect to the database; the database is then accessed via the MDS Manager site and web services based on these permissions.
5. The Summary screen is an informative window that helps you review the details; click Next to start the configuration. The status of the MDS database creation and configuration will be shown as Success. Click Finish to complete the Create Database wizard.
6. Proceed to the system settings that hold the metadata for the MDS repository; it is ideal to leave all the System Settings at their defaults, except for the Master Data Manager URL for notifications value.

The Databases part of the MDS Configuration Manager is now complete, so we will proceed to the Web Configuration tab. Click on the Web Configuration option, which will present the following screenshot.
The configuration provides various options, such as:

- Select the required website (from the Web Site drop-down) and the web applications that hold MDS and can be configured.
- Select a required website that does not yet have MDS and click Create Application.
- Create a new website by clicking Create Site (specific to MDS only), which automatically creates a web application.

To choose an existing SQL Server instance and database for the MDS web application to access, click the Select button. For this recipe, we will use an existing website, DBIASSQA-MDS, so select the existing SQL Server instance DBIA-SSQA\SQL2K8R2 and the database DBIASSQA_MDS_DB. To configure the web services and enable programmatic access to MDS, click the Enable Web services for this Web application option. Click Apply to complete the MDS configuration. This completes the configuration and presents a pop-up indicating that the MDS configuration is complete. On the Configuration Complete screen, choose the Launch Web application in browser option and click OK to open the MDS Getting Started page in the browser.

This completes the configuration of the Master Data Services database and web components, enabling you to offer the MDS solution on an existing data platform.

How it works...

The Master Data Services setup hosts the MDS web application, installs the relevant MDS folders and files at the chosen location, and assigns permissions to the objects. The setup registers the MDS assemblies in the Global Assembly Cache (GAC), registers the MDS snap-in for Windows PowerShell, and installs the MDS Configuration Manager. A new Windows group called MDS_ServiceAccount is created to contain the MDS service accounts for application pools. The MDS installation path gains a folder, MDSTempDir, where temporary compilation files are compiled for the MDM web application, with permissions assigned to the MDS_ServiceAccount group.

The web application configuration section follows a different workflow depending on the option chosen. Create Site includes specifying settings for the new site, with the MDS web application configured as the root web application in the site; this process does not allow you to configure the MDS web application under a virtual path or to specify an alias for the application. The Create New Application process, by contrast, enables you to select a website in which to create an MDS application, and the Create Application screen allows you to specify settings for the new application; the MDS application will be configured as an application in the selected site at the virtual path and with the specified alias.
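As a final sanity check once the wizards have finished, the database side of the configuration can be verified from any query window. A minimal sketch, assuming the example database name DBIASSQA_MDS_DB used in this recipe:

```sql
-- Confirm the MDS database is online and inspect the collation that the
-- Create Database screen applied, since collation was a key planning step.
SELECT DATABASEPROPERTYEX(N'DBIASSQA_MDS_DB', 'Status')    AS database_status,
       DATABASEPROPERTYEX(N'DBIASSQA_MDS_DB', 'Collation') AS database_collation;
```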


Tuning server performance with memory management and swap

Packt
24 Jun 2015
7 min read
In this article, by Jonathan Hobson, the author of Troubleshooting CentOS, we will learn about memory management, swap, and swappiness. (For more resources related to this topic, see here.)

A deeper understanding of the underlying active processes in CentOS 7 is an essential skill for any troubleshooter. From high load averages to slow response times, and from system overloads to dead and dying processes, there comes a time when every server may start to feel sluggish, act impoverished, or fail to respond, and as a consequence it will require your immediate attention. However you look at it, memory usage remains critical to the life cycle of a system; whether you are maintaining system health or troubleshooting a particular service or application, you will always need to remember that memory is a critical resource for your system. For this reason, we will begin by calling the free command in the following way:

# free -m

The main elements of the preceding command's output will look similar to this:

                    total    used    free   shared   buffers   cached
Mem:                 1837     274    1563        8         0      108
-/+ buffers/cache:            164    1673
Swap:                2063       0    2063

In the example shown, I have used the -m option to ensure that the output is formatted in megabytes, which makes it easier to read. For the sake of troubleshooting, rather than trying to understand every numeric value shown, let's reduce the scope of the output to the relevant area of concern:

-/+ buffers/cache:            164    1673

The importance of this line is that it accounts for the associated buffers and caches, illustrating what memory is currently being used and what is held in reserve. The first value indicates how much memory is being used; the second tells us how much memory is available to our applications. In this example, that translates into 164 MB of used memory and 1673 MB of available memory. Bearing this in mind, let me draw your attention to the final line so that we can examine the importance of swap:

Swap:                2063       0    2063

Swapping typically occurs when memory usage is impacting performance. As we can see from the preceding example, the first value tells us the total amount of system swap, 2063 MB; the second indicates how much swap is being used, 0 MB; and the third shows the amount of swap still available to the system as a whole, 2063 MB. So yes, based on the example data shown here, we can conclude that this is a healthy system and no swap is being used. While we are here, though, let's use this time to discover more about the swap space on your system.

To begin, we will revisit the proc directory and reveal the total and used swap size by typing the following command:

# cat /proc/swaps

Assuming that you understand the output shown, you should then investigate your system's level of swappiness with the following command:

# cat /proc/sys/vm/swappiness

Having done this, you will see a numeric value in the range 0-100. This value is a percentage; it implies that if your system has a value of 30, for example, it will begin to use swap memory at 70 percent occupation of RAM. The default for most Linux systems is a notional value between 30 and 60, but you can use either of the following commands to temporarily change and modify the swappiness of your system.
This can be achieved by replacing X with a numeric value from 1 to 100 and typing:

# echo X > /proc/sys/vm/swappiness

Or, more specifically, it can also be achieved with:

# sysctl -w vm.swappiness=X

If you change your mind at any point, you have two options to ensure that no permanent changes are made: either repeat one of the preceding commands with the original value, or issue a full system reboot. On the other hand, if you want the change to persist, you should edit the /etc/sysctl.conf file and add your swappiness preference in the following way:

vm.swappiness=X

When complete, simply save and close the file to ensure that the changes take effect.

The level of swappiness controls the tendency of the kernel to move a process out of physical RAM onto the swap disk. This is memory management at work, but it is important to realize that swapping will not occur immediately, as the level of swappiness is expressed as a percentage value. For this reason, the process of swapping should be viewed more as a measure of preference when using the cache, and as every administrator will know, you have the option to clear the swap by using the commands swapoff -a and swapon -a to achieve the desired result.

The golden rule is that a system displaying a swappiness close to the maximum value (100) will prefer to begin swapping inactive pages, because a value of 100 is representative of 0 percent occupation of RAM. By comparison, the closer your system is to the lowest value (0), the less likely swapping is to occur, as 0 is representative of 100 percent occupation of RAM. Generally speaking, we would all probably agree that systems with a very large pool of RAM would not benefit from aggressive swapping. However, just to confuse things further, let's look at it in a different way. We all know that a desktop computer will benefit from a low swappiness value, but in certain situations you may also find that a system with a large pool of RAM running batch jobs benefits from a moderate to aggressive swap, much like a system that attempts to do a lot while using only small amounts of RAM. So, in reality, there are no hard and fast rules: the use of swap should be based on the needs of the system in question rather than on a single solution applied across the board.

Taking this further, special care and consideration should be taken when changing swap values, because RAM that is not used by an application is used as disk cache. By decreasing swappiness, you increase the chance of an application not being swapped out, but you also decrease the overall size of the disk cache, which can make disk access slower. Conversely, if you increase the preference to swap, then, because hard disks are slower than memory modules, it can lead to slower response times across the overall system. Swapping can be confusing, but knowing this, we can also appreciate the hidden irony of swappiness: as Newton's third law of motion states, for every action there is an equal and opposite reaction, and finding the optimum swappiness value may require some additional experimentation.

Summary

In this article, we learned some basic yet vital commands that help us gauge and maintain server performance with the help of swappiness.


Data Processing using Derived Column and Aggregate Data Transformations

Packt
22 Oct 2009
3 min read
Details of Data Processing

This package accomplishes the following:

1. Connects to SQL Server 2005 and accesses the MyNorthwind (a copy of Northwind) database.
2. A DataReader Source extracts data using the following query:

   Select OrderID, Quantity, UnitPrice from [Order Details]

3. A Derived Column transformation adds a column to the data flow that combines data from two other columns.
4. A Conditional Split transformation divides the data flow into two flows based on a value in the derived column. The rows that satisfy the condition are captured in a Recordset destination; the rest of the flow (the "bad" data) is routed to the next processor, the Aggregate transformation.
5. The Aggregate transformation groups the data coming into it, showing group averages of the derived column. This grouped data then goes into a second Recordset destination.

Although the final data ends up in Recordset destinations, it could also be persisted to other destinations such as DataReader, OLE DB, and the file system.

Implementation in the Visual Studio 2005 Designer

Before going into the details of the data processing, let us take a look at the building blocks from the VS 2005 Toolbox that can be used to implement it, and the interconnections between those building blocks. In addition to the data flow controls, three data viewers are also inserted to stop and inspect the data before it goes any further.

Data Extraction

The data is extracted from the MyNorthwind database on SQL Server 2005. While configuring the DataReader, you make use of a Connection Manager, whose details are shown in its editor in the next figure. The connection uses the SqlClient data provider. The SQL Server is a default installation with the name [.], and as it is set for SQL Server authentication, a user name and password are required. The database connection is chosen from the drop-down as shown. The connectivity test can be performed, and the DataReader can now extract data from the database. The DataReader is added to the design surface by double-clicking the control in the Toolbox; it uses the Connection Manager to access SQL Server, as you can see in its Connection Manager tab in the next figure. The DataReader's properties show some of the other configuration details made in the other tabs of the editor; in particular, they show the query used to extract the data. It extracts three columns from the Order Details table in the MyNorthwind database and provides them to the next component in the data processing implementation.
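For checking the package output, the same flow can be restated in plain T-SQL. The derived expression (Quantity * UnitPrice) and the split threshold used below are assumptions for illustration only, since the article does not spell them out:

```sql
-- Rows the Conditional Split would route to the first Recordset destination.
SELECT OrderID, Quantity, UnitPrice,
       Quantity * UnitPrice AS LineTotal      -- the Derived Column
FROM [Order Details]
WHERE Quantity * UnitPrice >= 100;            -- assumed split condition

-- The remaining rows, grouped the way the Aggregate transformation would.
SELECT OrderID,
       AVG(Quantity * UnitPrice) AS AvgLineTotal
FROM [Order Details]
WHERE Quantity * UnitPrice < 100              -- assumed split condition
GROUP BY OrderID;
```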


IBM WebSphere Application Server Security: A Threefold View

Packt
21 Feb 2011
8 min read
IBM WebSphere Application Server v7.0 Security: secure your IBM WebSphere applications with Java EE and JAAS security standards using this book and eBook.

Imagine yourself at an athletic event. Hey! No, no, you are at the right place. Yes, this is a technical article; just bear with me for a minute. Well, now that the little misunderstanding is out of the way, let's go back to the beginning.

The home crowd is really excited about the performance of its team. However, that superb performance has not yet been reflected on the scoreboard. When that performance finally pays off with the long-awaited score, 'it' happens! The score gets called off. It is not at all unlikely that a controversial call would be made, or worse yet, not made! Or so we think. There is a group of players and fans of the team that just scored who 'see' the play as a masterpiece of athletic execution. Then there is another group, the players and coaches of the visiting team, who clearly see a violation of the rules just before the score. And there is a third group, the referees. Well, who knows what they see! The fact is that for the same action, there may be several perceptions of the same set of events. Albert Einstein and other scientists provided a great example of multi-perception with the wave-particle duality concept.

In a similar fashion, a WebSphere-based environment can be analyzed in a number of forms. None of the forms or views is absolutely correct or incorrect. Each view, however, helps to focus on the appropriate set of components and their relationships for a given situation or need. WebSphere Application Server technology is a long and complex subject. This article provides three WAS ND environment views, emphasizing security, which will help the reader connect individual security tasks to the big picture. One view helps the WebSphere administrator relate isolated security tasks to the overall middleware infrastructure (for example, messaging systems, directory services, and backend databases, to name a few); this is useful in possible interactions with the teams responsible for such technologies. A second view helps the administrator link specific security configuration tasks to the set of components of a particular enterprise application (for example, EJB applications, the Service Integration Bus, and many more); this view will help the administrator relate to possible development team needs. The article also includes a third view, one that focuses on the J2EE technology stack as it relates to security, which can help blend the former two views.

Enterprise Application-Server infrastructure architecture view

This article starts with the Application Server infrastructure architecture view. The actual order of these major sub-sections is really unimportant, but since there must be a beginning, the infrastructure architecture view is selected. A possibly more formal name for what this section conveys would be the enterprise J2EE application-server infrastructure architecture. In this way, the scope of technologies that make up the application-centric architecture is well defined as that pertaining to J2EE applications. Nevertheless, this type of architecture is not exclusive to a WebSphere Application Server Network Deployment environment. If the architecture does not mention specific implementations of a function, it is a generic view of the architecture.
On the other hand, if the architecture view defines or includes specific branded technologies for a function (for example, IHS for the web server function), then it is a specialized architecture. The point is that other J2EE application server products outside the WebSphere umbrella may use the same generic type of infrastructure architecture. This view, therefore, has to do with J2EE application servers and the enterprise infrastructure components needed to sustain them in a way that they can host a variety of enterprise applications (also known as J2EE applications). The following diagram provides an example of a basic WebSphere Application Server infrastructure architecture topology. (The use of multiple user registries is new in version 7.0.)

Simple infrastructure architecture characteristics

The architecture is basic in that it shows only the minimum infrastructure components needed by a WebSphere Application Server infrastructure to become functional. In this diagram, the infrastructure elements are presented as they relate to each other functionally. In other words, the diagram is generic enough that it identifies components only by their main function. For instance, the infrastructure diagram includes, among others, proxy and messaging servers; nothing in the diagram implies the mapping of a given functional component to a specific physical element such as an OS server or a specialized appliance.

Branded infrastructure elements

The infrastructure architecture presented in the diagram depicts a WebSphere clustered environment. The only technologies identified by their brand are the IBM HTTP Server (IHS) web server component (represented by the two light-blue rectangles labeled IHS) and the WebSphere Application Server (WAS) nodes (represented by the green rectangles labeled WAS). These two components offer a variety of architectural choices, such as:

- Hosting both components in a single OS host under a WAS node
- Hosting each component in its own OS host on the same sub-network (normally an intranet)
- Hosting each component in different OS hosts on different sub-networks (normally a DMZ for the IHS and an intranet for the WAS)

The choice of a specific architecture will be made in terms of a variety of requirements for your environment, including security requirements.

Generic infrastructure components

The infrastructure diagram also includes a number of components that are identified only by their function, with no information about the specific technology or product implementing that function. For instance, there are four light-yellow shapes labeled DB, Messaging, Legacy Systems, and Service Providers. In your environment, there may be choices to make for each specific component. Take, for instance, the DB component: identifying which DB server or servers will be part of the architecture depends on the type of database employed by the enterprise application being hosted. Some corporations limit the number of database types to fewer than a handful. Nevertheless, the objective of the WebSphere administrator responsible for the environment is to identify which types of databases will be interfacing with the WAS environment. Once that is determined, the appropriate brand/product can be added to the architecture diagram.
Other technologies and components that need to be identified in a similar way are the user registry (represented by the light-purple shape labeled User Registry) and the security access component (represented in the diagram by the yellow oval labeled Security Access). A common type of user registry used in WebSphere environments is an LDAP server, and a popular security access product is SiteMinder (formerly by Netegrity, now offered by CA). The remaining group of elements in the architecture front-ends the IHS/WAS environment to provide high availability and added security. Proxy servers may or may not be used, depending on whether the IHS function can be brought into the DMZ in its own OS host. Load balancers are normally implemented with specialized appliances offered by companies such as Cisco or F5, although some software products can also implement this function; an example of the latter is the IBM WebSphere Edge suite. In general, most corporations already own and use firewalls and load balancers, so for the WebSphere administrator it is just a matter of integrating them into the WebSphere infrastructure.

Using the infrastructure architecture view

Some of the benefits of picturing your WebSphere environment using the infrastructure architecture view come from realizing the following important points:

- Identify the technology, or the technology choices, to be used to implement a specific function, for instance, what type of user registry to use.
- An immediate result of the previous point is identifying the corporate group the WebSphere administrator will work with in order to integrate (that is, configure) that technology with WebSphere.
- Once the initial architecture has been laid out, the WebSphere administrator will be responsible for identifying the type of security involved in securing the interactions between the various infrastructure architecture components. For instance, what type of communication will take place between the IHS and the Security Access component, if any? What is the best way to secure the communication channel? How is the IHS component authenticated to the Security Access component?

Microsoft SQL Server 2008 R2: Hierarchies, Collections, and MDS Metadata

Packt
21 Jul 2011
9 min read
Microsoft SQL Server 2008 R2 Master Data Services: manage and maintain your organization's master data effectively with Microsoft SQL Server 2008 R2 Master Data Services.

(For more resources on this subject, see here.)

The reader is advised to refer to the previous article on Creating and Using Models, since this article is related to it.

Master Data Services includes a Hierarchy Management feature, where we can:

- Browse all levels of a hierarchy
- Move members within a hierarchy
- Access the Explorer grid, and all its functionality, for all members of a given hierarchy

As we've seen already, there are two types of hierarchies in MDS: Derived Hierarchies and Explicit Hierarchies. We will now look at how to create and use both types.

Derived Hierarchies

In our example scenario, as we have stores in many different cities and states, we have a requirement to create a "Stores by Geography" hierarchy. In order to create the hierarchy, carry out the following steps:

1. Navigate to the System Administration function, which can be accessed from the Master Data Manager home page.
2. Hover over the Manage menu and click on the Derived Hierarchies menu item, which opens the Derived Hierarchy Maintenance page.
3. Click on the green plus icon to add a Derived Hierarchy, which opens the Add Derived Hierarchy page.
4. Enter Stores By Geography as the Derived Hierarchy Name and click Save.
5. The Edit Derived Hierarchy page will now be displayed, where we can build the hierarchy. On the left-hand side of the screen we can pick entities to be in our hierarchy, the middle pane displays the hierarchy in its current state, and a preview of the hierarchy with real data is available on the right-hand side.
6. Drag the Store entity from the left-hand side and drop it onto the red Current Levels : Stores By Geography node in the center of the screen.
7. The choice of entities on the left-hand side will now change to the only two entities related to Store, namely City and StoreType. Repeat the drag-and-drop process, this time dragging the City entity onto the red Current Levels node.
8. The Available Entities and Hierarchies pane will now be updated to show the State entity, as this is the only entity related to the City entity. Drag the State entity over to the red Current Levels node, above the City entity.
9. The Available Entities and Hierarchies pane will now show Country. Drag the Country entity over to the red Current Levels node, above the State entity. This is the last step in building our Stores By Geography hierarchy, which is now complete.

We will now look at how we can browse and edit our new hierarchy.

Exploring Derived Hierarchies

Before we make any changes to the Derived Hierarchy, we will explore the user interface so that we are comfortable with how it is used. Carry out the following steps to browse the new hierarchy features:

1. Navigate to the home page and select the Explorer function.
2. Within the Explorer function, hover over the Hierarchies menu, where a menu item called Derived: Stores By Geography should appear. Click on this item to display the Derived Hierarchy, as shown below.

The buttons above the hierarchy tree structure are as follows (from left to right):

- Pin Selected Item: hides all members apart from the selected item and all of its descendants. This option can be useful when browsing large hierarchies.
- Locate Parent of Selected Item: the immediate parent of the selected member could be hidden if someone has chosen to pin an item (as above). Locate Parent of Selected Item locates and displays the member's parent, as well as any other children of that parent.
- Refresh Hierarchy: refreshes the hierarchy tree to display the latest version, as edits can occur outside the immediate tree structure.
- Show/Hide Names: toggles the hierarchy view between the member code and name, or just the code. The default is to show both the member name and code.
- Show/Hide Attributes: on the right-hand side of the screen (not shown), the children of the selected item are shown in the Explorer grid, along with all their attributes. This button shows or hides the Explorer grid.
- View Metadata: displays a pop-up window showing the metadata for the selected member. We will discuss metadata towards the end of this article.

Continue with the following steps:

1. Select the DE {Germany} member by clicking on it. Note that the checkboxes are not how members are selected; clicking on the member name selects the member.
2. Use the Pin Selected Item button to pin the DE {Germany} member, which hides the siblings of Germany, as shown below.
3. To now locate the parent of DE {Germany} and display the parent's other children (for example, USA and United Kingdom), click on DE {Germany} and then click the Locate Parent of Selected Item button. The hierarchy tree reverts to the original structure that we encountered.
4. Now that we have returned to the original hierarchy structure, expand the US member until the member CA {California} is visible. Click on this member to display some of the cities that we have loaded for the State of California.

Editing multiple entities

The above point illustrates one of the useful features of the hierarchy editor. Although we can edit individual entities using their respective Explorer grids, within a Derived Hierarchy we can edit multiple entities on a single page. We don't actually need to edit the cities for the moment, but we do want to look at showing and hiding the Explorer grid. Click the Show/Hide Attributes button to hide the Explorer grid, then click it again to make the grid reappear.

Finally, we are able to look at the metadata for the Derived Hierarchy. Click the View Metadata button to open the Metadata Explorer, shown below. This is where we would look for any auxiliary information about the Derived Hierarchy, such as a description explaining what the hierarchy contains. We'll look at metadata in detail at the end of this article.

We will now look at how to add a new member in a Derived Hierarchy.

Adding a member in a Derived Hierarchy

Adding a member in a Derived Hierarchy achieves exactly the same thing as adding a member in the entity itself. The difference is that the member-addition process, when carried out in a Derived Hierarchy, is slightly simplified, as the domain attribute (for example, City in the case of the Store entity) is automatically completed by MDS. This is because in a Derived Hierarchy we choose to add a Store in a particular City, which removes the need to specify the City itself.

In our example scenario, we wish to open a new store in Denver. Carry out the following steps to add the new store:

1. Expand the US {United States} member of the Stores By Geography hierarchy, and then expand the CO {Colorado} member.
2. Click on the 136 {Denver} member.
3. On the far right-hand side of the screen, the Stores for Denver (of which there are none) will be shown. Click the green plus icon to begin adding a Store.
4. Enter AW Denver as the Name and 052 as the Code. Click the save icon to create the member.
5. Click the pencil icon to edit the attributes of the new member. Note that the City attribute is already completed for us. Complete the remaining attributes with test data of your choice.
6. Click the save icon to save the attribute values, then click the green back-arrow button at the top of the screen to return to the Derived Hierarchy.

Notice that we now have a new Store in the Derived Hierarchy, as well as a new row in the Explorer grid on the right-hand side of the screen. We will now continue exploring the functionality of the hierarchy interface by using Explicit Hierarchies.

Explicit Hierarchies

Whereas Derived Hierarchies rely on the relationships between different entities in order to exist, all the members of an Explicit Hierarchy come from a single entity. The hierarchy is made by creating explicit relationships between leaf members and the consolidated members that give the hierarchy more than one level. Explicit Hierarchies are useful for representing a ragged hierarchy, which is a hierarchy where the leaf members exist at different levels across the hierarchy.

In our example scenario, we wish to create a hierarchy that shows the reporting structures for our stores. Most stores report to a regional center, with the regional centers reporting to Head Office; however, some stores that are deemed to be important report directly to Head Office, which is why we need the Explicit Hierarchy.

Creating an Explicit Hierarchy

As we saw when creating the original Store entity in the previous article, an Explicit Hierarchy can be created for us automatically when we create an entity. While that is always an option, right now we will cover how to do this manually. In order to create the Explicit Hierarchy, carry out the following steps:

1. Navigate to the System Administration function.
2. Hover over the Manage menu and click on the Entities menu item.
3. Select the Store entity and then click the pencil icon to edit the entity.
4. Select Yes from the Enable explicit hierarchies and collections drop-down.
5. Enter Store Reporting as the Explicit hierarchy name.
6. Uncheck the checkbox called Include all leaf members in mandatory hierarchy. If the checkbox is unchecked, a special hierarchy node called Unused is created, where leaf members that are not required in the hierarchy reside. If the checkbox is checked, all leaf members will be included in the Explicit Hierarchy. This is shown next.
7. Click the save icon to save the changes to the entity, which returns us to the Entity Maintenance screen and concludes the creation of the hierarchy.
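Returning to the member we added earlier: for bulk loads, the same kind of member could be created without the UI by writing to the MDS staging tables and then processing the batch from the Integration Management area of Master Data Manager. This is a hypothetical sketch only; the staging table and column names below are recalled from the 2008 R2 staging schema, so verify them against your MDS database before relying on this.

```sql
-- Hypothetical sketch: stage the AW Denver member (code 052) for the Store
-- entity of the Store model; MemberType_ID 1 is assumed to denote a leaf member.
INSERT INTO mdm.tblStgMember
       (ModelName, EntityName, HierarchyName, MemberType_ID, MemberName, MemberCode)
VALUES (N'Store',  N'Store',   NULL,          1,             N'AW Denver', N'052');
-- The staged row must then be processed as a batch from Master Data Manager
-- before the member appears in the entity.
```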


Microsoft SQL Server 2008 R2 MDS: Creating and Using Models

Packt
21 Jul 2011
7 min read
Microsoft SQL Server 2008 R2 Master Data Services: manage and maintain your organization's master data effectively with Microsoft SQL Server 2008 R2 Master Data Services.

MDS object model overview

The full MDS object model contains objects such as entities, attributes, members, and collections, as well as a number of other objects, and can be summarized by the following diagram. As the diagram indicates, the Model object is the highest-level object in the MDS object model. It represents an MDM subject area, such as Product or Customer. It would therefore be normal to have a model called Product, which would contain several different entities, such as Color, Brand, Style, and an entity called Product itself. Entities have one or more attributes, and instances of entities are known as members. Rather than an entity simply containing attributes, MDS also allows attributes to be categorized into groups, known as Attribute Groups.

There are two types of hierarchies within MDS, namely Derived Hierarchies and Explicit Hierarchies. Derived Hierarchies are created from the relationships that exist between entities. For example, the Customer entity may have an attribute called City that is based on the City entity. In turn, the City entity may have a State attribute based on the State entity; the State entity would then have a Country attribute, and so on. This is shown below. Using the relationships between the entities, a Derived Hierarchy called Customer Geography, for example, can be created that breaks customers down by their geographic locations. A visual representation of this hierarchy is as follows.

A Derived Hierarchy relies on separate entities that share relationships with one another. This is in contrast to an Explicit Hierarchy, whose hierarchy members must all come from a single entity. The point to note, though, is that the members in an Explicit Hierarchy can be both leaf members and consolidated members. The idea behind Explicit Hierarchies is that they can represent ragged data structures whose members cannot easily be categorized as belonging to any one given entity. For example, profit and loss accounts can exist at different levels: both "Cash in Transit" and "Net Profit" are valid accounts, but one is low-level and one is a high-level account, and there are many more accounts that sit in between the two. One solution is to create "Net Profit" as a consolidated member and "Cash in Transit" as a leaf member, all within the same entity. In this way, a full ragged Chart of Accounts structure may be created within MDS.

Models

The first step in getting started with Master Data Services is to create a model, as the model is the overall container object for all other objects. We are going to start by creating a practice model called Store, meaning that it will be our single source of retail stores in our fictitious company. Models are created in the Master Data Manager application, within the System Administration functional area. Carry out the following steps to create our Store model:

1. Ensure that the Master Data Manager application is open. To do this, type the URL into your web browser, which will typically be http://servername/MDS/.
Click on the System Administration option on the home page, which will take you to the following screen:

The tree view that is shown breaks down each of the models that are installed in the MDS environment, showing the entities, attributes, and other objects within the system. The tree view provides an alternate way to edit some of the MDS objects. Rather than using the tree view, we are going to create a model via the menu at the top of the screen.

Hover over the Manage menu and choose the Models menu item. The resulting screen will show a list of the current models within MDS, with the ability to add or edit a model.
Click on the familiar Master Data Manager green plus icon in order to add our new model, which will produce the following screen:
Enter Store as the Model name. Next there are some settings to configure for the model, via three checkboxes:
Create entity with same name as model: This is essentially a shortcut that saves creating an entity manually after creating the model. We want an entity called Store to be created, so leave this option checked.
Create explicit hierarchy with same name as model: Another shortcut, this time to create an Explicit Hierarchy. We will manually create an Explicit Hierarchy later on, so uncheck this box.
Include all leaf members in mandatory hierarchy: This setting only applies if we are creating an Explicit Hierarchy. If checked, it will ensure that all leaf members must exist within an Explicit Hierarchy. Unchecking the previous option unchecks this option as well.
Click the "Disk" icon to save and create the new model.

Entities and attributes
After creating the new model, we are now in a position to create all the entities and attributes that we need to manage our store data. However, before doing this, we need to plan which entities and attributes are needed. This can be summarized by the following table:

Attributes
Attributes are the building blocks of entities in MDS, and there are three different types, which we need to cover before we can start creating and altering entities. The different types are explained in the following table:

Domain attributes and relationships between entities
As we left the Create entity with same name as model option checked when creating the model, we will already have an entity called Store. The task now is to edit the Store entity in order to set up the attributes that we defined previously. However, before we can complete the Store entity, we actually need to create several of the other entities. This is because we want some of the Store's attributes to be based on other entities, which is exactly why we need domain attributes.

Domain attributes are the method that allows us to create relationships between entities, which we need, for example, in order to create Derived Hierarchies. In addition, they also allow for some good data entry validation; for example, it would not be possible for a user to enter a new attribute value of "New York" in the City attribute if the member "New York" did not exist in the City entity. If we look at our table of entities and attributes again, there is a column to indicate where we need a domain attribute. If we were building an entity relationship diagram, we would say that a Store has a City and that a City belongs to a State.

Creating an entity
To focus on the Store entity first, before we can say that its structure is complete, we first need to create any entities that the Store entity wants to use as domain attributes.
We can clearly see from the previous table that these are City and StoreType. The Country and State entities are also required as domain attributes by other entities. Therefore, carry out the following steps in order to create the remaining entities:

Ensure that you are in the System Administration function.
Hover over the Manage menu and click on the Entities menu item.
Ensure that the Store model is selected from the Model drop-down.
Click the green "plus" icon. This will open the Add Entity screen.
Enter the following information on the Add Entity screen, as shown below:
Enter City as the Entity Name.
Choose No for Enable explicit hierarchies and collections.
Click the save icon to create the entity and return to the Entity Maintenance screen.
Repeat this process to create the StoreType, Country, and State entities.

Once you have finished, the Entity Maintenance screen should look as follows:
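Because domain attributes behave like foreign keys between entities, it can help to picture the Store, City, State, and Country chain relationally before wiring it up in Master Data Manager. The following TSQL is purely a conceptual sketch with hypothetical table names; MDS stores entities in its own internal schema rather than as one table per entity:

-- Conceptual sketch only: each domain attribute acts like a foreign key
-- to the entity it is based on. MDS does not physically create these tables.
CREATE TABLE Country (CountryCode NVARCHAR(50) PRIMARY KEY, Name NVARCHAR(100));
CREATE TABLE State   (StateCode   NVARCHAR(50) PRIMARY KEY, Name NVARCHAR(100),
                      CountryCode NVARCHAR(50) REFERENCES Country (CountryCode));
CREATE TABLE City    (CityCode    NVARCHAR(50) PRIMARY KEY, Name NVARCHAR(100),
                      StateCode   NVARCHAR(50) REFERENCES State (StateCode));
CREATE TABLE Store   (StoreCode   NVARCHAR(50) PRIMARY KEY, Name NVARCHAR(100),
                      CityCode    NVARCHAR(50) REFERENCES City (CityCode));

Walking these relationships from Store up to Country is exactly what a Derived Hierarchy does, and the REFERENCES constraints mirror the data entry validation that domain attributes give us.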
Creating an Analysis Services Cube with Visual Studio 2008 - Part 2

Packt
22 Oct 2009
2 min read
Reviewing Jayaram's other OLAP-related articles may greatly help in understanding this article.

Creating a New Cube
The folder structure for the project developed in Part 1 is shown in the next figure. The Northwind.ds data source and the Northwind.dsv data source view were configured in Part 1. There are no pre-existing cubes in Nwind2008.

Right-click the Cubes folder; from the drop-down menu you can create a new cube. Click on the New Cube... menu item. This opens the Cube Wizard welcome window as shown. Click on the Next button.
This opens the Select Creation Method page of the wizard as shown. There are three options; the default is used for this article. Click on the Next button.
This opens the Select Measure Group Tables page. At least one table must be chosen to continue. There is even the option of asking for a suggestion. Click on the Suggest button at the top. The program goes through the motions and comes up with two tables as candidates for the measure group: the Products table and the Order Details table. You will see check marks appear for these two tables. Accept the suggested tables for measures and click on the Next button.
This opens the Select Measures window, where you can choose the measures that you want to include in the cube as shown. Uncheck the ID-related items in the Products table and click on the Next button.
This brings up the Select New Dimensions window as shown in the next figure. Here also one could choose the needed items. For this article the default is accepted. Click on the Next button.
This takes you to the Completing the Wizard window, which shows your cube contents in a tree view as shown. Now click on the Finish button.

This creates the cube as shown in the Solution Explorer. You will now see additional tabs open up for the Northwind cube as shown. Using these tabs you can look at more details; these are outside the scope of this article. Separate windows also get displayed for the cube's Measures and Dimensions as shown, and the Data Source View of the cube, with the relationships between the Dimensions and Measures, gets displayed as shown.
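To build intuition for what the wizard's suggested measures represent, remember that a measure is just a pre-aggregated numeric column. Conceptually, the Order Details measures answer the same question as a relational aggregate like the one below; this is a sketch for intuition using the standard Northwind tables, not something the wizard itself generates:

-- Conceptual relational equivalent of cube measures drawn from Order Details:
-- the cube pre-aggregates numeric columns such as Quantity and UnitPrice so
-- that browsing the cube answers the same questions this query answers.
USE Northwind;

SELECT
    p.CategoryID,
    SUM(od.Quantity) AS TotalQuantity,
    SUM(od.UnitPrice * od.Quantity * (1 - od.Discount)) AS ExtendedAmount
FROM dbo.[Order Details] AS od
INNER JOIN dbo.Products AS p
    ON p.ProductID = od.ProductID
GROUP BY p.CategoryID;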
Managing Core Microsoft SQL Server 2008 R2 Technologies

Packt
27 May 2011
10 min read
Microsoft SQL Server 2008 R2 Administration Cookbook
Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system

Introduction
SQL Server 2008 R2 has a flurry of new enhancements added to the core database engine and the business intelligence suite. The new enhancements within the core database engine are: SQL Azure connectivity (SQL Azure), Data-Tier Applications (DAC packs), SQL Server Utility (UCP), and network connectivity. In addition to the new features and internal enhancements, SQL Server 2008 R2 includes new developments to the asynchronous messaging subsystem, such as Service Broker (SB), and external components such as Master Data Services (MDS), StreamInsight, Reporting Services with SharePoint integration, and PowerPivot for Analysis Services. These recipes involve the planning, design, and implementation of the features that have been added, and they are important to the management of the core technologies of SQL Server 2008 R2.

Planning and implementing Self-Service Business Intelligence services
Self-Service Business Intelligence (BI) is the new buzzword in the data platform, a new paradigm for the existing BI functionality. Using Microsoft's Self-Service BI, anyone can easily build BI applications using traditional desktop tools such as Office Excel and specialized services such as SharePoint. BI applications can be built to manage published applications in a common way and to track data usage, keeping the analytical data connected to its source. Data customization can be accomplished easily by sharing data in a controlled way, where customers can access it from a web browser (intranet or extranet) without using Office applications or server applications. External tasks such as security administration and deployment of new hardware are accomplished using the features of SQL Server 2008 R2 and the Windows Server 2008 operating system.

Self-Service BI can be implemented using PowerPivot, which has two components working together: PowerPivot for Excel and PowerPivot for SharePoint. PowerPivot for Excel is an add-in that enhances the capabilities of Excel for users and brings the full power of SQL Server Analysis Services right into Excel, whereas PowerPivot for SharePoint extends Office SharePoint Services to share and manage the PowerPivot applications that are created with PowerPivot for Excel. In this recipe, we will go through the steps that are required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint.

Getting ready
PowerPivot for Excel is a component of SQL Server 2008 R2 and an add-in for Excel 2010 from the Office 2010 suite, installed along with the Office Shared Features. To get started you will need to do the following:

Download the PowerPivot for Excel 2010 add-in from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e081c894-e4ab-42df-8c87-4b99c1f3c49b&displaylang=en. If you install the 32-bit version of Excel, you must use the 32-bit version of PowerPivot. If you install the 64-bit version of Excel, you must use the 64-bit version of PowerPivot.
To test the PowerPivot features, in addition to the Excel add-in you need to download the sample databases for SQL Server 2008 R2. They can be downloaded from: http://msftdbprodsamples.codeplex.com/wikipage?title=Installing%20SQL%20Server%202008R2%20Databases. In this recipe, the sample database built in SQL Server Analysis Services 2008 R2, based on an imaginary company, will be referred to.
It is available for download from: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=868662dc-187a-4a85-b611-b7df7dc909fc. Using the Windows Installer (.msi) package, you can install the SQL Server 2008 R2 sample databases. However, you must make sure that your SQL Server instance meets the following prerequisites:

Full-Text Search must be installed, with the SQL Full-text Filter Daemon Launcher service running.
FILESTREAM must be enabled.

To install these prerequisites on existing SQL Server instances, refer to http://msftdbprodsamples.codeplex.com/wikipage?title=Database%20Prerequisites&referringTitle=Installing%20SQL%20Server%202008R2%20Databases.

For PowerPivot for SharePoint, keep the following points in mind:

The PowerPivot for SharePoint component can be installed using the SharePoint 2010 setup program.
SharePoint 2010 is only supported on Windows Server 2008 Service Pack 2 or higher, and Windows Server 2008 R2, and only on the x64 platform.
There are two setup options: a New Farm install and an Existing Farm install. The New Farm install is typically used in a single-machine install, where PowerPivot will take care of installing and configuring all the relevant services for you.
To view PowerPivot workbooks published to the PowerPivot Gallery, you will need Silverlight. Silverlight is not available in a 64-bit version, so you must use a 32-bit web browser to see PowerPivot workbooks using the PowerPivot Gallery's special views.
The Self-Service Analytics solution describes the steps required to analyze sales and promotions data and share the analysis with other users. This solution consists of two documents and one sample data file (Access database and Excel workbooks), which can be downloaded from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=fa8175d0-157f-4e45-8053-6f5bb7eb6397&displaylang=en.

How to do it...
In this recipe, we will go through the steps required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint:

Connect to your SQL Server relational database server using SQL Server Management Studio and restore the Contoso retail sample database on the SQL Server instance.
Office Shared Features installs VSTO 4.0, which is needed as a prerequisite for PowerPivot for Excel. To install the client application once the download is completed, run the setup program (PowerPivot_for_Excel.msi), which is a self-explanatory wizard installation.
The initial PowerPivot installation in Excel 2010 requires COM add-in activation. To do so, in an Excel worksheet click File | Options and select the Add-Ins page. On the Add-Ins page, from the Manage drop-down list select COM Add-ins and click on the Go button. Finally, select the PowerPivot for Excel option and click the OK button to display the PowerPivot tab in the Excel 2010 sheet.
To open the PowerPivot window, launch Excel 2010 from All Programs | Microsoft Office | Microsoft Excel 2010, click on the PowerPivot tab on the Excel ribbon, and then click on the PowerPivot Window button, which will open a PowerPivot window, as shown in the preceding screenshot.
The PowerPivot window helps you with the key operations of importing data, filtering, and analyzing the data, as well as creating certain Data Analysis Expressions (DAX) calculations. PowerPivot provides several methods to import and enhance data when building a Self-Service BI application.
PowerPivot for SharePoint configures your SharePoint 2010 as part of the New Farm install option.
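Before moving on to verify the SharePoint side, it can be worth confirming the instance prerequisites called out in the Getting ready section from a query window. The following is a minimal sketch; the restore path, file name, and database name are assumptions, so adjust them to your environment:

-- 1 = Full-Text Search is installed on this instance
SELECT SERVERPROPERTY('IsFullTextInstalled') AS FullTextInstalled;

-- Show the current FILESTREAM access level
-- (0 = disabled, 1 = T-SQL access, 2 = T-SQL and Win32 streaming access)
EXEC sp_configure 'filestream access level';

-- Enable FILESTREAM if required, then apply the change
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- Restore the Contoso retail sample database (path and file name assumed)
RESTORE DATABASE ContosoRetailDW
FROM DISK = N'C:\Samples\ContosoRetailDW.bak'
WITH RECOVERY;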
After successfully installing PowerPivot for SharePoint, you should verify the installation:

Open the web browser with administrator privileges and enter http://<machinename>, where <machinename> is the name of the server machine on which you performed the PowerPivot for SharePoint install. You should see a link to the PowerPivot Gallery on the left side of the page, as shown in the following screenshot:
Next, launch the SharePoint Central Administration page from Start | All Programs | Microsoft SharePoint 2010 Products | SharePoint 2010 Central Administration.
Click on the Manage web applications link under the heading Application Management.
In the Web Applications General Settings dialog, navigate to the Maximum upload size value. The default value for Maximum upload size is 50 MB; for ideal content deployment, change the upload size to a minimum of 3192 MB. For this recipe, we choose the value 3192 MB.
Navigate back to SharePoint Central Administration | Application Management | Manage Service Applications. Select ExcelServiceApp1 (Excel Services Application Web Service Application) and click Manage to choose Trusted file locations. Click on http:// to change the Excel Services settings.
Navigate to the Workbook Properties section. For this recipe, choose the value of the Maximum Workbook Size as 3192 MB.

The new settings will take effect once the IIS services are restarted. We have now successfully completed the PowerPivot for SharePoint installation, and the instance is ready to publish and share PowerPivot workbooks. To ensure the PowerPivot for SharePoint installation is successful and to share a workbook, we can test the process by publishing a PowerPivot workbook, as follows:

Switch to the machine with Excel and PowerPivot for Excel.
Open a workbook.
Click on File | Save and Send | Save to SharePoint | Save As.
Enter http://<yourPowerPivotserver>/PowerPivot Gallery in the folder path of the Save As dialog, and click Save.

How it works...
Microsoft Excel is a popular business intelligence tool on the client side, and PowerPivot for Excel is required to present data from multiple sources. The installation of the PowerPivot add-in on the client side is straightforward, and no SQL Server 2008 R2 components are required on the client. Behind the scenes, the Analysis Services VertiPaq engine from PowerPivot for Excel runs in-process inside Excel. Connectivity to the Analysis Services data source is managed by the MDX, XMLA Source, AMO, and ADOMD.NET libraries, which in turn use the Analysis Services OLE DB provider to connect to the PowerPivot data within the workbook.

On the workstation, the Analysis Services VertiPaq engine issues queries and receives data from a variety of data sources, including relational or multidimensional databases, documents, public data stores, or web services. During data import and client-side data refresh, an ATOM data feed provider is used for importing and refreshing data in the ATOM format. For connectivity to non-Microsoft data sources such as Oracle, Teradata, DB2, Sybase, SQL Azure, OLE DB, and ODBC sources, and to commonly used file sources such as Excel or flat files, we must acquire and install the drivers manually. PowerPivot for SharePoint installs on top of SharePoint 2010, and adds services and functionality to SharePoint.
As we have seen, PowerPivot for Excel is an effective tool to create and edit PowerPivot applications, with PowerPivot for SharePoint providing data collaboration, sharing, and reporting. Behind the scenes, SQL Server 2008 R2 features a SharePoint 2010 integrated mode for Analysis Services, which includes the VertiPaq engine to provide in-memory data storage. It also helps in processing very large amounts of data, where high performance is accomplished through columnar storage and data compression.

PowerPivot workbooks can be quite large, and SharePoint 2010 has a default maximum file size limit of 50 MB. Depending on your enterprise storage policies, you will need to change the file storage setting in SharePoint to publish and upload PowerPivot workbooks. Internally, the PowerPivot for SharePoint components, the PowerPivot System Service and Analysis Services in VertiPaq mode, provide the hosting capability for PowerPivot applications. For client connectivity, a web service component allows applications to connect to the PowerPivot workbook data from outside the SharePoint farm.

There's more...
Creating an Excel workbook that contains PowerPivot data requires both Excel 2010 and the PowerPivot for Excel add-in. After you create the workbook, you can publish it to a SharePoint Server 2010 farm that has Excel Services and a deployment of SQL Server PowerPivot for SharePoint. PowerPivot workbooks can be opened in Excel 2007. However, Excel 2007 cannot be used to create or modify PowerPivot data, or to interact with PivotTables or PivotCharts that use PowerPivot data. You must use Excel 2010 to get full access to all PowerPivot features.
Ground to SQL Azure migration using MS SQL Server Integration Services

Packt
18 Nov 2009
5 min read
Enterprise data can be of very different kinds, ranging from flat files to data stored in relational databases, with the recent trend of storing data in XML data sources. The extraordinary number of database-related products, and their historic evolution, makes this task exacting. The entry of cloud computing has turned this into one of the hottest areas, as SSIS is one of the methods indicated for bringing ground-based data to cloud storage in SQL Azure, the next milestone in Microsoft data management. The reader may review my book on this site, "Beginners Guide to Microsoft SQL Server Integration Services", to get a jump start on learning this important product from Microsoft.

SQL Azure
SQL Azure is one of the three pillars of Microsoft's Azure cloud computing platform. It is a relational database built on SQL Server technologies, maintained at Microsoft's physical site, where subscribers like you and me can rent out storage of data. Since the access is over the Internet, it is deemed to be in the cloud, and Microsoft provides all data maintenance. Some of the key benefits of this "database usage as a service" are:

Manageability
High Availability
Scalability

In other words, it takes away a lot of headaches, such as worrying about hardware and software (SQL Azure provisioning takes care of this), replication, DBAs with attitudes, and so on.

Preparation for this tutorial
You need some preparation to work with this tutorial. You must have SQL Server 2008 installed to start with. You also need to register yourself with Microsoft to get an invitation to use SQL Azure by registering for the SQL Azure CTP. Getting permission is not immediate and may take days. After you register, agreeing to the license terms, you get permission (you become a subscriber to the service) to use the Azure Platform components, SQL Azure being one of them.

After subscribing you can create a database on the SQL Azure instance. You will be the administrator of your instance (your login is known as the server-level principal, equivalent to the land-based sa login), and you can access the server over the web with a specific connection string provided to you and a strong password which you create. When you access the Azure URL, you provide the authentication to get connected to your instance of the server by signing in. Therein, you can create a database or delete an existing database. You have a couple of tools available to work with this product. Read the blog post mentioned in the summary.

Overview of this tutorial
In this tutorial you will be using MS SQL Server Integration Services to create a package that can transfer a table from SQL Server 2008 to SQL Azure, for which you have established your credentials. In my case the credentials are:

Server: tcp:XXXXXX.ctp.database.windows.net
User ID: YYYYY
Password: ZZZZZ
Database: PPPPPP
Trusted_Connection=False;

Here XXXXXX, YYYYY, ZZZZZ, and PPPPPP are all the author's personal authentication values; you would get yours when you register as previously mentioned.

Table to be migrated on SQL Server 2008
The table to be migrated on SQL Server 2008 (Enterprise server, evaluation edition) is shown in the next figure. PrincetonTemp is a simple table in the TestNorthwind database on the default instance of the local server on a Windows XP machine, with a few columns and no primary key.
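One point worth noting before building the package: SQL Azure (at the time of the CTP) requires every table to have a clustered index before rows can be inserted, so the ground-based PrincetonTemp, which has no primary key, needs a keyed counterpart in the cloud. The following is a sketch of the destination table; the column definitions are assumptions made for illustration, so mirror your actual source table:

-- Sketch of the SQL Azure destination table. SQL Azure requires a clustered
-- index before INSERTs will succeed; the columns below are illustrative
-- assumptions, not the actual PrincetonTemp definition.
CREATE TABLE dbo.PrincetonTemp
(
    Id          INT IDENTITY(1, 1) NOT NULL,
    [Month]     NVARCHAR(10)       NULL,
    Temperature DECIMAL(5, 2)      NULL,
    CONSTRAINT PK_PrincetonTemp PRIMARY KEY CLUSTERED (Id)
);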
Create a SQL Server Integration Services Package
Open BIDS (Business Intelligence Development Studio, the Visual Studio-based IDE that supports building database applications with SQL Server) and create a new SQL Server Integration Services project (use File | New | Project... in the IDE). Herein, Visual Studio 2008 with SP1 is used. You need to provide a name, which for this project is GroundToCloud. The program creates the project for you, which you can see in the Solution Explorer. By default it creates a package for you, Package.dtsx. You may rename the package (herein ToAzure.dtsx), and the project folders and file appear as shown.

Add an ADO.NET Source component
Drag and drop a Data Flow Task onto the tabbed page Control Flow in the package designer.
On the Data Flow tabbed page, drag and drop an ADO.NET Source component from the Toolbox.
Double-click the component you just added and, from the pop-up menu, choose Edit.... The ADO.NET Source Editor gets displayed. If there are previously configured connections, one of them may show up in this window. We will be creating a new connection, and therefore click the New... button to display an empty Configure ADO.NET Connection Manager, as shown (again, if there are existing connections they will all show up in this window). A connection is needed for connecting to a source outside the IDE.
Click the New... button to display the Connection Manager window, which is all but empty. Fill in the details for your instance of the ground-based server as shown (the ones shown are for the author's site). You may test the connection by hitting the Test Connection button.
Clicking the OK buttons on the Connection Manager and the Configure ADO.NET Connection Manager will bring you back to the ADO.NET Source Editor, displaying the connection you have just made, as shown. A connection string also gets added to the bottom pane of the package designer as well as to the Configure ADO.NET Connection Manager.
Click on the drop-down and pick the table (PrincetonTemp) that needs to be migrated to the cloud-based server, SQL Azure. Click OK.

The Columns navigation on the left reveals all the columns in the table when clicked. The Preview button returns the data from a SELECT query on the columns, as shown.
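As a sanity check, the preview shown by the ADO.NET Source is simply the result of selecting from the chosen table, so you can run the equivalent query yourself in SQL Server Management Studio before executing the package. A sketch, with the row limit being an assumption for illustration:

-- Roughly what the ADO.NET Source preview shows for the chosen table
SELECT TOP (200) *
FROM dbo.PrincetonTemp;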
Oracle SQL Developer Tool 1.5 with SQL Server 2005

Packt
24 Oct 2009
4 min read
Installation and a review of some new features

Installation
The program [EA2 download - Early Adopter] can be downloaded from the following URL. In the present case the Windows option that comes with JDK 1.5.0_06 bundled was used. The downloaded ZIP file, sqldeveloper-5073 (100 MB), can be unzipped to any suitable location, and from within the sqldeveloper folder you can immediately start using the program. The program can be started by double-clicking the executable, which has an unambiguous fat green arrow.

Review of features

Adjustable Look and Feel
The look and feel is adjustable. You can choose between 'Windows' and 'Oracle'. After choosing 'Oracle' you can choose from a variety of themes. The one shown is for 'Desert Yellow'.

The View Menu
The View menu is better organized, as shown, compared to the previous version.

Tools Menu
The Tools menu is beefed up as well, as shown.

External Tools
The External Tools submenu item can find existing tools (browsers, Notepad, mdb files) and, using a 4-step wizard, also allows you to create tools, provided you know the details for accessing them.

Wizards
The Diff Wizard allows comparing objects of the same type between schemas of source and destination, as well as updating the destination based on the source. Similarly, the Copy Wizard allows you to copy objects from one database schema to another.

Versioning Support
Versioning support is another new feature in this version. SQL Developer provides integrated support for CVS (Concurrent Versions System) and Subversion in its source control. CVS allows repository creation on the local PC or on a remote machine. Source files are held in folder modules. In the case of Subversion, access to the repository is by means of a connection, and this is where the master copies are held; files are checked out to a local working folder.

Run menu item
The Run menu item also contains the debugging options, as shown. In the previous version, Run and Debug were two separate menus.

Migration Menu
The Microsoft Access Exporter can export from Access 97, 2000, 2002, and 2003, as in the previous version (1.2), and seems to be essentially the same as the previous version. This version can now create offline migration scripts for ASE 15 and Sybase 12, in addition to several versions of SQL Server (7, 2000, 2005) and MySQL (3.23, 4, 5).

Connecting to SQL 2005 databases
As described in the previously referenced articles at the beginning, you can establish a connection to the server by clicking on the Connection node (green plus sign) in the first figure. This opens the New / Select Database Connection window, where you will see only Oracle and Access. This is because, at this point, no JDBC drivers have been specified for connecting to the other three servers: SQL Server, MySQL, and Sybase. There are two ways you can register JDBC drivers for these databases. For SQL Server you require the jtds.jar file from the SourceForge website.

In the first method, you go through Tools | Preferences | Database | Third Party JDBC Drivers to find the path to the file, as shown in the next figure, and use the browse key to locate the driver and add it. The driver file should be in the correct path for the application to find.

In the other method, used here, which in the opinion of the author is simpler, you go through Help | Check Updates.... This brings up Step 1 of the wizard as shown. Read the instructions in this window. Now click Next. This takes you to the next window as shown. The needed item is already checked. Click Next.
The window that comes up next shows compatible drivers for the databases. Choose the items needed by placing check marks; in this tutorial both the SQL Server and MySQL drivers were chosen. Click Next. In the window that shows up, agree to the licensing (GNU Public) terms after reading them. Click on Next in the final window of Step 4. When the Step 5 "Download" window opens, the login window also opens. As these drivers are downloaded from the Oracle site, you will have to insert your Oracle login information. The Step 5 screenshot is not shown. You will be adding both the JDBC drivers in the final step. Click Finish.

In order to install the updates you chose, SQL Developer 1.5 needs to restart, and it restarts when you click on Yes in the Confirm Exit window. The Do you want to Migrate User Settings? window shows up again; for this article it is a No again. The Oracle SQL Developer window gets displayed. Now open the New / Select Database Connection screen: you will see all five database tabs, with a default connection to the Oracle 10g XE on the local machine [screenshot not shown].
Getting Ready for Your First BizTalk Services Solution

Packt
24 Mar 2014
5 min read
Deployment considerations
You will need to consider the BizTalk Services edition required for your production use, as well as the environment for test and/or staging purposes. This depends on decision points such as:

Expected message load on the target system
Capabilities that are required now versus 6 months down the line
IT requirements around compliance, security, and DR

The list of capabilities across different editions is outlined in the Windows Azure documentation page at http://www.windowsazure.com/en-us/documentation/articles/biztalk-editions-feature-chart.

Note on BizTalk Services editions and signup
BizTalk Services is currently available in four editions: Developer, Basic, Standard, and Premium, each with varying capabilities and prices. You can sign up for BizTalk Services from the Azure portal. The Developer SKU contains all the features needed to try and evaluate the service without worrying about production readiness. We use the Developer edition for all examples.

Provisioning BizTalk Services
A BizTalk Services deployment can be created using the Windows Azure Management Portal or using PowerShell. We will use the former in this example.

Certificates and ACS
Certificates are required for communication using SSL, and the Access Control Service (ACS) is used to secure the endpoints of the BizTalk Services deployment. First, you need to know whether you need a custom domain for the BizTalk Services deployment. In the case of test or developer deployments, the answer is mostly no. A BizTalk Services deployment will autogenerate a self-signed certificate with an expiry of close to 5 years. The ACS required for the deployment will also be autocreated. Certificate and Access Control Service details are required for sending messages to bridges and agreements, and can be retrieved from the Dashboard page post deployment.

Storage requirements
You need to create an Azure SQL database for tracking data. It is recommended to use the Business edition with the appropriate size; for test purposes, you can start with the 1 GB Web edition. You also need to pass the storage account credentials to archive message data. It is recommended that you create a new Azure SQL database and storage account for use with BizTalk Services only.

The BizTalk Services create wizard
Now that we have the security and storage details figured out, let us create a BizTalk Services deployment from the Azure Management Portal:

From the Management Portal, navigate to New | App Services | BizTalk Service | Custom Create.
Enter a unique name for the deployment, keeping the following values: EDITION: Developer, REGION: East US, TRACKING DATABASE: Create a new SQL Database instance.
On the next page, retain the default database name, choose the SQL server, and enter the server login name and password. There can be six SQL server instances per Azure subscription.
On the next page, choose the storage account for archiving and monitoring information.
Deploy the solution.

The BizTalk Services create wizard from the Windows Azure Management Portal

The deployment takes roughly 30 minutes to complete. After completion, you will see the status of the deployment as Active. Navigate to the deployment dashboard page, click on CONNECTION INFORMATION, note down the ACS credentials, and download the deployment SSL certificate. The SSL certificate needs to be installed on the client machine where the Visual Studio SDK will be used.
BizTalk portal registration
We have one step remaining: configuring the BizTalk Services Management Portal to view agreements, bridges, and their tracking data. For this, perform the following steps:

Click on Manage from the Dashboard screen. This will launch <mydeployment>.portal.biztalk.windows.net, where the BizTalk Portal is hosted.
Some of the fields, such as the user's Live ID and deployment name, will be auto-populated. Enter the ACS Issuer name and ACS Issuer secret noted in the previous step and click on Register.

BizTalk Services Portal registration

Creating your first BizTalk Services solution
Let us put things into action and use the deployment created earlier to address a real-world multichannel sales scenario.

Scenario description
A trader, Northwind, manages an e-commerce website for online customer purchases. They also receive bulk orders from event firms and corporates for their goods. Northwind needs to develop a solution to validate an order and route the request to the right inventory location for delivery of the goods. The incoming request is an XML file with the order details. Requests from event firms and corporates arrive over FTP, while e-commerce website requests arrive over HTTP. After an order is processed, if the customer location is inside the US, the request is forwarded to a relay service at a US address. For all other locations, the order goes to the central site and is sent to a Service Bus Queue at IntlAddress, with the location as a promoted property.

Prerequisites
Before we start, we need to set up the client machine to connect to the deployment created earlier by performing the following steps:

Install the certificate downloaded from the deployment on your client box into the trusted root store. This authenticates any SSL traffic between your client and the integration solution on Azure.
Download and install the BizTalk Services SDK (https://go.microsoft.com/fwLink/?LinkID=313230) so the developer project experience lights up in Visual Studio 2012.
Download the BizTalk Services EAI tools' Message Sender and Message Receiver samples from the MSDN Code Gallery available at http://code.msdn.microsoft.com/windowsazure.

Realizing the solution
We will break down the implementation details into defining the incoming format and creating the bridge, including transports to process incoming messages and the creation of the target endpoints, relay, and Service Bus Queue.

Creating a BizTalk Services project
You can create a new BizTalk Services project in Visual Studio 2012.

BizTalk Services project in Visual Studio

Summary
This article discussed deployment considerations, provisioning BizTalk Services, BizTalk portal registration, and the prerequisites for creating your first BizTalk Services solution.

Resources for Article:
Further resources on this subject:
Using Azure BizTalk Features [Article]
BizTalk Application: Dynamics AX Message Outflow [Article]
Setting up a BizTalk Server Environment [Article]
SQL Server 2008 R2: Multiserver Management Using Utility Explorer

Packt
30 Jun 2011
6 min read
Microsoft SQL Server 2008 R2 Administration Cookbook
Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook

The UCP collects configuration and performance information that includes database file space utilization, CPU utilization, and storage volume utilization from each enrolled instance. Using Utility Explorer helps you to troubleshoot the resource health issues identified by the SQL Server UCP. The issues might include mitigating over-utilized CPU on a single instance or on multiple instances. UCP also helps in reporting troubleshooting information using the SQL Server Utility on issues that might include resolving a failed operation to enroll an instance of SQL Server with a UCP, troubleshooting failed data collection resulting in gray icons in the managed instance list view on a UCP, mitigating performance bottlenecks, or resolving resource health issues. The reader will benefit by referring to the previous articles on Best Practices for SQL Server 2008 R2 Administration and Managing the Core Database Engine before proceeding.

Getting ready
The UCP and all managed instances of SQL Server must satisfy the following prerequisites:

The UCP SQL Server instance version must be SQL Server 2008 SP2 [10.00.4000.00] or higher.
The managed instances must be a database engine only, and the edition must be Datacenter or Enterprise in a production environment.
The UCP managed account must operate within a single Windows domain or domains with two-way trust relationships.
The SQL Server service accounts for the UCP and managed instances must have read permission to Users in Active Directory.

To set up the SQL Server Utility you need to:

Create a UCP from the SQL Server Utility.
Enroll data-tier applications.
Enroll instances of SQL Server with the UCP.
Define global and instance-level policies, and manage and monitor the instances.

Since the UCP itself automatically becomes a managed instance once the UCP wizard is completed, the Utility Explorer content will display a graphical view of various parameters, as follows:

How to do it...
To define the global and instance-level policies to monitor the multiple instances, use Utility Explorer from the SSMS tool and complete the following steps:

Click on Utility Explorer and populate the server that is registered as the utility control point.
On the right-hand screen, click on the Utility Administration pane.
The evaluation time period and tolerance for percent violations are configurable using the Policy tab settings. The default upper threshold utilization is 70 percent for CPU, data file space, and storage volume utilization values. To change the policies, use the slider controls (up or down) to the right of each policy description. For this recipe, we have modified the upper threshold for CPU utilization to 50 percent and for data file space utilization to 80 percent. We have also reduced the upper limit for the storage volume utilization parameter.
The default lower threshold utilization is 0 percent for CPU, data file space, and storage volume utilization values. To change the policies, use the slider controls (up only) to the right of each policy description. For this recipe, we have modified (increased) the lower threshold for CPU utilization to 5 percent.
Once the threshold parameters are changed, click Apply for them to take effect.
For the default system settings, either click on the Restore Defaults button or the Discard button, as shown in the following screenshot:

Now, let us test whether the defined global policies are working. From the Query Editor, open a new connection against a SQL Server instance that is registered as a managed instance on the UCP, and execute the following time-intensive TSQL statements:

create table test
(
    x int not null,
    y char(896) not null default (''),
    z char(120) not null default ('')
)
go
insert test (x)
select r
from
(
    select row_number() over (order by (select 1)) r
    from master..spt_values a, master..spt_values b
) p
where r <= 4000000
go
create clustered index ix_x on test (x, y) with fillfactor = 51
go

The script simulates a data load process that will lead to slow performance on the managed SQL Server instance. After a few minutes, right-click on the Managed Instances option in Utility Explorer, which will produce the following screenshot of managed instances:

In addition to the snapshot of utilization information, click on the Managed Instances option in Utility Explorer to obtain information on over-utilized database files on an individual instance (see the next screenshot):

We should now have completed the strategic steps to manage multiple instances using the Utility Explorer tool.

How it works...
The unified view of instances from Utility Explorer is the starting point of application and multi-server management that helps DBAs to manage multiple instances efficiently. Within the UCP, each managed instance of SQL Server is instrumented with a data collection set that queries configuration and performance data and stores it back in the UMDW on the UCP every 15 minutes. By default, data-tier applications automatically become managed by the SQL Server Utility. Both of these entities are managed and monitored based on the global policy definitions or individual policy definitions.

Troubleshooting resource health issues identified by a SQL Server UCP might include mitigating over-utilized CPU on a computer or on an instance of SQL Server, or mitigating over-utilized CPU for a data-tier application. Other issues might include resolving over-utilized file space for database files or resolving over-utilization of allocated disk space on a storage volume.

The managed instances health parameter collects the following system resource information:

CPU utilization for the instance of SQL Server
Database files utilization
Storage volume space utilization
CPU utilization for the computer

Status for each parameter is divided into four categories:

Well-utilized: Number of managed instances of SQL Server that are not violating resource utilization policies.
Under-utilized: Number of managed resources that are violating resource under-utilization policies.
Over-utilized: Number of managed resources that are violating resource over-utilization policies.
No Data Available: Data is not available for managed instances of SQL Server, either because the instance of SQL Server has just been enrolled and the first data collection operation is not completed, or because there is a problem with the managed instance collecting and uploading data to the UCP.

The data collection process begins immediately, but it can take up to 30 minutes for data to appear in the dashboard and viewpoints in the Utility Explorer content pane. However, the data collection set for each managed instance of SQL Server will send relevant configuration and performance data to the UCP every 15 minutes.
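If you prefer to inspect what the UCP has collected using TSQL rather than the dashboard, the Utility exposes catalog views in msdb, and the UMDW database itself is named sysutility_mdw in SQL Server 2008 R2. The sketch below assumes the documented view name sysutility_ucp_instances; verify it exists on your build before relying on it:

-- A minimal sketch for peeking at UCP state with TSQL. The view name is
-- the documented SQL Server 2008 R2 Utility view; verify it on your build.
USE msdb;

-- List the instances enrolled with this UCP
SELECT *
FROM dbo.sysutility_ucp_instances;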
Summary
This article on SQL Server 2008 R2 covered multiserver management using Utility Explorer.

Further resources on this subject:
Best Practices for Microsoft SQL Server 2008 R2 Administration [Article]
Microsoft SQL Server 2008 R2: Managing the Core Database Engine [Article]
Managing Core Microsoft SQL Server 2008 R2 Technologies [Article]
SQL Server 2008 R2 Technologies: Deploying Master Data Services [Article]
Getting Started with Microsoft SQL Server 2008 R2 [Article]
Microsoft SQL Server 2008 - Installation Made Easy [Article]
Creating a Web Page for Displaying Data from SQL Server 2008 [Article]
Ground to SQL Azure migration using MS SQL Server Integration Services [Article]
Microsoft SQL Server 2008 High Availability: Understanding Domains, Users, and Security [Article]
Messaging with WebSphere Application Server 7.0 (Part 2)

Packt
05 Oct 2009
7 min read
WebSphere MQ overview
WebSphere MQ, formerly known as MQSeries, is IBM's enterprise messaging solution. In a nutshell, MQ provides the mechanisms for messaging in both point-to-point and publish-subscribe models, and it guarantees to deliver a message once and only once. This is important for critical business applications which implement messaging. An example of a critical system could be a banking payments system, where messages contain instructions to transfer money between banking systems, so guaranteeing delivery of a debit/credit is paramount in this context. Aside from guaranteed delivery, WMQ is often used for messaging between dissimilar systems, and the WMQ software provides programming interfaces in most of the common languages, such as Java, C, C++, and so on. WMQ is commonly deployed alongside WebSphere Application Server when WebSphere is hosting message-enabled applications, so it is important that the WebSphere administrator understands how to configure WebSphere resources so that applications can be coupled to MQ queues.

Overview of WebSphere MQ example
To demonstrate messaging using WebSphere MQ, we are going to re-configure the previously deployed JMS Tester application so that it will use a connection factory which communicates with a queue on a WMQ queue manager, as opposed to using the default provider which we demonstrated earlier.

Installing WebSphere MQ
Before we can install our demo messaging application, we will need to download and install WebSphere MQ 7.0. A free 90-day trial can be found at the following URL: http://www.ibm.com/developerworks/downloads/ws/wmq/.

Click the download link as shown below. You will be prompted to register as an IBM website user before you can download the WebSphere MQ trial. Once you have registered and logged in, the download link above will take you to a page which lists downloads for different operating systems. Select WebSphere MQ 7.0 90-day trial from the list of available options, as shown below.

Click continue to go to the download page. You may be asked to fill out a questionnaire detailing why you are evaluating WebSphere MQ (WMQ). Fill out the questionnaire as you see fit and submit it to move to the download page.

As shown above, make sure you use the IBM HTTP Download Director, as it will ensure that your download resumes even if your Internet connection is lost. If you do not have a high-speed Internet connection, you can try downloading the free 90-day trial of WebSphere MQ 7.0 overnight while you are asleep. Download the trial to a temp folder, for example c:\temp, on your local machine. The screenshot above shows how the IBM HTTP Downloader will prompt for a location where you want to download the file to. Once the WMQ install file has been downloaded, you can upload it using an appropriate secure copy utility like WinSCP to a suitable folder, such as /apps/wmq_install, on your Linux machine. Once you have the file uploaded to Linux, you can decompress it and run the installer to install WebSphere MQ.
Running the WMQ installer
Now that you have uploaded the WMQv700Trial-x86_linux.tar.gz file to your Linux machine, follow these steps:

Decompress the file using the following command:
gunzip ./WMQv700Trial-x86_linux.tar.gz

Then run the un-tar command:
tar -xvf ./WMQv700Trial-x86_linux.tar

Before we can run the WMQ installation, we need to accept the license agreement by running the following command:
./mqlicense.sh -accept

To run the WebSphere MQ installation, type the following commands:
rpm -ivh MQSeriesRuntime-7.0.0-0.i386.rpm
rpm -ivh MQSeriesServer-7.0.0-0.i386.rpm
rpm -ivh MQSeriesSamples-7.0.0-0.i386.rpm

As a result of running the MQSeriesServer installation, a new user called mqm was created. Before running any WMQ command, we need to switch to this user using the following command:
su - mqm

Then we can run commands like dspmqver, which can be run to check that WMQ was installed correctly:
/opt/mqm/bin/dspmqver

The result will be the message shown in the screenshot below.

Creating a queue manager
Before we can complete our WebSphere configuration, we need to create a WMQ queue manager and a queue. Then we will use some MQ command-line tools to put a test message on an MQ queue and get a message from an MQ queue.

To create a new queue manager called TSTDADQ1, use the following command:
crtmqm TSTDADQ1

The result will be as shown in the image below. We can now type the following command to list queue managers:
dspmq

The result of running the dspmq command is shown in the image below. To start the queue manager, type the following command (note that the queue manager name is required because TSTDADQ1 is not the default queue manager):
strmqm TSTDADQ1

The result of starting the QM will be similar to the image below.

Now that we have successfully created a QM, we need to add a queue called LQ.TEST where we can put and get messages. To create a local queue on the TSTDADQ1 QM, type the following commands in order:
runmqsc TSTDADQ1

You are now running the MQ scripting command line, where you can issue MQ commands to configure the QM. To create the queue, type the following command and hit Enter:
define qlocal(LQ.TEST)

Then immediately type the following command:
end

Hit Enter to complete the QM configuration, as shown by the following screenshot. You can use the following command to see if your LQ.TEST queue exists:
echo "dis QLOCAL(*)" | runmqsc TSTDADQ1 | grep -i test

You have now added a local queue called LQ.TEST to the TSTDADQ1 queue manager. To create and start a listener for the queue manager, run the following commands:
runmqsc TSTDADQ1
DEFINE LISTENER(TSTDADQ1.listener) TRPTYPE(TCP) PORT(1414)
START LISTENER(TSTDADQ1.listener)
end

You can type the following command to ensure that your QM listener is running:
ps -ef | grep mqlsr

The result will be similar to the image below. To create a default channel, you can run the following commands:
runmqsc TSTDADQ1
DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
end

We can now use a sample MQ program called amqsput, which we can use to put a test message on a queue (and its counterpart amqsget to read it back) to ensure that our MQ configuration is working before we continue to configure WebSphere. Type the following command to put a test message on the LQ.TEST queue:
/opt/mqm/samp/bin/amqsput LQ.TEST TSTDADQ1

Then you can type a test message, Test Message, and hit Enter; this will put a message on the LQ.TEST queue and will exit you from the AMQSPUTQ command tool.

Now that we have put a message on the queue, we can read the message by using the MQ sample command tool called amqsget.
Type the following command to get the message you posted earlier:
/opt/mqm/samp/bin/amqsget LQ.TEST TSTDADQ1

The result will be that all messages on the LQ.TEST queue are listed, and then the tool will time out after a few seconds, as shown below.

Two final steps remain. The first is to add the root user to the mqm group. This is not a standard practice in an enterprise, but we have to do this because our WebSphere installation is running as root. If we did not do this, we would have to reconfigure the user which the WebSphere process runs under and then add the new user to MQ security. To keep things simple, ensure that root is a member of the mqm group by typing the following command:
usermod -a -G mqm root

We also need to change WMQ security to ensure that all users of the mqm group have access to all the objects of the TSTDADQ1 queue manager. To change WMQ security to give access to all objects in the QM, type the following command:
setmqaut -m TSTDADQ1 -t qmgr -g mqm +all

Now we are ready to continue configuring WebSphere and create the appropriate QCF and queue destinations to access WMQ from WebSphere.