Managing Core Microsoft SQL Server 2008 R2 Technologies

Microsoft SQL Server 2008 R2 Administration Cookbook

Over 70 practical recipes for administering a high-performance SQL Server 2008 R2 system with this book and eBook

by Satya Shyam K Jayanty | May 2011

This article by Satya Shyam K Jayanty, author of Microsoft SQL Server 2008 R2 Administration Cookbook, dives into the implementation phases of the new features in SQL Server 2008 R2 and the essential steps in planning and implementing Self-Service BI services, SQL Azure connectivity, and StreamInsight technologies. These recipes involve the planning, design, and implementation of features that are important to the management of the core technologies.

In this article, we will cover the following topics:

  • Planning and implementing Self-Service Business Intelligence services
  • Implementing Microsoft StreamInsight technologies platform
  • Implementing SQL Azure connectivity features

 


Introduction

SQL Server 2008 R2 has a flurry of new enhancements added to the core database engine and the business intelligence suite. The new enhancements within the core database engine are: SQL Azure connectivity, Data-Tier Applications (DAC packs), the SQL Server Utility (UCP), and network connectivity. In addition to the new features and internal enhancements, SQL Server 2008 R2 includes new developments in the asynchronous messaging subsystem, such as Service Broker (SB), and external components such as Master Data Services (MDS), StreamInsight, Reporting Services with SharePoint integration, and PowerPivot for Analysis Services.

These recipes involve the planning, design, and implementation of the newly added features that are important to the management of the core technologies of SQL Server 2008 R2.

Planning and implementing Self-Service Business Intelligence services

Self-Service Business Intelligence (BI) is the new buzzword in the data platform, a new paradigm alongside the existing BI functionality. Using Microsoft's Self-Service BI, anyone can build BI applications using traditional desktop tools such as Office Excel and specialized services such as SharePoint. BI applications can be published and managed in a common way, and data usage can be tracked by keeping the analytical data connected to its source. Data can be shared in a controlled way, so that customers access it from a web browser (intranet or extranet) without needing Office or server applications. External tasks such as security administration and deployment of new hardware are accomplished using the features of SQL Server 2008 R2 and the Windows Server 2008 operating system.

Self-Service BI can be implemented using PowerPivot, which has two components working together, PowerPivot for Excel and PowerPivot for SharePoint. The PowerPivot for Excel is an add-in that enhances the capabilities of Excel for users and brings the full power of SQL Server Analysis Services right into Excel; whereas, PowerPivot for SharePoint extends the Office SharePoint Services to share and manage the PowerPivot applications that are created with PowerPivot for Excel. In this recipe, we will go through the steps that are required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint.

Getting ready

PowerPivot for Excel is a component of SQL Server 2008 R2 and an add-in for Excel 2010 from the Office 2010 suite, installed along with the Office Shared Features. To get started, you will need to do the following:

Using the Windows Installer (.msi) package, you can install the SQL Server 2008 R2 sample databases. However, you must make sure that your SQL Server instance meets the following prerequisites:

  • Full-Text Search must be installed with SQL Full-text filter daemon launcher service running
  • FILESTREAM must be enabled

    To install these prerequisites on existing SQL Server instances, refer to http://msftdbprodsamples.codeplex.com/wikipage?title=Database%20Prerequisites&referringTitle=Installing%20SQL%20Server%202008R2%20Databases.

  • PowerPivot for the SharePoint component can be installed using the SharePoint 2010 setup program
    • SharePoint 2010 is only supported on Windows Server 2008 Service Pack 2 or higher and Windows Server 2008 R2, and only on the x64 platform
  • There are two setup options—a New Farm install and an Existing Farm install
    • The New Farm install is typically expected to be used in a single-machine install where PowerPivot will take care of installing and configuring all the relevant services effectively for you
  • To view PowerPivot workbooks published to the PowerPivot Gallery, you will need Silverlight
    • Silverlight is not available in a 64-bit version; you must use the 32-bit Web browser to see PowerPivot workbooks using the PowerPivot Gallery's special views
  • The Self-Service Analytics solution describes the steps required to analyze sales and promotions data and share the analysis to other users
  • This solution consists of two documents and one sample data file (Access database and Excel workbooks), which can be downloaded from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=fa8175d0-157f-4e45-8053-6f5bb7eb6397&displaylang=en

How to do it...

In this recipe, we will go through the steps required to plan and implement PowerPivot for Excel and PowerPivot for SharePoint:

  1. Connect to your SQL Server relational database server using SQL Server Management Studio and restore the Contoso retail sample database on the SQL Server instance.
  2. Office Shared Features installs VSTO 4.0, which is needed as a prerequisite for PowerPivot for Excel.
  3. To install the client application once the download is completed, run the setup program (PowerPivot_for_Excel.msi); the installation wizard is self-explanatory.
  4. The initial PowerPivot installation requires COM Add-ins activation in Excel 2010. To do so, in Excel click File | Options and select the Add-Ins page.
  5. On the Add-Ins page, from the Manage drop-down list, select COM Add-ins and click on the Go button.
  6. Finally, select the PowerPivot for Excel option and click the OK button to display the PowerPivot tab in the Excel 2010 ribbon.
  7. To open the PowerPivot window from an Excel file, click on the PowerPivot tab on the Excel ribbon.
  8. Launch Excel 2010 from All Programs | Microsoft Office | Microsoft Excel 2010. On the PowerPivot tab, click on the PowerPivot Window.

  9. Click on the PowerPivot Window button that will open a PowerPivot Window, as shown in the preceding screenshot.
  10. The PowerPivot Window helps you with the key operations of importing data, filtering, and analyzing the data, as well as creating certain Data Analysis Expression (DAX) calculations.
  11. Let us see how PowerPivot provides several methods to import and enhance data by building a Self-Service BI application.

  12. PowerPivot for SharePoint configures your SharePoint 2010 as part of the New Farm install option.
  13. After successfully installing PowerPivot for SharePoint, you should verify the installation.
  14. Open the web browser with administrator privileges and enter http://<machinename>, where <machinename> is the name of the server machine on which you performed the PowerPivot for SharePoint install.
  15. You should see a link to the PowerPivot Gallery on the left side of the page, as shown in the following screenshot:
  16. Next, launch the SharePoint Central Administration page from Start | All Programs | Microsoft SharePoint 2010 Products | SharePoint 2010 Central Administration.
  17. Click on the Manage web applications link under the heading Application Management.
  18. In the Web Applications General Settings dialog, navigate to the Maximum upload size value. For this recipe, let us choose the value 3192 MB.

    The default value for Maximum upload size is 50 MB; for ideal content deployment, change the upload size to a minimum of 3192 MB.

  19. Navigate back to SharePoint Central Administration | Application Management | Manage Service Applications.
  20. Select ExcelServiceApp1 (Excel Services Application Web Service Application) and click Manage to choose Trusted file locations.
  21. Click on http:// to change the Excel Services settings. Navigate to the Workbook Properties section. For this recipe, choose the value of the Maximum Workbook Size as 3192 MB.
  22. The new settings will take effect once the IIS services are restarted.

We have now successfully completed the PowerPivot for SharePoint installation; the instance is now ready to publish and share PowerPivot workbooks.

To ensure the PowerPivot for SharePoint installation is successful and to share a workbook, we can test the process by publishing a PowerPivot workbook instance, as follows:

  1. Switch on the machine with Excel and PowerPivot for Excel.
  2. Open a workbook.
  3. Click on File | Save and Send | Save to SharePoint | Save As.
  4. Enter http://<yourPowerPivotserver>/PowerPivot Gallery in the folder path of the Save As dialog, and click Save.

How it works...

Microsoft Excel is a popular Business Intelligence tool on the client side, and PowerPivot for Excel is required to present data from multiple sources. The installation of the PowerPivot add-in on the client side is a straightforward process, and no SQL Server 2008 R2 components are required on the client. Behind the scenes, the Analysis Services VertiPaq engine from PowerPivot for Excel runs in-process within Excel. Connectivity to the Analysis Services data source is managed by the MDX, XMLA, AMO, and ADOMD.NET libraries, which in turn use the Analysis Services OLE DB provider to connect to the PowerPivot data within the workbook.

On the workstation, the Analysis Services VertiPaq engine issues queries and receives data from a variety of data sources, including relational or multidimensional databases, documents, public data stores, or Web services. During data import and client-side data refresh, an ATOM data feed provider is used for importing and refreshing data in the ATOM format.
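
The Atom import path can be pictured with a short, hedged sketch: Python's standard library stands in here for the VertiPaq import pipeline, and the inline feed, its entries, and `parse_atom_entries` are all illustrative, not part of PowerPivot's API. A real data feed would be fetched over HTTP rather than embedded as a string.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# A tiny inline Atom document standing in for a real data feed.
FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Sales</title>
  <entry><title>Store A</title><id>1</id></entry>
  <entry><title>Store B</title><id>2</id></entry>
</feed>"""

def parse_atom_entries(xml_text):
    """Return the entry titles from an Atom feed document."""
    root = ET.fromstring(xml_text)
    return [e.find(ATOM_NS + "title").text
            for e in root.findall(ATOM_NS + "entry")]

print(parse_atom_entries(FEED))  # -> ['Store A', 'Store B']
```

The takeaway is only that Atom is plain namespaced XML: each `<entry>` carries one record, which is why the same feed format serves both the initial import and later client-side refreshes.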

For connectivity to data sources such as Oracle, Teradata, DB2, Sybase, SQL Azure, OLE DB, and ODBC sources, and to commonly used file sources such as Excel or flat files, you must acquire and install the corresponding drivers manually.

PowerPivot for SharePoint installs on top of SharePoint 2010 and adds services and functionality to SharePoint. As we have seen, PowerPivot for Excel is an effective tool for creating and editing PowerPivot applications, while PowerPivot for SharePoint provides data collaboration, sharing, and reporting. Behind the scenes, SQL Server 2008 R2 provides a SharePoint 2010 integrated mode for Analysis Services, which includes the VertiPaq engine for in-memory data storage. It also helps in processing very large amounts of data, where high performance is accomplished through columnar storage and data compression.

PowerPivot workbooks can be quite large, and SharePoint 2010 has a default maximum file size limit of 50 MB. Depending on your enterprise storage policies, you need to change the file storage setting in SharePoint to publish and upload PowerPivot workbooks. Internally, the PowerPivot for SharePoint components, the PowerPivot System Service, and Analysis Services in VertiPaq mode provide the hosting capability for PowerPivot applications. For client connectivity, a web service component allows applications to connect to PowerPivot workbook data from outside the SharePoint farm.

There's more...

Creating an Excel workbook that contains PowerPivot data requires both Excel 2010 and the PowerPivot for Excel add-in. After you create the workbook, you can publish it to a SharePoint Server 2010 farm that has Excel Services, and a deployment of SQL Server PowerPivot for SharePoint. PowerPivot workbooks can be opened in Excel 2007.

However, Excel 2007 cannot be used to create or modify PowerPivot data, or to interact with PivotTables, or PivotCharts that use PowerPivot data. You must use Excel 2010 to get full access to all PowerPivot features.


Implementing Microsoft StreamInsight Technologies Platform

Every enterprise is under constant pressure to reduce the lag between data acquisition and data processing. When moving data from a traditional desktop-based application to an enterprise data warehouse application, the technology must support complex event processing and provide proactive results from monitoring.

SQL Server 2008 R2 introduces a new feature: StreamInsight. StreamInsight is a powerful platform that enables the processing of high-speed data streams from multiple data sources. The technology is useful for financial applications such as stock exchanges, fraud detection in banks, surveillance of Internet traffic by ISPs, and real-time business intelligence in manufacturing. The StreamInsight Standard edition is available in the SQL Server 2008 R2 Standard, Enterprise, and Web editions, and the StreamInsight Premium edition in the SQL Server 2008 R2 Datacenter, Developer, and Evaluation editions. The Premium edition of StreamInsight is recommended only when event rates exceed 5,000 events per second or latency requirements are below 5 seconds; for event rates under 5,000 events per second and/or latency requirements above 5 seconds, the Standard edition is recommended.

StreamInsight is built to detect, decide, and respond. Event processing is the main component: it monitors a system or process by looking for exceptional behavior and generating alerts when such behavior exceeds business process levels. The reaction takes the form of alerts; real-time applications require a recovery action, automated or manual. The event processing component keeps the logic that processes the events separate from the event producers and consumers. These actions can be divided into:

  • Deterministic: In this approach, there is an exact mapping between a situation in the real world and its representation in the event processing system
  • Approximate: Here, the event processing system provides an approximation to real-world events
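
StreamInsight itself is programmed against a .NET API; purely as an illustration of the detect-decide-respond idea described above, the following Python sketch scans an event stream and raises an alert whenever a reading exceeds a business threshold. The event shape, threshold, and function name are assumptions made for the example, not part of the StreamInsight API.

```python
def detect_alerts(events, threshold):
    """Scan a stream of (timestamp, value) events and yield an alert
    for every reading that exceeds the business threshold."""
    for ts, value in events:
        if value > threshold:
            yield (ts, value, "threshold exceeded")

# Simulated sensor stream; a real StreamInsight deployment would
# receive these events through an input adapter at high rates.
stream = [(1, 120), (2, 480), (3, 5200), (4, 310), (5, 7800)]
alerts = list(detect_alerts(stream, threshold=5000))
print(alerts)  # -> [(3, 5200, 'threshold exceeded'), (5, 7800, 'threshold exceeded')]
```

The point of the abstraction is visible even at this scale: the detection logic knows nothing about where events come from or where alerts go, mirroring how StreamInsight separates queries from producers and consumers.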

StreamInsight extends the advantage and effectiveness of event processing, which can reduce the substantial cost of ownership of an event processing application. The cost reduction is visible from the level of abstraction, which is similar to the relational database management system to hold data rather than using a filesystem component. The internals of event processing provide the abstractions for handling events that are higher than conventional programming languages; this is where the reduction in the cost of development and maintenance is visible.

In this recipe, we will look at how to implement StreamInsight onto your existing SQL Server 2008 R2 platform.

Getting ready

Like many other Microsoft products, the StreamInsight client package installs a subset of the technology, implementing only the client-side functionality. The package installs the components needed to connect to an existing StreamInsight server to manage queries, plus the debugger tool. For the SQL Server 2008 R2 RTM release, the StreamInsight client does not require a license, but on the server side the license needs to be part of the SQL Server license, as documented in the End-User License Agreement (EULA); see http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9e74964e-afc3-44c2-b05e-1ca87b701c2f&displaylang=en.

The Microsoft StreamInsight package is available for both 32-bit (x86) and 64-bit (x64) environments.

The prerequisites for the operating system and other components are as follows:

  • The supported operating systems are Windows Server 2008, Windows 7, Windows Vista, and Windows XP
  • Windows XP and Windows 2003 will have restrictions on the debugging tool, which has limited functionality; the event tracing functionality of the debugging tool is not designed to work on these operating systems
  • Microsoft .NET Framework 3.5 Service Pack 1 is required; to download it, refer to: http://go.microsoft.com/fwlink/?LinkId=128220
  • If there are any previous versions of Microsoft StreamInsight that exist in the system, uninstall them first to reinstall the latest version of Microsoft StreamInsight
  • To use the full functionality of Microsoft StreamInsight, you need to install Microsoft SQL Server Compact edition
  • The StreamInsight instance name is limited to 64 characters; it must not contain spaces or special characters and should start with a letter or an underscore

How to do it...

Microsoft StreamInsight has two methods to complete the installation process:

  • Attended Installation: It includes an interactive user interface throughout the setup process
  • Unattended Installation: It includes a command-line setup process that requires parameters during the installation to indicate the acceptance of license terms

It is essential to ensure that the prerequisites are complete before proceeding with the installation.

  1. To install the StreamInsight instance, double-click on the downloaded file and navigate to the Program Maintenance screen.
  2. Click Next to enter the instance name, then proceed to the next screen to enter the license key, or choose the Evaluation edition, which will be activated with a 180-day expiration.
  3. Now, specify service and group settings. It is essential to add the current user to the StreamInsight Users group.

  4. Choose Next to install the selected services.
  5. The client package installs the client-side libraries and it does not require a SQL Server 2008 R2 product key.
  6. Run the command prompt application as Administrator.
  7. For an unattended installation, the properties are defined using the PROPERTYNAME=VALUE method; property names are case-sensitive and defined in uppercase.
  8. The different property names are INSTANCENAME, PRODUCTKEY, CREATESERVICE, IACCEPTLICENSETERMS, and ADDUSERSTOGROUP.
  9. You must supply a value for the IACCEPTLICENSETERMS property; all other properties are optional.
  10. To perform a quiet installation, execute the following code from a command prompt window to create StreamInsight as a Windows service:

    StreamInsight.msi /quiet /log SI_install_log.txt INSTANCENAME=DEFAULT
    IACCEPTLICENSETERMS=YES PRODUCTKEY=00000-00000-00000-00000-00000
    CREATESERVICE=1

  11. For the full functionality of StreamInsight, we need to install the new version of SQL Server Compact edition 3.5 Service Pack 2.
  12. Ensure you remove any previous versions of SQL Server Compact using Programs | Add/Remove Programs option.
  13. The installation file for Microsoft SQL Server Compact is located in: C:\Program Files\Microsoft StreamInsight 1.0\Redist\SSCERuntime-ENU-x86.msi.
  14. From Start | Run, enter the path to the installation file mentioned previously. The SQL Server Compact 3.5 SP2 installation is a self-explanatory process, which will install the required StreamInsight components and adapters.
  15. To install the client package, enter the following as a command-line operation:

    StreamInsightClient.msi /quiet /log log.txt
    IACCEPTLICENSETERMS=YES

This completes the installation of the StreamInsight package on the server side and of SQL Server Compact 3.5 SP2, implementing StreamInsight on your existing SQL Server 2008 R2 platform.

How it works...

Microsoft StreamInsight supports two deployment scenarios for StreamInsight server:

  • Full integration into a StreamInsight application as a hosted DLL
  • A standalone server hosting multiple StreamInsight applications, run as a wrapper executable or packaged as a Windows service

The SQL Server 2008 R2 StreamInsight installation program installs the following on your machine:

  • The SDK to develop StreamInsight applications and adapters
  • The StreamInsight Event Flow Debugger for debugging StreamInsight applications
  • The runtime libraries for StreamInsight
  • The platform libraries in the form of a set of DLLs
  • The documentation and Readme file
  • The StreamInsight host, optionally registered as a Windows Service
  • The SQL CE installation package (required for StreamInsight servers that use the SQL CE-based metadata store)

 


Implementing SQL Azure connectivity features

SQL Azure is a relational database service in the cloud, which can be used to store business or system data. It is database-as-a-service for cloud applications, hosted by Microsoft. Its predecessor, SQL Data Services (SDS), was built on an architecture with full scalability, fault tolerance, and high availability; these services have been enhanced to offer relational database capabilities on top of the existing SDS architecture, under the name SQL Azure.

Like all other Windows Azure services in the cloud, SQL Azure is scalable, and Microsoft data centers provide these services with load balancing, failover, and replication capabilities. SQL Azure comprises three layers: infrastructure, platform, and services, which run inside the Microsoft data center, plus the client layer on the user side. At any point in time, SQL Azure maintains three replicas of each user application database. If one replica fails, the infrastructure and platform layers create a new replica so that three replicas remain available at all times.

SQL Azure doesn't provide features that exist in SQL Server such as Service Broker, CLR, and so on, but it provides the minimum features required to deploy and maintain a database. In this recipe, we will go through the important steps to implement SQL Azure's connectivity features to ensure uninterrupted data access.

Getting ready

SQL Azure is a core component but is not available to download like other Microsoft components.

As it is a service offering from the Microsoft data center, we require a registration that is commercially available on the Microsoft site. To register, visit: http://www.microsoft.com/en-us/sqlazure/account/default.aspx.

Successful registration is confirmed with an e-mail to the address provided during the registration process, containing the relevant login details for creating an application to be managed using SQL Azure. To create the relevant user privileges, complete the following steps:

  • On the Create Server page, create an administrator username and password, and choose a geo-location
    • The Administrator is the super-user of the database and should be used to execute administrative-related queries on the database
    • The geo-location will represent the data center in which your databases are hosted
  • Always configure the geo-location so that it is the same as the existing Windows Azure project
  • To create a database, view the connection strings and the server instance information using the Server Administration page

How to do it...

The SQL Azure connectivity features are divided into two patterns: code-near and code-far.

  • With code-near connectivity, the data access application is deployed in the same data center as the SQL Azure database. The application's web interface is exposed through a Windows Azure web role hosting an ASP.NET application.
  • With code-far connectivity, the application is deployed on-premises or in a different data center than SQL Azure. The application executes SQL queries against the SQL Azure database using the TDS protocol.

The recommended coding techniques for the SQL Azure connectivity features are as follows:

Use ADO.NET connection pooling, which increases code efficiency and removes the need to re-establish a login for every connection.

using (SqlConnection conn = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=DBIA-Azure"))
{
    conn.Open();
    // Pooled connection A is created.
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = ...;
        ...
    }
}
using (SqlConnection conn = new SqlConnection(...))
{
    conn.Open();
    // Process application steps.
}

Parameterization limits query compilations; the two methods are client-side and server-side.

Client-side parameterization using ADO.NET or ODBC, and server-side parameterization using sp_executesql, are the recommended approaches for reusing a single parameterized query plan.

The following is a parameterized query example that uses sp_executesql:

declare @param_value int, @sqlstring nvarchar(500),
        @param_definition nvarchar(500), @col2 int;
set @sqlstring = N'select @Sales_person = Person.Name from Person
    join Sales on Person.name = Sales.Name
    where Sales.SaleCount = @param';
set @param_definition = N'@param int, @Sales_person int output';

set @param_value = 10;
exec sp_executesql @sqlstring, @param_definition,
    @param = @param_value, @Sales_person = @col2 output;

Let us execute a similar query using ADO.NET (the default client-server application behavior). Here, the data type and length of each parameter must be specified to avoid any slowdown due to plan cache expansion:

cmd.CommandText = "SELECT Person.Name FROM dbo.Person WHERE Person.Type = @1";
cmd.Parameters.Add("@1", SqlDbType.NVarChar).Value = "16";
cmd.Parameters.Add("@1", SqlDbType.NVarChar).Value = "97";
// Without an explicit length, each value length compiles a separate
// plan, causing cache expansion and a slowdown in performance:
// (@1 nvarchar(1)) SELECT Person.Name FROM dbo.Person WHERE Person.Type = @1
// (@1 nvarchar(2)) SELECT Person.Name FROM dbo.Person WHERE Person.Type = @1

// If the length is specified, there won't be any cache expansion:
cmd.CommandText = "SELECT Person.Name FROM dbo.Person WHERE Person.Type = @1";
cmd.Parameters.Add("@1", SqlDbType.NVarChar, 128).Value = "16";
// (@1 nvarchar(128)) SELECT Person.Name FROM dbo.Person WHERE Person.Type = @1

For frequently executed queries, use stored procedures and batching within the code. Always limit the number of round-trips to the server and the number of columns to be returned.
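
The batching advice above can be sketched with `executemany`, which sends one parameterized statement for a whole batch of rows; here Python's sqlite3 module stands in for an ADO.NET connection to SQL Azure, and the table and data are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Person (Name TEXT, Type TEXT)")

rows = [("Alice", "16"), ("Bob", "97"), ("Cara", "16")]

# One parameterized statement for the whole batch: the same plan is
# reused for every row instead of compiling fresh ad hoc queries.
conn.executemany("INSERT INTO Person (Name, Type) VALUES (?, ?)", rows)

# Select only the columns you need, not SELECT *.
names = [r[0] for r in conn.execute(
    "SELECT Name FROM Person WHERE Type = ?", ("16",))]
print(names)  # -> ['Alice', 'Cara']
```

The same shape applies with SqlCommand in ADO.NET: keep the statement parameterized, send rows in batches, and trim the select list to reduce both round-trips and payload.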

The default connection timeout is 30 seconds, and the client retries connections before reporting a connection failure. If a large volume of data is transmitted, it is a good idea to build retry logic into the application. To find out more about SQL Azure connection retry, refer to the following MSDN blog post: http://blogs.msdn.com/b/bartr/archive/2010/06/18/sql-azure-connection-retry.aspx.

If, for any reason, there is a connection failure, reconnect immediately.

If the connection failure persists, wait for 10 seconds and retry. Check the network for any transient errors.
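
The retry steps above (reconnect immediately, then wait about 10 seconds before trying again) can be wrapped in a small helper. This is a sketch, not SQL Azure client code: the `ConnectionError` type, the `connect` callable, and the retry count are stand-ins for whatever your driver actually raises.

```python
import time

def connect_with_retry(connect, retries=3, wait_seconds=10, sleep=time.sleep):
    """Call connect(); on failure retry immediately once, then wait
    wait_seconds between further attempts, as the guidance suggests."""
    for attempt in range(retries):
        try:
            return connect()
        except ConnectionError:
            if attempt == retries - 1:
                raise                   # failure persists: give up
            if attempt > 0:
                sleep(wait_seconds)     # immediate retry first, then back off

# Simulate a transient failure: the first two attempts fail.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "connected"

print(connect_with_retry(flaky_connect, sleep=lambda s: None))  # -> connected
```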

Also check the status of the SQL Azure service as per your geo-location. Refer to http://www.microsoft.com/windowsazure/support/status/servicedashboard.aspx for the current status.

You can also use the SQL Azure web page to report any live-site issue if the connectivity errors are persistent.

Using the trace-connections feature of SQL Azure, you can trace connectivity problems when necessary. The CONTEXT_INFO value, which is set with a unique session GUID, can be used to obtain this information. For a trace-connectivity code example, refer to http://msdn.microsoft.com/en-us/library/ee730903.aspx.

Use TRY...CATCH blocks to handle transaction errors in database operations. SQL Azure rolls back the transaction automatically to free up the resources held by that transaction.

Further, best practices in coding include catching the transient connection termination errors, keeping the connection open and short, and executing the transaction in a continuous loop.
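
The "execute the transaction in a continuous loop" practice can be sketched like this, with sqlite3 standing in for SQL Azure and a simulated transient error; `run_in_transaction` and the retry count are illustrative names, not a SQL Azure API.

```python
import sqlite3

def run_in_transaction(conn, work, retries=3):
    """Run work(conn) inside a transaction, retrying on transient errors.
    On failure the transaction is rolled back before the next attempt."""
    for attempt in range(retries):
        try:
            with conn:                # commits on success, rolls back on error
                return work(conn)
        except sqlite3.OperationalError:
            if attempt == retries - 1:
                raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")

calls = {"n": 0}
def insert_once(c):
    calls["n"] += 1
    if calls["n"] == 1:               # simulate a transient termination
        raise sqlite3.OperationalError("transient")
    c.execute("INSERT INTO t (v) VALUES (1)")
    return "committed"

print(run_in_transaction(conn, insert_once))  # -> committed
```

Keeping the transaction body short and retrying the whole unit of work matches the advice above: the connection stays open, the failed attempt is rolled back, and the loop re-runs it from the top.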

For data transfer methods, always use the Bulk copy utility (bcp.exe) or the SSIS package. For best results on data synchronization, take advantage of the SQL Azure DataSync application from: http://www.microsoft.com/en-us/SQLAzure/datasync/default.aspx.

The recommended pattern is to build your database on-premises in SQL Server Express, generate a script, and then execute the script against the cloud database using either SQL Server Management Studio or SQLCMD.

When creating a logical SQL Azure server, choose the location that is closest to you, to avoid extra bandwidth costs and achieve better performance.

How it works...

The three layers of SQL Azure are responsible for all data access activity. The layers, as mentioned in the previous section, are infrastructure, platform, and services.

The infrastructure layer is responsible for providing the administration of the hardware and operating systems required by the services layer. It works similarly to a data-center layer, shared across multiple services in a data center.

The platform layer consists of the SQL Server instances and the SQL Azure fabric components, and required Management services. The instances are represented as the deployed databases and their replicas. The SQL Azure fabric is the internal framework, which automates the deployment, replication, failover, and load balancing of the database servers. This is the most important layer that is responsible for creating three replicas of the database instance to provide automatic failover capabilities to these instances.

The services layer, as its name suggests, comprises the external (customer) facing machines and acts as a gateway to the platform layer. It provides metering and account provisioning services to customers over the TDS protocol (which uses port 1433 over SSL). In addition, it maintains the connectivity routing to the primary database instance, keeps runtime information about your database replicas, and routes the TDS traffic coming from client applications to the appropriate primary instance.
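
Because clients reach this gateway over TDS on port 1433 with SSL, a SQL Azure connection string looks like a regular SQL Server one pointed at the gateway host. The helper below is a sketch; the server and credential values are placeholders, and the user@server login form reflects SQL Azure's documented convention.

```python
def azure_connection_string(server, database, user, password):
    """Build an ADO.NET-style connection string for the SQL Azure TDS
    gateway (tcp, port 1433, SSL encryption required)."""
    return (
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"User ID={user}@{server};"   # SQL Azure expects user@server
        f"Password={password};"
        "Encrypt=True;"
    )

print(azure_connection_string("myserver", "DBIA-Azure", "admin", "secret"))
```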

Connectivity to the SQL Azure database is managed by the Tabular Data Stream (TDS) protocol, with limited support compared to an on-premises SQL Server. However, the majority of SQL Server client APIs support the TDS protocol, so the features supported by SQL Azure work with existing client APIs. The SQL Azure network topology diagram (in the next screenshot) is reproduced with permission from the Microsoft Corporation: http://social.technet.microsoft.com/wiki/contents/articles/sql-azure-connection-management-in-sql-azure.aspx.

There's more...

For best performance, the code-far connectivity pattern benefits your application because of direct connectivity to the database in the cloud. The only restriction is that all client applications must use the TDS protocol for data access. Hence, it is essential to develop the client applications using SQL Server client APIs, such as ADO.NET, ODBC, and SQLNCLI.

Summary

This article covered the implementation phases of the new features in SQL Server 2008 R2 and the essential steps in planning and implementing Self-Service BI services, SQL Azure connectivity, and StreamInsight technologies. The next article will cover deploying Master Data Services.


About the Author


Satya Shyam K Jayanty

Satya Shyam K Jayanty is an Independent consultant working as a Principal Architect and Director for D Bi A Solutions Limited based in the UK. He has a Master's degree in Economics and Commerce from Osmania University, Hyderabad (India) and an MBA – Systems from the University of the West of England. Satya started his career in the IT industry in 1992, and he has been working with SQL Server (from version 4.2) for more than 15 years and has been an accomplished Microsoft SQL Server MVP for the last six years. Satya is a frequent speaker and SME volunteer at Microsoft Tech-Ed, SQL PASS, SQL Bits, Scottish Area SQL Server user group. He maintains a blog at http://www.sqlserver-qa.net and http://www.sql-server-performance.com.
