Tech News - Databases

233 Articles

Adding a user to an Azure SQL DB from Blog Posts - SQLServerCentral

Anonymous
05 Nov 2020
3 min read
Creating a user is simple, right? Yes and no. First of all, at least in SSMS, it appears you don't have a GUI. I don't use the GUI often unless I'm working on a T-SQL command I haven't used much before, but this could be a major shock for some people. I right-clicked on Security under the database and went to New -> User, and a new query window opened up with the following:

-- ========================================================================================
-- Create User as DBO template for Azure SQL Database and Azure SQL Data Warehouse Database
-- ========================================================================================
-- For login <login_name, sysname, login_name>, create a user in the database
CREATE USER <user_name, sysname, user_name>
FOR LOGIN <login_name, sysname, login_name>
WITH DEFAULT_SCHEMA = <default_schema, sysname, dbo>
GO

-- Add user to the database owner role
EXEC sp_addrolemember N'db_owner', N'<user_name, sysname, user_name>'
GO

Awesome! I did say I preferred code, didn't I? I am noticing a slight problem, though: I don't actually have a login yet. So I look in Object Explorer and there is no instance-level Security tab. On top of that, when I try to create a login with code I get the following error:

Msg 5001, Level 16, State 2, Line 1
User must be in the master database.

Well, ok. That's at least a pretty useful error. When I connect to the master database in SSMS (remember, you can only connect to one database at a time in Azure SQL DB) I do see a Security tab at the instance level and get the option to create a new login. Still a script, but that's fine.

-- ======================================================================================
-- Create SQL Login template for Azure SQL Database and Azure SQL Data Warehouse Database
-- ======================================================================================
CREATE LOGIN <SQL_login_name, sysname, login_name>
WITH PASSWORD = '<password, sysname, Change_Password>'
GO

So in the end you just need to create your login in master and your user in your user database. But do you really need to create a login? No, in fact you don't. Azure SQL DBs act like partially contained databases when it comes to users. That is, with one of these commands you can create a user that does not require a login and authenticates through the database:

CREATE USER Test WITH PASSWORD = '123abc*#$' -- SQL Server ID
CREATE USER Test FROM EXTERNAL PROVIDER -- Uses AAD

That said, I still recommend using a login in master. You can still specify the SID, and that means that if you are using a SQL ID (SQL holds the password) you can create a new DB and associate it to the same login without knowing the password.

The post Adding a user to an Azure SQL DB appeared first on SQLServerCentral.
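As a minimal sketch of the approach recommended above (the login name, password, and role here are placeholders, not part of the original post): create the login in master, then create the matching user in the user database and grant it a role.

-- Run while connected to master
CREATE LOGIN app_login WITH PASSWORD = 'StrongP@ssw0rd!';
-- Optionally add SID = 0x... here to reuse the SID of an existing login.

-- Run while connected to the user database
CREATE USER app_login FOR LOGIN app_login;
ALTER ROLE db_datareader ADD MEMBER app_login;

Granting db_datareader instead of db_owner is just an example; pick whatever role the account actually needs.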


Azure Stack and Azure Arc for data services from Blog Posts - SQLServerCentral

Anonymous
18 Nov 2020
6 min read
For those companies that can't yet move to the cloud, have certain workloads that can't move to the cloud, or have limited to no internet access, Microsoft has options to build your own private on-prem cloud via Azure Stack and Azure Arc. I'll focus this blog on using these products to host your databases.

Azure Stack is an extension of Azure that provides a way to run apps and databases in an on-premises environment and deliver Azure services via three options:

Azure Stack Hub: Run your own private, autonomous cloud (connected or disconnected) with cloud-native apps using consistent Azure services on-premises. Azure Stack Hub integrated systems are composed of racks of 4-16 servers built by trusted hardware partners and delivered straight to your datacenter. Azure Stack Hub is built on industry-standard hardware and is managed using the same tools you already use for managing Azure subscriptions. As a result, you can apply consistent DevOps processes whether you're connected to Azure or not. The Azure Stack Hub architecture lets you provide Azure services for remote locations with intermittent connectivity or disconnected from the internet. You can also create hybrid solutions that process data locally in Azure Stack Hub and then aggregate it in Azure for additional processing and analytics. Finally, because Azure Stack Hub is installed on-premises, you can meet specific regulatory or policy requirements with the flexibility of deploying cloud apps on-premises without changing any code. See Azure Stack Hub overview.

Azure Stack Edge: Get rapid insights with an Azure-managed appliance using compute and hardware-accelerated machine learning at edge locations for your Internet of Things (IoT) and AI workloads. Think of it as a much smaller version of Azure Stack Hub that uses purpose-built hardware-as-a-service such as the Pro GPU, Pro FPGA, Pro R, and Mini R. The Mini is designed to work in the harshest environmental conditions, supporting scenarios such as tactical edge, humanitarian and emergency response efforts. See Azure Stack Edge documentation.

Azure Stack HCI (preview): A hyperconverged infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads and their storage in a hybrid on-premises environment. Think of it as a virtualization fabric for VM or Kubernetes hosting – software only, to put on your certified hardware. See Azure Stack HCI solution overview.

These Azure Stack options are almost all VMs/IaaS, with no PaaS options for data services such as SQL Database (the only data service available is SQL Server in a VM). It is integrated, certified hardware and software run by Microsoft: just plug in and go. For support, there is "one throat to choke," as the saying goes. It is a great option if you are disconnected from Azure. It extends Azure management and security to any infrastructure and provides flexibility in deployment of applications, making management more consistent (a single view for on-prem, clouds, and edge). It brings the Azure fabric to your own data center but allows you to use your own security requirements. Microsoft orchestrates the upgrades of hardware, firmware, and software, but you control when those updates happen.

Azure Arc is a software-only solution that can be deployed on any hardware, including Azure Stack, AWS, or your own hardware.
With Azure Arc and Azure Arc-enabled data services (preview) you can deploy Azure SQL Managed Instance (SQL MI) and Azure Database for PostgreSQL Hyperscale to any of these environments, which requires Kubernetes. It can also manage SQL Server in a VM by just installing an agent on the SQL Server (see Preview of Azure Arc enabled SQL Server is now available). Any of these databases can then be easily moved from your hardware to Azure down the road. It allows you to extend Azure management across your environments, adopt cloud practices on-premises, and implement Azure security anywhere you choose. This allows for many options to use Azure Arc on Azure Stack or on other platforms.

Some features of Azure Arc:

- It can be used to solve for data residency requirements (data sovereignty)
- It is supported in disconnected and intermittently connected scenarios, such as air-gapped private data centers, cruise ships that are off the grid for multiple weeks, factory floors that have occasional disconnects due to power outages, etc.
- Customers can use Azure Data Studio (instead of the Azure Portal) to manage their data estate when operating in a disconnected/intermittently connected mode
- It could eventually support other products like Azure Synapse Analytics
- You can use larger hardware solutions and more hardware tiers than what is available in Azure, but you have to do your own HA/DR
- You are not charged if you shut down SQL MI, unlike in Azure, because it's your hardware; in Azure the hardware is dedicated to you even if you are not using it
- With Arc you are managing the hardware, but with Stack Microsoft is managing the hardware
- You can use modern cloud billing models on-premises for better cost efficiency
- With Azure Arc enabled SQL Server, you can use the Azure Portal to register and track the inventory of your SQL Server instances across on-premises, edge sites, and multi-cloud in a single view. You can also take advantage of Azure security services, such as Azure Security Center and Azure Sentinel, as well as use the SQL Assessment service
- Azure Stack Hub provides consistent hardware, but if you use your own hardware you have more flexibility and possibly cheaper hardware costs

These slides cover the major benefits of Azure Arc and what the architecture looks like, including the differences when you are connected directly vs connected indirectly (i.e., an Arc server that is not connected to the Internet must coordinate with a server that is connected), what an Azure Arc data services architecture looks like, and the differences with SQL databases.

Some of the top use cases we see with customers using Azure Stack and/or Azure Arc:

- Cloud-to-cloud failover
- On-prem databases with failover to cloud
- Easier migration: deploy locally, then flip a switch to go to the cloud

More info: Understanding Azure Arc Enabled SQL Server; What is Azure Arc Enabled SQL Managed Instance

The post Azure Stack and Azure Arc for data services first appeared on James Serra's Blog. The post Azure Stack and Azure Arc for data services appeared first on SQLServerCentral.
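Since a single inventory view across environments is one of the selling points above, it can be handy to record what each instance reports about itself. A purely illustrative T-SQL sketch (not from the post) that runs on SQL Server, Azure SQL Managed Instance, and Azure SQL Database:

-- Record which engine and version each instance reports
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('EngineEdition')  AS engine_edition,  -- 8 = Azure SQL Managed Instance
       @@VERSION                        AS version_string;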


Scott Mead: Tracking Postgres Stats from Planet PostgreSQL

Matthew Emerick
15 Oct 2020
9 min read
Database applications are living, [(sometimes) fire-]breathing systems that behave in unexpected ways. As a purveyor of the pgCraft, it's important to understand how to interrogate a Postgres instance and learn about the workload. This is critical for lots of reasons:

- Understanding how the app is using the database
- Understanding what risks there are in the data model
- Designing a data lifecycle management plan (i.e. partitions, archiving)
- Learning how the ORM is behaving towards the database
- Building a VACUUM strategy

There are lots of other reasons this data is useful, but let's take a look at some examples and get down to a few scripts you can use to pull this together into something useful. First, take a visit to the pgCraftsman's toolbox to find an easy-to-use snapshot script. This script is designed to be completely self-contained. It will run at whatever frequency you'd like and will save snapshots of the critical monitoring tables right inside your database. There are even a few reporting functions included to help you look at stats over time.

What to Watch

There are a number of critical tables and views to keep an eye on in the Postgres catalog. This isn't an exhaustive list, but a quick set that the toolbox script already watches:

- pg_stat_activity
- pg_locks
- pg_stat_all_tables
- pg_statio_all_tables
- pg_stat_all_indexes
- pg_stat_database

These tables and views provide runtime stats on how your application is behaving in regards to the data model. The problem with many of these is that they're either point-in-time (like pg_stat_activity) or cumulative (pg_stat_all_tables.n_tup_ins contains the cumulative number of inserts since pg_stat_database.stats_reset). In order to glean anything useful from these runtime performance views, you should be snapshotting them periodically and saving the results. I've seen (and built) lots of interesting ways to do this over the years, but the simplest way to generate some quick stats over time is with the PgCraftsman Toolbox script: pgcraftsman-snapshots.sql. This approach is great, but as you can guess, a small SQL script doesn't solve all the world's database problems. True, this script does solve 80% of them, but that's why it only took me 20% of the time.

Let's say I have a workload that I know nothing about; let's use pgcraftsman-snapshots.sql to learn about the workload and determine the best way to deal with it.

Snapshots

In order to build actionable monitoring out of the cumulative or point-in-time monitoring views, we need to snapshot the data periodically and compare between those snapshots. This is exactly what the pgcraftsman-snapshots.sql script does. All of the snapshots are saved in appropriate tables in a new 'snapshots' schema. The 'snapshot' function simply runs an INSERT as SELECT from each of the monitoring views. Each row is associated with the id of the snapshot being taken (snap_id). When it's all put together, we can easily see the number of inserts that took place in a given table between two snapshots, the growth (in bytes) of a table over snapshots, or the number of index scans against a particular index. Essentially, any data in any of the monitoring views we are snapshotting.
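As a rough sketch of that idea (this is not the pgcraftsman script; the mini_snap schema and its columns are invented purely for illustration), a snapshot of pg_stat_all_tables and a diff between two snapshots could look like this:

CREATE SCHEMA IF NOT EXISTS mini_snap;

CREATE TABLE IF NOT EXISTS mini_snap.snap (
    snap_id  bigserial PRIMARY KEY,
    taken_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS mini_snap.stat_all_tables (
    snap_id    bigint REFERENCES mini_snap.snap ON DELETE CASCADE,
    relid      oid,
    schemaname name,
    relname    name,
    n_tup_ins  bigint,
    n_tup_upd  bigint,
    n_tup_del  bigint,
    seq_scan   bigint,
    idx_scan   bigint
);

-- Take a snapshot: register a snap_id, then copy the cumulative counters
WITH s AS (INSERT INTO mini_snap.snap DEFAULT VALUES RETURNING snap_id)
INSERT INTO mini_snap.stat_all_tables
SELECT s.snap_id, t.relid, t.schemaname, t.relname,
       t.n_tup_ins, t.n_tup_upd, t.n_tup_del, t.seq_scan, t.idx_scan
FROM pg_stat_all_tables t, s;

-- Activity that happened between snapshots 1 and 2
SELECT b.relname,
       b.n_tup_ins - a.n_tup_ins AS ins,
       b.n_tup_upd - a.n_tup_upd AS upd,
       b.n_tup_del - a.n_tup_del AS del
FROM mini_snap.stat_all_tables a
JOIN mini_snap.stat_all_tables b USING (relid)
WHERE a.snap_id = 1 AND b.snap_id = 2
ORDER BY ins + upd + del DESC;

The real script covers far more views; this is just the diff-between-snapshots mechanic in miniature.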
1. Install pgcraftsman-snapshots.sql

❯ psql -h your.db.host.name -U postgres -d postgres -f pgcraftsman-snapshots.sql
SET
CREATE SCHEMA
SELECT 92
CREATE INDEX
SELECT 93
CREATE INDEX
SELECT 6
CREATE INDEX
SELECT 7
CREATE INDEX
CREATE INDEX
CREATE INDEX
SELECT 145
CREATE INDEX
SELECT 3
CREATE INDEX
SELECT 269
CREATE INDEX
CREATE INDEX
CREATE INDEX
SELECT 1
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TABLE
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE SEQUENCE
CREATE FUNCTION
 save_snap
-----------
         2
(1 row)

CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION
CREATE TYPE
CREATE FUNCTION

In addition to installing the snapshot schema, this script takes two initial snapshots for you. You can monitor the snapshots by running:

postgres=# select * from snapshots.snap;
 snap_id |             dttm
---------+-------------------------------
       1 | 2020-10-15 10:32:54.31244-04
       2 | 2020-10-15 10:32:54.395929-04
(2 rows)

You can also get a good look at the schema:

postgres=# set search_path=snapshots;
SET
postgres=# \dt+
                          List of relations
  Schema   |          Name          | Type  |  Owner   |    Size    | Description
-----------+------------------------+-------+----------+------------+-------------
 snapshots | snap                   | table | postgres | 8192 bytes |
 snapshots | snap_all_tables        | table | postgres | 96 kB      |
 snapshots | snap_cpu               | table | postgres | 8192 bytes |
 snapshots | snap_databases         | table | postgres | 8192 bytes |
 snapshots | snap_indexes           | table | postgres | 120 kB     |
 snapshots | snap_iostat            | table | postgres | 8192 bytes |
 snapshots | snap_load_avg          | table | postgres | 8192 bytes |
 snapshots | snap_mem               | table | postgres | 8192 bytes |
 snapshots | snap_pg_locks          | table | postgres | 16 kB      |
 snapshots | snap_settings          | table | postgres | 32 kB      |
 snapshots | snap_stat_activity     | table | postgres | 16 kB      |
 snapshots | snap_statio_all_tables | table | postgres | 72 kB      |
(12 rows)

postgres=# reset search_path;
RESET

There are a few tables here (snap_cpu, snap_load_avg, snap_mem) that seem interesting, eh? I'll cover these in a future post; we can't get that data from within a Postgres instance without a special extension installed or some external driver collecting it. For now, those tables will remain unused.

2. Take a snapshot

The snapshots.save_snap() function included with pgcraftsman-snapshots.sql does a quick save of all the metadata and assigns it all a new snap_id:

postgres=# select snapshots.save_snap();
 save_snap
-----------
         3
(1 row)

The output row is the snap_id that was just generated and saved. Every time you want to create a snapshot, just call:

select snapshots.save_snap();

The easiest way to do this is via cron or another similar job scheduler (pg_cron). I find it best to schedule these before large workload windows and after. If you have a 24-hour workload, find inflection points that you're looking to differentiate between.

Snapshot Performance

Questions here about the performance of a snapshot make lots of sense. If you look at the code of save_snap(), you'll see that the runtime of the process is going to depend on the number of rows in each of the catalog tables.
Those row counts depend on:

- pg_stat_activity: the number of connections to the instance
- pg_locks: the number of locks
- pg_stat_all_tables: the number of tables in the database
- pg_statio_all_tables: the number of tables in the database
- pg_stat_all_indexes: the number of indexes in the database
- pg_stat_database: the number of databases in the instance

For databases with thousands of objects, snapshots should be pruned frequently so that the snapshot mechanism itself does not cause performance problems.

Pruning old snapshots

Pruning old snapshots with this script is really easy. There is a relationship between the snapshots.snap table and all the others, so a simple 'DELETE FROM snapshots.snap WHERE snap_id = x;' will delete all the rows for the given snap_id.

3. Let the workload run

Let's learn a little bit about the workload that is running in the database. Now that we have taken a snapshot (snap_id = 3) before the workload, we're going to let the workload run for a bit, then take another snapshot and compare the difference. (Note: snapshots just read the few catalog tables I noted above and save the data. They don't start a process or run anything. The only thing that'll make your snapshots run long is if you have a large number of objects (schema, table, index) in the database.)

4. Take a 'post-workload' snapshot

After we've let the workload run for a while (5 minutes, 2 hours, 2 days... whatever you think will give the best approximation for your workload), take a new snapshot. This will save the new state of data and let us compare the before and after stats:

postgres=# select snapshots.save_snap();
 save_snap
-----------
         4
(1 row)

5. Analyze the report

There are two included functions for reporting across the workload:

select * from snapshots.report_tables(start_snap_id, end_snap_id);
select * from snapshots.report_indexes(start_snap_id, end_snap_id);

Both of these reports need a starting and an ending snap_id. You can get these by examining the snapshots.snap table:

postgres=# select * from snapshots.snap;
 snap_id |             dttm
---------+-------------------------------
       1 | 2020-10-15 10:32:54.31244-04
       2 | 2020-10-15 10:32:54.395929-04
       3 | 2020-10-15 10:56:56.894127-04
       4 | 2020-10-15 13:30:47.951223-04
(4 rows)

Our pre-workload snapshot was snap_id = 3 and our post-workload snapshot was snap_id = 4. Since we are reporting between two snapshots, we can see exactly what occurred between them: the number of inserts / updates / deletes / sequential scans / index scans, and even table growth (in bytes and human readable). The key is that this is just what took place between the snapshots. You can take a snapshot at any time and report across any number of them. (Note: You may need to side-scroll to see the full output.
I highly recommend it) postgres=# select * from snapshots.report_tables(3,4); time_window | relname | ins | upd | del | index_scan | seqscan | relsize_growth_bytes | relsize_growth | total_relsize_growth_bytes | total_relsize_growth | total_relsize | total_relsize_bytes -----------------+-------------------------+--------+--------+-----+------------+---------+----------------------+----------------+----------------------------+----------------------+---------------+--------------------- 02:33:51.057096 | pgbench_accounts | 0 | 588564 | 0 | 1177128 | 0 | 22085632 | 21 MB | 22085632 | 21 MB | 1590083584 | 1516 MB 02:33:51.057096 | pgbench_tellers | 0 | 588564 | 0 | 588564 | 0 | 1269760 | 1240 kB | 1597440 | 1560 kB | 1720320 | 1680 kB 02:33:51.057096 | pgbench_history | 588564 | 0 | 0 | | 0 | 31244288 | 30 MB | 31268864 | 30 MB | 31268864 | 30 MB 02:33:51.057096 | pgbench_branches | 0 | 588564 | 0 | 587910 | 655 | 1081344 | 1056 kB | 1146880 | 1120 kB | 1204224 | 1176 kB 02:33:51.057096 | snap_indexes | 167 | 0 | 0 | 0 | 0 | 49152 | 48 kB | 65536 | 64 kB | 204800 | 200 kB 02:33:51.057096 | snap_all_tables | 111 | 0 | 0 | 0 | 0 | 40960 | 40 kB | 40960 | 40 kB | 172032 | 168 kB 02:33:51.057096 | snap_statio_all_tables | 111 | 0 | 0 | 0 | 0 | 24576 | 24 kB | 24576 | 24 kB | 114688 | 112 kB 02:33:51.057096 | pg_statistic | 23 | 85 | 0 | 495 | 0 | 16384 | 16 kB | 16384 | 16 kB | 360448 | 352 kB 02:33:51.057096 | snap_pg_locks | 39 | 0 | 0 | 0 | 0 | 8192 | 8192 bytes | 32768 | 32 kB | 98304 | 96 kB 02:33:51.057096 | snap_stat_activity | 6 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 32768 | 32 kB 02:33:51.057096 | snap | 1 | 0 | 0 | 0 | 324 | 0 | 0 bytes | 0 | 0 bytes | 57344 | 56 kB 02:33:51.057096 | snap_settings | 1 | 0 | 0 | 1 | 1 | 0 | 0 bytes | 0 | 0 bytes | 114688 | 112 kB 02:33:51.057096 | snap_databases | 1 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 24576 | 24 kB 02:33:51.057096 | pg_class | 0 | 1 | 0 | 1448 | 200 | 0 | 0 bytes | 0 | 0 bytes | 245760 | 240 kB 02:33:51.057096 | pg_trigger | 0 | 0 | 0 | 3 | 0 | 0 | 0 bytes | 0 | 0 bytes | 65536 | 64 kB 02:33:51.057096 | sql_parts | 0 | 0 | 0 | | 0 | 0 | 0 bytes | 0 | 0 bytes | 49152 | 48 kB 02:33:51.057096 | pg_event_trigger | 0 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 16384 | 16 kB 02:33:51.057096 | pg_language | 0 | 0 | 0 | 1 | 0 | 0 | 0 bytes | 0 | 0 bytes | 73728 | 72 kB 02:33:51.057096 | pg_toast_3381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 8192 | 8192 bytes 02:33:51.057096 | pg_partitioned_table | 0 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 8192 | 8192 bytes 02:33:51.057096 | pg_largeobject_metadata | 0 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 8192 | 8192 bytes 02:33:51.057096 | pg_toast_16612 | 0 | 0 | 0 | 0 | 0 | 0 | 0 bytes | 0 | 0 bytes | 8192 | 8192 bytes This script is a building-block. If you have a single database that you want stats on, it’s great. If you have dozens of databases in a single instance or dozens of instances, you’re going to quickly wish you had this data in a dashboard of some kind. Hopefully this gets you started with metric building against your postgres databases. Practice the pgCraft, submit me a pull request! Next time, we’ll look more into some of the insights we can glean from the information we assemble here.
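The post suggests scheduling save_snap() with cron or pg_cron. A sketch of the pg_cron variant plus age-based pruning (assuming the pg_cron extension is installed and the cascading relationship from snapshots.snap described above; the schedules and the 30-day retention window are arbitrary):

-- Take a snapshot every 15 minutes
SELECT cron.schedule('*/15 * * * *', 'SELECT snapshots.save_snap()');

-- Prune snapshots older than 30 days, once a night at 03:00
SELECT cron.schedule('0 3 * * *',
    'DELETE FROM snapshots.snap WHERE dttm < now() - interval ''30 days''');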


Bruce Momjian: Thirty Years of Continuous PostgreSQL Development from Planet PostgreSQL

Matthew Emerick
14 Oct 2020
1 min read
I did an interview with EDB recently, and a blog post based on that interview was published yesterday. It covers the Postgres 13 feature set and the effects of open source on the software development process.


Alexey Lesovsky: Postgres 13 Observability Updates from Planet PostgreSQL

Matthew Emerick
15 Oct 2020
2 min read
New shiny Postgres 13 has been released, and now it's time for making some updates to the "Postgres Observability" diagram. The new release includes many improvements related to monitoring, such as new stats views and new fields added to existing views. Let's take a closer look at these.

The list of progress views has been extended with two new views. The first one is "pg_stat_progress_basebackup", which helps to observe running base backups and estimate their progress, ETA and other properties. The second view is "pg_stat_progress_analyze"; as the name suggests, it watches over running ANALYZE operations. The third new view is called "pg_shmem_allocations", which is supposed to be used for deeper inspection of how shared buffers are used. The fourth, and last, new view is "pg_stat_slru", related to the inspection of SLRU caches. Both of these recently added views help to answer the question "How does Postgres spend its allocated memory?"

Other improvements are general-purpose and related to the existing views. "pg_stat_statements" has a few modifications:

- New fields related to planning time have been added, and because of this the existing "time" fields have been renamed to execution time. So all monitoring tools that rely on pg_stat_statements should be adjusted accordingly.
- New fields related to WAL have been added; now it's possible to understand how much WAL has been generated by each statement.

WAL usage statistics have also been added to EXPLAIN (a new WAL keyword), auto_explain and autovacuum. WAL usage stats are appended to the logs (that is, if log_autovacuum_min_duration is enabled).

pg_stat_activity has a new column "leader_pid", which shows the PID of the parallel group leader and helps to explicitly associate background workers with their leader.

A huge thank you goes to the many who contributed to this new release, among which are my colleagues Victor Yegorov and Sergei Kornilov, and also those who help to spread the word about Postgres to other communities and across geographies. The post Postgres 13 Observability Updates appeared first on Data Egret.
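For example, assuming the pg_stat_statements extension is installed, the renamed and newly added columns can be combined to find the statements that spend the most execution time and generate the most WAL (column names as of Postgres 13):

SELECT queryid,
       calls,
       round(total_plan_time::numeric, 2) AS total_plan_ms,  -- new in Postgres 13
       round(total_exec_time::numeric, 2) AS total_exec_ms,  -- formerly total_time
       wal_records,                                          -- new in Postgres 13
       pg_size_pretty(wal_bytes)          AS wal_generated   -- new in Postgres 13
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;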


[Solved] SQL Backup Detected Corruption in the Database Log from Blog Posts - SQLServerCentral

Anonymous
06 Nov 2020
5 min read
Summary: In this article, we will discuss the 'SQL Backup Detected Corruption in the Database Log' error, describe the reasons behind it, and cover manual workarounds to resolve it. The article also explains an alternative solution that can be used to restore the database and its transaction log backup when the manual solutions fail.

When performing a transaction log backup for a SQL database, to restore the database after network maintenance or in the event of a crash, you may find the backup job failed with the following error:

Backup failed for Server xxx (Microsoft.SqlServer.SmoExtended)
System.Data.SqlClient.SqlError: BACKUP detected corruption in the database log. Check the errorlog for more information. (Microsoft.SqlServer.Smo)

The error message clearly indicates that the transaction log is damaged (corrupted). Checking the SQL errorlog for more details on the error shows:

2020-11-01 13:30:40.570 spid62 Backup detected log corruption in database TestDB. Context is Bad Middle Sector. LogFile: 2 'D:\Data\TestDB_log.ldf' VLF SeqNo: x280d VLFBase: x10df10000 LogBlockOffset: x10efa1000 SectorStatus: 2 LogBlock.StartLsn.SeqNo: x280d LogBlock.StartLsn.
2020-11-01 13:30:40.650 Backup Error: 3041, Severity: 16, State: 1.
2020-11-01 13:30:40.650 Backup BACKUP failed to complete the command BACKUP DATABASE TestDB. Check the backup application log for detailed messages

However, the FULL database backup completed successfully, and even running a DBCC CHECKDB integrity check didn't find any errors.

What Could Have Caused the SQL Transaction Log Backup to Fail?

A transaction log (T-log) backup allows restoring a database to a certain point in time, before the failure occurred. It does so by taking a backup of all the transaction logs created since the last log backup, including the corrupt portion of the T-log. This causes the backup to fail. However, a FULL database backup only has to back up the beginning of the last active part of the T-log at the time the backup is taken. Also, DBCC CHECKDB requires the same amount of log as the FULL database backup, at the time the database snapshot is generated. This is why the full backup executed successfully and no errors were reported by DBCC CHECKDB.

Manual Workarounds to Backup Detected Log Corruption in SQL Database

Following are the manual workarounds you can apply to resolve the SQL backup log corruption issue:

Workaround 1: Change the SQL Recovery Model from FULL to SIMPLE

To fix the 'SQL Server backup detected corruption in the database log' issue, try switching the database to the SIMPLE recovery model. Switching to the SIMPLE recovery model will ignore the corrupted portion of the T-log. Subsequently, change the recovery model back to FULL and execute the backups again. Here are the steps you need to perform to change the recovery model:

Step 1: Make sure there are no active users by stopping all user activity in the db.

Step 2: Change the db from the FULL to the SIMPLE recovery model. To do so, follow these steps:

- Open SQL Server Management Studio (SSMS) and connect to an instance of the SQL Server database engine.
- From Object Explorer, expand the server tree by clicking the server name.
- Next, depending on the db you are using, select a 'user database' or choose a 'system database' by expanding System Databases.
- Right-click the selected db, and then select Properties.
- In the Database Properties dialog box, click Options under 'Select a page'.
- Choose the Simple recovery model from the 'Recovery model' list box, and then click OK.

Step 3: Now set the db back to the FULL recovery model by following the same steps from 1 till 5 above. Then, select Full as your recovery model from the list box.

Step 4: Perform a FULL database backup again.

Step 5: Take log backups again.

Hopefully, performing these steps will help you perform the transaction log backup without any issue.

Note: This solution won't be feasible if you're using database mirroring for the database for which you have encountered the 'backup detected log corruption' error. That's because, in order to switch to the SIMPLE recovery model, you will need to break the mirror and then reconfigure the db, which can take a significant amount of time and effort. In this case, try the next workaround.

Workaround 2: Create Transaction Log Backup using the Continue on Error Option

To complete the backup of the T-log without any error, try running the log backup of the SQL database with the CONTINUE AFTER ERROR option. You can either choose to run the option directly from SSMS or execute a T-SQL script. The steps to run the 'Continue on error' option from SSMS are as follows:

Step 1: Run SSMS as an administrator.

Step 2: From the 'Back Up Database' window, click Options under 'Select a page' on the left panel. Then, select the 'Continue on error' checkbox under the Reliability section.

Step 3: Click OK.

Now, run the log backup to check if it starts without the backup detecting an error in the SQL database.

Ending Note

The above-discussed manual solutions won't work if the transaction log is missing or damaged, putting the database in suspect mode. In that case, you can try restoring the database from backups or run Emergency-mode repair to recover the db from suspect mode. However, none of the above solutions might work in case of severe database corruption in SQL Server. Also, implementing the 'Emergency-mode repair' method involves data loss risk. But, using a specialized SQL database repair software such as Stellar Repair for MS SQL can help you repair a severely corrupted database and restore it back to its original state in just a few steps. The software helps in repairing both SQL database MDF and NDF files. Once the MDF file is repaired, you can create a transaction log file of the database and back it up without encountering any error.

www.PracticalSqlDba.com

The post [Solved] SQL Backup Detected Corruption in the Database Log appeared first on SQLServerCentral.
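Both workarounds can also be scripted in T-SQL instead of clicking through SSMS. A rough sketch, using the TestDB name from the errorlog excerpt above and made-up backup paths:

-- Workaround 1: cycle the recovery model, then start a fresh backup chain
ALTER DATABASE TestDB SET RECOVERY SIMPLE;
ALTER DATABASE TestDB SET RECOVERY FULL;

BACKUP DATABASE TestDB
    TO DISK = N'D:\Backup\TestDB_full.bak'   -- hypothetical path
    WITH INIT, CHECKSUM;

BACKUP LOG TestDB
    TO DISK = N'D:\Backup\TestDB_log.trn';   -- hypothetical path

-- Workaround 2: force the log backup past the damaged portion
BACKUP LOG TestDB
    TO DISK = N'D:\Backup\TestDB_log_continue.trn'
    WITH CONTINUE_AFTER_ERROR;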

Hans-Juergen Schoenig: pg_squeeze: Optimizing PostgreSQL storage from Planet PostgreSQL

Matthew Emerick
14 Oct 2020
8 min read
Is your database growing at a rapid rate? Does your database system slow down all the time? And maybe you have trouble understanding why this happens? Maybe it is time to take a look at pg_squeeze and fix your database once and for all. pg_squeeze has been designed to shrink your database tables without downtime. No more need for VACUUM FULL – pg_squeeze has it all.

The first question any PostgreSQL person will ask is: why not use VACUUM or VACUUM FULL? There are various reasons:

- A normal VACUUM does not really shrink the table on disk. It will look for free space, but it won't return this space to the operating system.
- VACUUM FULL does return space to the operating system, but it needs a table lock. In case your table is small this usually does not matter. However, what if your table is many TBs in size? You cannot simply lock up a large table for hours just to shrink it after table bloat has ruined performance.

pg_squeeze can shrink large tables using only a small, short lock. However, there is more. The following listing contains some of the operations pg_squeeze can do with minimal locking:

- Shrink tables
- Move tables and indexes from one tablespace to another
- Index organize ("cluster") a table
- Change the on-disk FILLFACTOR

After this basic introduction it is time to take a look and see how pg_squeeze can be installed and configured.

PostgreSQL: Installing pg_squeeze

pg_squeeze can be downloaded for free from our GitHub repository. However, binary packages are available for most Linux distributions. If you happen to run Solaris, AIX, FreeBSD or some other less widespread operating system, just get in touch with us. We are eager to help. After you have compiled pg_squeeze or installed the binaries, some changes have to be made to postgresql.conf:

wal_level = logical
max_replication_slots = 10 # minimum 1
shared_preload_libraries = 'pg_squeeze'

The most important thing is to set wal_level to logical. Internally pg_squeeze works as follows: it creates a new datafile (snapshot) and then applies the changes made to the table while this snapshot is copied over. This is done using logical decoding. Of course logical decoding needs replication slots. Finally, the library has to be loaded when PostgreSQL is started. This is basically it – pg_squeeze is ready for action.

Understanding table bloat in PostgreSQL

Before we dive deeper into pg_squeeze it is important to understand table bloat in general. Let us take a look at the following example:

test=# CREATE TABLE t_test (id int);
CREATE TABLE
test=# INSERT INTO t_test SELECT * FROM generate_series(1, 2000000);
INSERT 0 2000000
test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 69 MB
(1 row)

Once we have imported 2 million rows, the size of the table is 69 MB. What happens if we update these rows and simply add one?

test=# UPDATE t_test SET id = id + 1;
UPDATE 2000000
test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 138 MB
(1 row)

The size of the table is going to double. Remember, UPDATE has to duplicate the row, which of course eats up some space. The most important observation, however, is this: if you run VACUUM, the size of the table on disk is still 138 MB – storage IS NOT returned to the operating system. VACUUM can shrink tables in some rare instances; however, in reality the table is basically never going to return space to the filesystem, which is a major issue. Table bloat is one of the most frequent reasons for bad performance, so it is important to either prevent it or make sure the table is allowed to shrink again.
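To get a feel for which tables are likely squeeze candidates in the first place, a rough starting point (not part of pg_squeeze; ranking purely by dead tuples and the 20-row limit are arbitrary choices) is the cumulative statistics Postgres already keeps:

SELECT schemaname,
       relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
       pg_size_pretty(pg_relation_size(relid)) AS table_size
FROM pg_stat_user_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC
LIMIT 20;

Dead tuples are only a proxy for bloat (space already reclaimed by VACUUM but not returned to the OS will not show up here), but it is a cheap first look.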
PostgreSQL: Shrinking tables again

If you want to use pg_squeeze you have to make sure that a table has a primary key. It is NOT enough to have unique indexes – it really has to be a primary key. The reason is that we use replica identities internally, so we basically suffer from the same restrictions as other tools using logical decoding. Let us add a primary key and squeeze the table:

test=# ALTER TABLE t_test ADD PRIMARY KEY (id);
ALTER TABLE
test=# SELECT squeeze.squeeze_table('public', 't_test', null, null, null);
 squeeze_table
---------------

(1 row)

Calling pg_squeeze manually is one way to handle a table. It is the preferred method if you want to shrink a table once. As you can see, the table is smaller than before:

test=# SELECT pg_size_pretty(pg_relation_size('t_test'));
 pg_size_pretty
----------------
 69 MB
(1 row)

The beauty is that minimal locking was needed to do that.

Scheduling table reorganization

pg_squeeze has a built-in job scheduler which can operate in many ways. It can tell the system to squeeze a table within a certain timeframe or trigger a process in case some thresholds have been reached. Internally pg_squeeze uses configuration tables to control its behavior. Here is how it works:

test=# \d squeeze.tables
                                      Table "squeeze.tables"
      Column      |       Type       | Collation | Nullable |                  Default
------------------+------------------+-----------+----------+--------------------------------------------
 id               | integer          |           | not null | nextval('squeeze.tables_id_seq'::regclass)
 tabschema        | name             |           | not null |
 tabname          | name             |           | not null |
 clustering_index | name             |           |          |
 rel_tablespace   | name             |           |          |
 ind_tablespaces  | name[]           |           |          |
 free_space_extra | integer          |           | not null | 50
 min_size         | real             |           | not null | 8
 vacuum_max_age   | interval         |           | not null | '01:00:00'::interval
 max_retry        | integer          |           | not null | 0
 skip_analyze     | boolean          |           | not null | false
 schedule         | squeeze.schedule |           | not null |
Indexes:
    "tables_pkey" PRIMARY KEY, btree (id)
    "tables_tabschema_tabname_key" UNIQUE CONSTRAINT, btree (tabschema, tabname)
Check constraints:
    "tables_free_space_extra_check" CHECK (free_space_extra >= 0 AND free_space_extra < 100)
    "tables_min_size_check" CHECK (min_size > 0.0::double precision)
Referenced by:
    TABLE "squeeze.tables_internal" CONSTRAINT "tables_internal_table_id_fkey" FOREIGN KEY (table_id) REFERENCES squeeze.tables(id) ON DELETE CASCADE
    TABLE "squeeze.tasks" CONSTRAINT "tasks_table_id_fkey" FOREIGN KEY (table_id) REFERENCES squeeze.tables(id) ON DELETE CASCADE
Triggers:
    tables_internal_trig AFTER INSERT ON squeeze.tables FOR EACH ROW EXECUTE FUNCTION squeeze.tables_internal_trig_func()

The last column here is worth mentioning: it is a custom data type capable of holding cron-style scheduling information.
The custom data type looks as follows:

test=# \d squeeze.schedule
            Composite type "squeeze.schedule"
    Column     |       Type       | Collation | Nullable | Default
---------------+------------------+-----------+----------+---------
 minutes       | squeeze.minute[] |           |          |
 hours         | squeeze.hour[]   |           |          |
 days_of_month | squeeze.dom[]    |           |          |
 months        | squeeze.month[]  |           |          |
 days_of_week  | squeeze.dow[]    |           |          |

If you want to make sure that pg_squeeze takes care of a table, simply insert the configuration into the table:

test=# INSERT INTO squeeze.tables (tabschema, tabname, schedule)
       VALUES ('public', 't_test', ('{30}', '{22}', NULL, NULL, '{3, 5}'));
INSERT 0 1

In this case public.t_test will be squeezed at 22:30h in the evening on every 3rd and 5th day of the week. The main question is: when is that? In our setup days 0 and 7 are Sundays, so 3 and 5 mean Wednesday and Friday at 22:30h. Let us check what the configuration looks like:

test=# \x
Expanded display is on.
test=# SELECT *, (schedule).* FROM squeeze.tables;
-[ RECORD 1 ]----+----------------------
id               | 1
tabschema        | public
tabname          | t_test
clustering_index |
rel_tablespace   |
ind_tablespaces  |
free_space_extra | 50
min_size         | 8
vacuum_max_age   | 01:00:00
max_retry        | 0
skip_analyze     | f
schedule         | ({30},{22},,,"{3,5}")
minutes          | {30}
hours            | {22}
days_of_month    |
months           |
days_of_week     | {3,5}

Once this configuration is in place, pg_squeeze will automatically take care of things. Everything is controlled by configuration tables, so you can easily control and monitor the inner workings of pg_squeeze.

Handling errors

If pg_squeeze decides to take care of a table, it can happen that the reorg process actually fails. Why is that the case? One might drop a table and recreate it, the structure might change, or pg_squeeze might not be able to get the brief lock at the end. Of course it is also possible that the tablespace you want to move a table to does not have enough space. There are many issues which can lead to errors, therefore one has to track those reorg processes. The way to do that is to inspect squeeze.errors:

test=# SELECT * FROM squeeze.errors;
 id | occurred | tabschema | tabname | sql_state | err_msg | err_detail
----+----------+-----------+---------+-----------+---------+------------
(0 rows)

This log table contains all the relevant information needed to track things fast and easily.

Finally …

pg_squeeze is not the only Open Source tool we have published for PostgreSQL. If you are looking for a cutting-edge scheduler we recommend taking a look at what pg_timetable has to offer. The post pg_squeeze: Optimizing PostgreSQL storage appeared first on Cybertec.
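If you later want to tune or stop the automatic processing of a table, the same squeeze.tables configuration table shown above is the place to do it. A small sketch (the threshold values are arbitrary; see the pg_squeeze documentation for the exact meaning and units of free_space_extra and min_size):

-- Make the scheduler less aggressive for this table
UPDATE squeeze.tables
SET free_space_extra = 30,  -- free-space threshold before a squeeze is triggered
    min_size = 64           -- ignore tables smaller than this
WHERE tabschema = 'public' AND tabname = 't_test';

-- Or stop scheduling the table entirely
DELETE FROM squeeze.tables
WHERE tabschema = 'public' AND tabname = 't_test';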


Daily Coping 30 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. Today's tip is to make a meal using a recipe or ingredient you've not tried before.

While I enjoy cooking, I haven't experimented a lot. Some, but not a lot. I made a few things this year that I've never made before, as an experiment. For example, I put together homemade ramen early in the pandemic, which was a hit. For me, I had never made donuts. We've enjoyed (too many) donuts during the pandemic, but most aren't gluten free. I have a cookbook that includes a recipe for donuts. It's involved, like bread, with letting the dough rise twice and then frying in oil. I told my daughter I'd make them and she got very excited. I didn't quite realize what I'd gotten myself into, and it was hours after my girl expected something, but they came out well. It felt good to make these. My Mom had made something similar when I was a kid, but I'd never done them until now. The post Daily Coping 30 Nov 2020 appeared first on SQLServerCentral.


Protect your SQL Server from Ransomware: Backup Your Service Master Key and More from Blog Posts - SQLServerCentral

Anonymous
03 Nov 2020
4 min read
Not many disaster recovery or SQL migration/upgrade scenarios require the SQL Server instance service master key to be restored. Some do. Recently, by far the most frequent and common disaster recovery scenario for my clients has been the need for a complete, bare-metal rebuild or restore of the master database. Not hardware failure but ransomware (crypto-locking file) attacks have been the cause.

You should consider complementary backup solutions that back up or snapshot the entire server (or VM) for SQL Server, but sometimes these technologies are limited or have too much of an impact on the server. A whole VM snapshot that is reliant on VSS, for example, could incur an unacceptably long IO stun duration when it occurs. Regardless, in all cases, SQL Server backups of each database should be taken regularly. This is a conversation for another blog post, but a typical pattern is weekly full backups, nightly differential backups, and, in the case of databases not in the SIMPLE recovery model, 15-minute transaction log backups.

In the case of SQL Servers needing a rebuild from nothing but SQL Server backups, some of the key pieces of information from this checklist will be helpful:

1. The exact SQL Server version of each instance to recover, so that you can restore system databases and settings. Storing the output of the @@VERSION global variable is helpful. Store this in server documentation.

2. The volume letters and paths of SQL Server data and log files would be helpful too. Output from the system view sys.master_files is helpful so that you can recreate the volumes. Store this in server documentation.

3. The service master key backup file and its password are needed to restore certain items in the master database like linked server information. Though the master database can be restored without restoring the service master key, some encrypted information will be unavailable and will need to be recreated. This is very easy to do, but the catch is making sure that the backup file created and its password are stored securely in enterprise security vault software. There are many options out there for something like this, I won't list any vendors, but you should be able to store both strings and small files securely, with metadata, and with enterprise security around it, like multi-factor authentication.

BACKUP SERVICE MASTER KEY -- not actually important for TDE, but important overall and should be backed up regardless.
TO FILE = 'E:\Program Files\Microsoft SQL Server\MSSQL14.SQL2K17\MSSQL\data\InstanceNameHere_SQLServiceMasterKey_20120314.snk'
ENCRYPTION BY PASSWORD = 'complexpasswordhere';

4. In the event they are present, database master key files. Here's an easy script to create backups of each database's symmetric master key, if it exists. Other keys in the database should be backed up as well, upon creation, and stored in your enterprise security vault.

exec sp_msforeachdb 'use [?];
if exists(select * from sys.symmetric_keys)
begin
select ''Database key(s) found in [?]''
select ''USE [?];''
select ''OPEN MASTER KEY DECRYPTION BY PASSWORD = ''''passwordhere''''; BACKUP MASTER KEY TO FILE = ''''c:\temp\?_''+name+''_20200131.snk'''' ENCRYPTION BY PASSWORD = ''''passwordhere'''';GO ''
from sys.symmetric_keys;
END'

5. Transparent Data Encryption (TDE) certificates, keys and passwords. You should have set these up upon creation, backed them up and stored them in your enterprise security vault. For example:

BACKUP CERTIFICATE TDECert_enctest
TO FILE = 'E:\Program Files\Microsoft SQL Server\MSSQL14.SQL2K17\MSSQL\data\TestingTDEcert.cer'
WITH PRIVATE KEY (
FILE = 'E:\Program Files\Microsoft SQL Server\MSSQL14.SQL2K17\MSSQL\data\TestingTDEcert.key', -- This is a new key file for the cert backup, NOT the same as the key for the database MASTER KEY
ENCRYPTION BY PASSWORD = '$12345testpassword123' ); -- This password is for the cert backup's key file.

6. Shared Access Signature certificates, in the cases where your SQL Server has been configured to use a SAS certificate to, for example, send backups directly to Azure Blob Storage via the Backup to URL feature. You should save the script used to create the SAS certificate when it is created, and store it in your enterprise security vault.

7. The Integration Services SSISDB database password for the SSIS Catalog. You created this password when you created the SSISDB catalog, and stored it in your enterprise security vault. You can always try to open the key to test whether or not your records are correct:

OPEN MASTER KEY
DECRYPTION BY PASSWORD = N'[old_password]'; -- Password used when creating SSISDB

More information here on restoring the SSISDB key: https://techcommunity.microsoft.com/t5/sql-server-integration-services/ssis-catalog-backup-and-restore/ba-p/388058

8. The Reporting Services (SSRS) encryption key and password. Back up and restore this key using the Reporting Services Configuration Manager, and store them in your enterprise security vault.

In the comments: what other steps have you taken to prevent or recover a SQL Server from a ransomware attack?

The post Protect your SQL Server from Ransomware: Backup Your Service Master Key and More appeared first on SQLServerCentral.
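When the time comes to rebuild a server from these backups, the restore statements are the mirror image of the backup commands above. A rough sketch with hypothetical file paths (the passwords are the ones used when the backups were taken):

-- Restore the instance service master key on the rebuilt server
RESTORE SERVICE MASTER KEY
    FROM FILE = 'E:\Restore\InstanceNameHere_SQLServiceMasterKey_20120314.snk' -- hypothetical path
    DECRYPTION BY PASSWORD = 'complexpasswordhere';

-- Restore a database master key inside a user database
RESTORE MASTER KEY
    FROM FILE = 'E:\Restore\MyDatabase_MasterKey.snk'  -- hypothetical path
    DECRYPTION BY PASSWORD = 'passwordhere'            -- password used for the backup file
    ENCRYPTION BY PASSWORD = 'newStrongPasswordHere';  -- new password protecting the key in the database

-- Re-create a TDE certificate in master from its backup
CREATE CERTIFICATE TDECert_enctest
    FROM FILE = 'E:\Restore\TestingTDEcert.cer'
    WITH PRIVATE KEY (
        FILE = 'E:\Restore\TestingTDEcert.key',
        DECRYPTION BY PASSWORD = '$12345testpassword123' );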


Azure SQL Database and Memory from Blog Posts - SQLServerCentral

Anonymous
22 Nov 2020
1 min read
There are many factors to consider when you are thinking about the move to Azure SQL Database (PaaS) – this could be anything from single databases (provisioned compute or serverless) to elastic pools. Going through your head should be how many vCores … Continue reading. The post Azure SQL Database and Memory appeared first on SQLServerCentral.
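One quick way to see how close an existing Azure SQL Database is to the memory of its service tier is the resource statistics DMV the service exposes; a small sketch (not from the post itself):

-- Roughly the last hour of usage, sampled every 15 seconds
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;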

The Learning Curve for DevOps from Blog Posts - SQLServerCentral

Anonymous
16 Nov 2020
1 min read
If you’re attempting to implement automation in and around your deployments, you’re going to find there is quite a steep learning curve for DevOps and DevOps-style implementations. Since adopting a DevOps-style release cycle does, at least in theory, speed your ability to deliver better code safely, why would it be hard? Why is there a […] The post The Learning Curve for DevOps appeared first on Grant Fritchey. The post The Learning Curve for DevOps appeared first on SQLServerCentral.


SQLpassion Black Friday Deal 2020 from Blog Posts - SQLServerCentral

Anonymous
16 Nov 2020
1 min read
(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan - you get a weekly email packed with all the essential knowledge you need to know about performance tuning on SQL Server.)

As we all know, Black Friday is approaching quite fast, and therefore I want to offer you a great deal from my side. During the following 2 weeks I will offer my available Online Trainings at a 60% (!) discounted price:

- Design, Deploy, and Optimize SQL Server on VMware
- SQL Server on VMware – Best Practices
- SQL Server on Linux, Docker, and Kubernetes
- SQL Server Query Tuning Strategies
- SQL Server In-Memory Technologies
- SQL Server Performance Troubleshooting
- SQL Server Availability Groups
- SQL Server Extended Events

So, hurry and sign up for one (or even more) of my Online Trainings!

Thanks for your time,
-Klaus

The post SQLpassion Black Friday Deal 2020 appeared first on SQLServerCentral.


Speaking at DPS 2020 from Blog Posts - SQLServerCentral

Anonymous
19 Nov 2020
1 min read
I was lucky enough to attend the Data Platform Summit a few years ago. One of my favorite speaking photos was from the event: me on a massive stage, in a massive auditorium, with a huge screen. This year the event is virtual and I'm on the slate with a couple of talks. I'm doing a blogging session and a DevOps session. Both are recorded, but I'll be online for chat, and certainly available for questions later. There are tons of sessions, with pre-cons and post-cons, running around the world. It's inexpensive, so if you missed the PASS Summit or SQL Bits, join DPS: US$124.50 for the event and recordings. Pre/post cons are about $175. Register today and I'll see you there. The post Speaking at DPS 2020 appeared first on SQLServerCentral.

Daily Coping 24 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
24 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to look at life through someone else’s eyes and see their perspective. In today’s world, in many places, I find that people lack the ability to look at the world through other’s eyes. We’ve lost some empathy and willingness to examine things from another perspective. This is especially true during the pandemic, where politics and frustrations seem to be overwhelming. I have my own views, but I had the chance to hang out with a friend recently. This person sees the world differently than I, but I decided to understand, not argue or complain. In this case, the person talked a bit about why they agreed or disagreed with particular decisions or actions by the state or individuals. I asked questions for clarification or more detail, but allowed this person to educate me on their point of view. It was a good conversation, in a way that’s often lost in the media or in larger groups. I didn’t agree with everything, and did feel there were some emotions overriding logic, but I could understand and appreciate the perspective, even if I disagreed with portions. I like these conversations and I wish we could have more of them in small groups, in a civilized fashion. The post Daily Coping 24 Nov 2020 appeared first on SQLServerCentral.


Kubernetes Precon at DPS from Blog Posts - SQLServerCentral

Anonymous
21 Nov 2020
2 min read
Pre-conference Workshop at Data Platform Virtual Summit 2020

I'm proud to announce that I will be presenting a pre-conference workshop at Data Platform Virtual Summit 2020, split into two four-hour sessions on 30 November and 1 December! This one won't let you down! Here are the start and stop times in various time zones:

Time Zone | Start | Stop
EST | 5.00 PM | 9.00 PM
CET | 11.00 PM | 3.00 AM (+1)
IST | 3.30 AM (+1) | 7.30 AM (+1)
AEDT | 9.00 AM (+1) | 1.00 PM (+1)

The workshop is "Kubernetes Zero to Hero – Installation, Configuration, and Application Deployment".

Abstract: Modern application deployment needs to be fast and consistent to keep up with business objectives, and Kubernetes is quickly becoming the standard for deploying container-based applications fast. In this day-long session, we will start with container fundamentals and then get into Kubernetes with an architectural overview of how it manages application state. Then you will learn how to build a cluster. With our cluster up and running, you will learn how to interact with the cluster, cover common administrative tasks, and then wrap up with how to deploy applications and SQL Server. At the end of the session, you will know how to set up a Kubernetes cluster, manage a cluster, deploy applications and databases, and keep everything up and running.

PS: This class will be recorded, and registered attendees will get 12 months of streaming access to the recorded class. The recordings will be available within 30 days of class completion.

Workshop Objectives:

- Introduce Kubernetes Cluster Components
- Introduce Kubernetes API Objects and Controllers
- Installing Kubernetes
- Interacting with your cluster
- Storing persistent data in Kubernetes
- Deploying Applications in Kubernetes
- Deploying SQL Server in Kubernetes
- High Availability scenarios in Kubernetes

Click here to register now! The post Kubernetes Precon at DPS appeared first on Centino Systems Blog. The post Kubernetes Precon at DPS appeared first on SQLServerCentral.