Tech News - Databases

233 Articles

Data Architecture Blog Post: CI CD in Azure Synapse Analytics Part 1 from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
1 min read
Hello Dear Reader! It's been a while. I've got a new blog post over on the Microsoft Data Architecture Blog on using Azure Synapse Analytics, titled CI CD in Azure Synapse Analytics Part 1. I'm not sure how many numbers will be in this series. I have at least 2 planned. We will see after that. So head over and read up my friends! As always, thank you for stopping by. Thanks, Brad. The post Data Architecture Blog Post: CI CD in Azure Synapse Analytics Part 1 appeared first on SQLServerCentral.


Daily Coping 2 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to tune in to a different radio station or TV channel. I enjoy sports radio. For years while I commuted, I caught up on what was happening with the local teams in the Denver area. With the pandemic, I go fewer places, and I more rarely listen to the station. I miss that a bit, but when I tuned in online, I found some different hosts. One that I used to really enjoy listening to is Alfred Williams. He played for the Broncos, and after retirement, I enjoyed hearing him on the radio. I looked around, and found him on 850KOA. I’ve made it a point to periodically listen in the afternoon, hear something different, and enjoy Alfred’s opinions and thoughts again. The post Daily Coping 2 Dec 2020 appeared first on SQLServerCentral.


Azure Synapse Analytics is GA! from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
2 min read
(Note: I will give a demo on Azure Synapse Analytics this Saturday Dec 5th at 1:10pm EST, at the PASS SQL Saturday Atlanta BI (info) (register) (full schedule))

Great news! Azure Synapse Analytics is now GA (see announcement). While most of the features are GA, there are a few that are still in preview. For those of you who were using the public preview version of Azure Synapse Analytics, nothing has changed – just access your Synapse workspace as before. For those of you who have a Synapse database (i.e. SQL DW database) that was not under a Synapse workspace, your existing data warehouse resources are now listed under "Dedicated SQL pool (formerly SQL DW)" in the Azure portal (where you can still create a standalone database, called a SQL pool). You now have three options going forward for your existing database:

Standalone: Keep the database (called a SQL pool) as is and get none of the new workspace features listed here, but you are able to continue using your database, operations, automation, and tooling like before with no changes.

Enable Azure Synapse workspace features: Go to the overview page for your existing database and choose "New synapse workspace" in the top menu bar and get all the new features except unified management and monitoring. All management operations will continue via the SQL resource provider. Except for SQL requests submitted via Synapse Studio, all SQL monitoring capabilities remain on the database (dedicated SQL pool). For more details on the steps to enable the workspace features, see Enabling Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW).

Migrate to Azure Synapse workspace: Create a user-defined restore point through the Azure portal, create a new Synapse workspace or use an existing one, and then restore the database and get all the new features. All monitoring and management is done via the Synapse workspace and the Synapse Studio experience.

More info: Microsoft introduces Azure Purview data catalog; announces GA of Synapse Analytics

The post Azure Synapse Analytics is GA! first appeared on James Serra's Blog. The post Azure Synapse Analytics is GA! appeared first on SQLServerCentral.


rob sullivan: Using pg_repack in AWS RDS from Planet PostgreSQL

Matthew Emerick
13 Oct 2020
4 min read
As your database keeps growing, there is a good chance you're going to have to address database bloat. While Postgres 13 has launched with some exciting features with built-in methods to rebuild indexes concurrently, many people still end up having to use pg_repack to do an online rebuild of the tables to remove the bloat. Customers on AWS RDS struggle figuring out how to do this. Ready to learn how? Since you have no server to access the local binaries, and because AWS RDS provides no binaries for the versions they are using, you're going to have to build your own. This isn't as hard as one might think because the official pg repos have an installer (ie: sudo apt install postgresql-10-pg_repack). If you don't use the repos, the project itself is an open source project with directions: http://reorg.github.io/pg_repack/

While you were getting up to speed above, I was spinning up a postgres 10.9 db on RDS. I started it yesterday so that it would be ready by the time you got to this part of the post. Let's create some data:

-- let's create the table
CREATE TABLE burritos (
    id SERIAL UNIQUE NOT NULL primary key,
    title VARCHAR(10) NOT NULL,
    toppings TEXT NOT NULL,
    thoughts TEXT,
    code VARCHAR(4) NOT NULL,
    UNIQUE (title, toppings)
);

-- disable auto vacuum
ALTER TABLE burritos SET (autovacuum_enabled = false, toast.autovacuum_enabled = false);

-- orders up
INSERT INTO burritos (title, toppings, thoughts, code)
SELECT left(md5(i::text), 10), md5(random()::text), md5(random()::text), left(md5(random()::text), 4)
FROM GENERATE_SERIES(1, 1000000) s(i);

UPDATE burritos SET toppings = md5(random()::text) WHERE id < 250;
UPDATE burritos SET toppings = md5(random()::text) WHERE id between 250 and 500;
UPDATE burritos SET code = left(md5(random()::text), 4) WHERE id between 2050 and 5000;
UPDATE burritos SET thoughts = md5(random()::text) WHERE id between 10000 and 20000;
UPDATE burritos SET thoughts = md5(random()::text) WHERE id between 800000 and 900000;
UPDATE burritos SET toppings = md5(random()::text) WHERE id between 600000 and 700000;

(If you are curious how Magistrate presents bloat, there is a clip of the screen in the original post.) Much like a human that has had that much interaction with burritos... our database has quite a bit of bloat. Assuming we already have the pg_repack binaries in place, either through compilation or installing the package on the OS, we now need to enable the extension. We've put together a handy reference for installing extensions to get you going. pg_repack has a lot of options.
Feel free to check them out, but I'm going to start packing:

/usr/local/bin/pg_repack -U greataccounthere -h bloatsy.csbv99zxhbsh.us-east-2.rds.amazonaws.com -d important -t burritos -j 4
NOTICE: Setting up workers.conns
ERROR: pg_repack failed with error: You must be a superuser to use pg_repack

This might feel like game over because of the implementation of superuser on RDS, but the trick is to take a leap of faith and add another flag (-k) that skips the superuser check:

/usr/local/bin/pg_repack-1.4.3/pg_repack -U greataccounthere -h bloatsy.csbv99zxhbsh.us-east-2.rds.amazonaws.com -k -d important -t burritos -j 4
NOTICE: Setting up workers.conns
INFO: repacking table "public.burritos"
LOG: Initial worker 0 to build index: CREATE UNIQUE INDEX index_16449 ON repack.table_16442 USING btree (id) TABLESPACE pg_default
LOG: Initial worker 1 to build index: CREATE UNIQUE INDEX index_16451 ON repack.table_16442 USING btree (title, toppings) TABLESPACE pg_default
LOG: Command finished in worker 0: CREATE UNIQUE INDEX index_16449 ON repack.table_16442 USING btree (id) TABLESPACE pg_default
LOG: Command finished in worker 1: CREATE UNIQUE INDEX index_16451 ON repack.table_16442 USING btree (title, toppings) TABLESPACE pg_default

It works! The table is feeling fresh and tidy and your application has a little more pep in its step. When using Magistrate our platform matrix also knows when you have pg_repack installed and gives you the commands to run for tables it detects with high bloat percentage.
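As an aside (a minimal sketch of my own, not from the post): the extension-enablement step mentioned above is plain SQL once you are connected to the RDS database, assuming the RDS engine version ships a pg_repack extension that matches the client binary you built; the account and database are whatever you normally connect with.

-- run as the RDS master (rds_superuser) account, connected to the target database
CREATE EXTENSION IF NOT EXISTS pg_repack;

-- confirm the installed extension version matches your pg_repack client binary
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'pg_repack';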


Migrating from SQL Server to Amazon AWS Aurora from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
2 min read
Is Microsoft’s licensing scheme getting you down? It’s 2020 and there are now plenty of data platforms that are good for running your enterprise data workloads. Amazon’s Aurora PaaS service runs either MySQL or PostgreSQL. I’ve been supporting SQL Server for nearly 22 years and I’ve seen just about everything when it comes to bugs or performance problems and am quite comfortable with SQL Server as a data platform; so, why migrate to something new? Amazon’s Aurora has quite a bit to offer and they are constantly improving the product. Since there are no license costs, its operating expenditures are much more reasonable. Let’s take a quick look to compare a 64 core Business Critical Azure Managed Instance with a 64 core instance of Aurora MySQL. What about Aurora? Two nodes of Aurora MySQL are less than half the cost of Azure SQL Server Managed Instances. It’s also worth noting that Azure Managed Instances only support 100 databases and only have 5.1 GB of RAM per vCore. Given the 64 core example, there’s only 326.4 GB of RAM compared to the 512 GB selected in the Aurora instance. This post wasn’t intended to be about the “Why” of migrating; so, let’s talk about the “How”. Migration at a high level takes two steps: Schema Conversion and Data Migration. Schema Conversion is made simple with AWS SCT (Schema Conversion Tool). Walking through a simple conversion, note that the JDBC drivers for SQL Server are required. You can’t use “.” for a local host, which is a little annoying, but typing the server name is easy enough. The dark blue items in the graph represent complex actions, such as converting triggers; since triggers aren’t a concept used in MySQL they aren’t a simple 1:1 conversion. Migrating to Aurora from SQL Server can be simple with AWS SCT and is a cost-saving move that also modernizes your data platform. Next we’ll look at AWS DMS (Data Migration Service). Thanks to the engineers at AWS, migrating to Aurora PostgreSQL is even easier. Recently Babelfish for Aurora PostgreSQL was announced, which is a product that allows SQL Server’s T-SQL code to run on PostgreSQL. The post Migrating from SQL Server to Amazon AWS Aurora appeared first on SQLServerCentral.
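As a rough, hedged illustration (my own sketch, not part of the original post): before pointing AWS SCT at a SQL Server database, it can help to inventory the programmable objects that typically need manual attention in a conversion. This only uses the standard catalog views; the object-type list is just an example.

-- counts of triggers, procedures, and functions that often need hand review in a conversion
SELECT type_desc, COUNT(*) AS object_count
FROM sys.objects
WHERE type IN ('TR', 'P', 'FN', 'IF', 'TF')   -- triggers, procs, scalar/inline/table functions
GROUP BY type_desc
ORDER BY object_count DESC;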


Protocol flaw in MySQL client allows MySQL server to request any local file from MySQL client

Melisha Dsouza
21 Jan 2019
2 min read
Last week, Willem de Groot, a digital forensics consultant, discovered a protocol flaw in MySQL, which he alleges is the main reason behind e-commerce and government sites getting hacked via the Adminer database tool. He stated that Adminer can be “lured to disclose arbitrary files”, which attackers can then misuse to fetch passwords for popular apps such as Magento and Wordpress, thus gaining control of a site’s database. Because of this flaw, the MySQL client allows a MySQL server to request any local file by default. He further states that an example of such a malicious MySQL server can be found on GitHub, one that was “likely used to exfiltrate passwords from these hacked sites”. A reddit user also pointed out that the flaw could be further exploited to steal SSH keys and crypto wallets. The only catch is that the server has to know the full path of the file on the client to exploit this flaw. Unlike Adminer, several clients and libraries, including Golang, Python, and PHP-PDO, have built-in protection for this “feature” or disable it by default. This behavior is, surprisingly, described in the MySQL documentation itself. You can head over to Willem de Groot’s blog for more insights on this news. Alternatively, head over to his Twitter thread for a more in-depth discussion on the topic.
How to optimize MySQL 8 servers and clients
6 reasons to choose MySQL 8 for designing database solutions
12 most common MySQL errors you should be aware of
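Returning to the flaw itself, the mechanism is MySQL's LOCAL data-loading behaviour, where the client sends a file the server asks for. As a hedged sketch (my own, not from the article), you can check and turn off that behaviour server-side; note that protecting a client which connects to an untrusted server still requires the client-side option (for example --local-infile=0 on the mysql CLI, or the equivalent connector setting).

-- check whether LOAD DATA LOCAL is currently permitted
SELECT @@global.local_infile;

-- disable it (requires the appropriate admin privilege)
SET GLOBAL local_infile = 0;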

Requesting Comments on the SQLOrlando Operations Manual from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
1 min read
For the past couple of weeks I’ve been trying to capture a lot of ideas about how and what and why we do things in Orlando and put them into an organized format. I’m sharing here in hopes that some of you will find it useful and that some of you will have questions, comments, or suggestions that would make it better. I’ll write more about it later this week; for now I’ll let the document stand on its own, with one exception – below is a list of all the templates we have in Trello that have the details on how to do many of our recurring tasks. I’ll share all of that in the next week or so as well. SQLOrlando Operating Manual (download) The post Requesting Comments on the SQLOrlando Operations Manual appeared first on SQLServerCentral.


Would You Pass the SQL Server Certifications Please? What Do You Mean We're Out? from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
5 min read
I have held various certifications through my DBA career, from CompTIA A+ certification back when I worked help desk (I'm old) through the various MCxx that Microsoft has offered over the years (although I never went for Microsoft Certified Master (MCM), which I still regret). I have definitely gotten some mileage out of my certs over the years, getting an interview or an offer not just because I was certified, but rather because I had comparable job experience to someone else *and* I was certified, nudging me past the other candidate. I am currently an MCSA: SQL 2016 Database Administration and an MCSE: Data Management and Analytics, which is pretty much the top of SQL Server certifications currently available. I also work for a company that is a Microsoft partner (and have previously worked for other Microsoft partners), and part of the requirements to become (and stay) a Microsoft partner is maintaining a certain number of employees certified at certain levels of certification dependent on your partnership level. I completed the MCSE back in 2019, and my company is starting to have a new re-focus on certifications (a pivot, so to speak - I hate that term but it is accurate), so I went out to look at what my options were.  We have two SQL Server versions past SQL Server 2016 at this point, so there must be something else right? On top of that, the MCSA and MCSE certs I currently have are marked to expire *next* month (January 2021 - seriously, check it out HERE)...so there *MUST* be something else right - something to replace it with or to upgrade to? I went to check the official Microsoft certifications site (https://docs.microsoft.com/en-us/learn/certifications/browse/?products=sql-server&resource_type=certification) and found that the only SQL Server-relevant certification beyond the MCSE: Data Management and Analytics is the relatively new "Microsoft Certified: Azure Database Administrator Associate" certification (https://docs.microsoft.com/en-us/learn/certifications/azure-database-administrator-associate).   The official description of this certification is as follows: The Azure Database Administrator implements and manages the operational aspects of cloud-native and hybrid data platform solutions built with Microsoft SQL Server and Microsoft Azure Data Services. The Azure Database Administrator uses a variety of methods and tools to perform day-to-day operations, including applying knowledge of using T-SQL for administrative management purposes. Cloud...Cloud, Cloud...Cloud...(SQL)...Cloud, Cloud, Cloud...by the way, SQL. Microsoft has been driving toward the cloud for a very long time - everything is "Cloud First" (developed in Azure before being retrofitted into on-premises products), and the company definitely tries to steer as much into the cloud as it can. I realize this is Microsoft's reality, and I have had some useful experiences using the cloud for Azure VMs and Azure SQL Database over the years...but... There is still an awful lot of the world running on physical machines - either directly or via a certain virtualization platform that starts with VM and rhymes with everywhere. As such, I can't believe Microsoft has bailed on actual SQL Server certifications...but it sure looks that way.  Maybe something shiny and new will come out of this; maybe there will be a new better, stronger, faster SQL Server certification in the near future - but the current lack of open discussion doesn't inspire hope. 
-- Looking at the Azure Database Administrator Associate certification, it requires a single exam (DP-300 https://docs.microsoft.com/en-us/learn/certifications/exams/dp-300) and is apparently "Associate" level.  Since the styling of certs is apparently changing (after all it isn't the MCxx Azure Database Administrator) I went to look at what Associate meant. Apparently there are Fundamental, Associate, and Expert level certifications in the new role-based certification setup, and there are currently only Expert-level certs for a handful of technologies, most of them Office and 365-related technologies. This means that for most system administrators - database and otherwise - there is nowhere to go beyond the "Associate" level - you can dabble in different technologies, but no way to be certified as an "Expert" by Microsoft in SQL Server, cloud or otherwise. (The one exception I could find for any sysadmins is the "Microsoft Certified: Azure Solutions Architect Expert" certification, which is all-around design and implement in Azure at a much broader level.) -- After reviewing all of this, I am already preparing for the Azure Database Administrator Associate certification via the DP-300 exam, and I am considering other options for broadening my experience, including Azure administrator certs and AWS administrator certs.  I will likely focus on Azure since my current role has more Azure exposure than AWS (although maybe that is a reason to go towards AWS and broaden my field...hmm...) If anything changes in the SQL Server cert world - some cool new "OMG we forgot we don't have a new SQL Server certification - here you go" announcement - I will let you know. The post Would You Pass the SQL Server Certifications Please? What Do You Mean We're Out? appeared first on SQLServerCentral.


PASS Summit 2020 Mental Health Presentation Eval Comments from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
5 min read
I first started presenting on mental health last December in Charlotte, NC.  I got some backlash at a couple of events from a few people on “me” being the person talking about it.  But I’ve gotten overwhelmingly more support than backlash.  I just want to share the comments I got from the 20 people who filled out evals at Summit.  If you notice, one person actually used something they learned with a family member that week.  I’m not going to worry about revealing scores (mine was the highest I’ve ever gotten), but if someone could please fix that darn vendor-related worded question so I can quit getting my score lowered while not advertising anything, it would be great. I would ask any managers out there reading this who have had to deal with employees with mental health issues to contact me.  I do get questions on the best way to approach an employee someone is concerned about, and I have given advice on how I would like to be approached, but I would like to hear how it looks from a manager’s perspective.  Just DM me on Twitter.  I don’t need any details on the person or particular situation, just how you approached the person with the issue. These are all the comments, unedited, that I received: This is important stuff to keep in mind both for oneself and when working with and watching out for others, personally and professionally. Thank you. I’m happy that people are starting to become more comfortable sharing their battles with mental illness and depression. Tracy eloquently shared her story in a way that makes us want to take this session back to spread to our colleagues. There definitely needs to be an end to the stigma surrounding mental health; sessions like Tracy’s are helping crack that barrier. VERY valuable session. Thanks! Thanks Tracy! That was a wonderful session and thank you for discussing the elephant in the room as the saying goes. I didn’t realize there are higher rates of mental health issues for us IT folks. I’ve also struggled with co-workers that didn’t understand and were not compassionate about what I was going through at the time, which made things harder. Thanks again! Rich This is a great session. It is good to remind ourselves that we are all human and need to focus on our mental health. Also I have known Tracy for awhile and I know that she is super talented and does so much to give back to not only PASS but other great causes too. Hearing about some of the challenges she has had helps to demonstrate that we are all more a like than we are different in that we all struggle with things from time to time. Also great use of pictures in the session. Having relevant pictures through out made the presentation speak louder for sure. Thanks for sharing your story, Tracy! valuable topic I admire Tracy’s strength for talking about what she has been through. Hopefully it opens the door for others to be able to speak more openly in the future. as far as the presentation itself, the slides were good and gave a good summary of the discussion. Thank you for speaking about this. It’s good to hear that we’re not alone in feeling stress. The list of resources in the slides is of great help. I really wish you had done a session like this with a health professional. It was okay to hear first hand experience but I think that insight from a mental health professional would have been much more helpful. It takes a lot of courage to approach and discuss this topic. This was a very good reminder to me to stop and remember it’s not all about deadlines etc. 
Some the statistics were very eye opening. I’ve been impacted by several suicides over the last five years and it is hard to understand and to understand how to help. It’s good to be reminded that just listening helps. Tracy is exceptionally brave; I appreciate her work to destigmatize the topic and provide practical and tangible advice. Much appreciated. Thanks Tracy. I was able to use some of the things you taught me to work through a mental health issue in my family yesterday and the results were excellent. Keep sharing! Thank you so much. Thank you for sharing your story and helping me realize how many people struggle with mental health in IT. Thank you for the pointers on how to help a friend. Thank you for the survival tips. This was the most valuable session of the whole conference for me! The post PASS Summit 2020 Mental Health Presentation Eval Comments first appeared on Tracy Boggiano's SQL Server Blog. The post PASS Summit 2020 Mental Health Presentation Eval Comments appeared first on SQLServerCentral.


Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
10 min read
In a previous post we went through how to set up a Kubernetes cluster on Raspberry Pis and then deploy Azure SQL Edge to it. In this post I want to go through how to configure an NFS server so that we can use it to provision persistent volumes in the Kubernetes cluster. Once again, doing this on a Raspberry Pi 4 with an external USB SSD. The kit I bought was:
1 x Raspberry Pi 4 Model B – 2GB RAM
1 x SanDisk Ultra 16 GB microSDHC Memory Card
1 x SanDisk 128 GB Solid State Flash Drive
The initial set up steps are the same as in the previous posts, but we’re going to run through them here (as I don’t just want to link back to the previous blog). So let’s go ahead and run through setting up a Raspberry Pi NFS server and then deploying persistent volumes for Azure SQL Edge.

Flashing the OS
The first thing to do is flash the SD card using Rufus. Grab the Ubuntu 20.04 ARM image from the website and flash all the cards. Once that’s done, connect the Pi to an internet connection, plug in the USB drive, and then power the Pi on.

Setting a static IP
Once the Pi is powered on, find its IP address on the network. Nmap can be used for this:
nmap -sP 192.168.1.0/24
Or use a Network Analyzer application on your phone (I find the output of nmap can be confusing at times). Then we can ssh to the Pi:
ssh pi@192.168.1.xx
And then change the password of the default ubuntu user (default password is ubuntu). Ok, now we can ssh back into the Pi and set a static IP address. Edit the file /etc/netplan/50-cloud-init.yaml: eth0 is the network the Pi is on (confirm with ip a), 192.168.1.160 is the IP address I’m setting, 192.168.1.254 is the gateway on my network, and 192.168.1.5 is my dns server (my pi-hole). There is a warning there about changes not persisting, but they do. Now that the file is configured, we need to run:
sudo netplan apply
Once this is executed it will break the current shell; wait for the Pi to come back on the network on the new IP address and ssh back into it.

Creating a custom user
Let’s now create a custom user with sudo access, and disable the default ubuntu user. To create a new user:
sudo adduser dbafromthecold
Add it to the sudo group:
sudo usermod -aG sudo dbafromthecold
Then log out of the Pi and log back in with the new user. Once in, disable the default ubuntu user:
sudo usermod --expiredate 1 ubuntu
Cool! So we’re good to go to set up key based authentication into the Pi.

Setting up key based authentication
In the post about creating the cluster we already created an ssh key pair to use to log into the Pi, but if we needed to create a new key we could just run:
ssh-keygen
And follow the prompts to create a new key pair. Now we can copy the public key to the Pi. Log out of the Pi and navigate to the location of the public key:
ssh-copy-id -i ./raspberrypi_k8s.pub dbafromthecold@192.168.1.160
Once the key has been copied to the Pi, add an entry for the Pi into the ssh config file:
Host pi-nfs-server
    HostName 192.168.1.160
    User dbafromthecold
    IdentityFile ~/raspberrypi_k8s
To make sure that’s all working, try logging into the Pi with:
ssh dbafromthecold@pi-nfs-server

Installing and configuring the NFS server
Great! Ok, now we can configure the Pi. First thing, let’s rename it to pi-nfs-server and bounce it:
sudo hostnamectl set-hostname pi-nfs-server
sudo reboot
Once the Pi comes back up, log back in and install the NFS server itself:
sudo apt-get install -y nfs-kernel-server
Now we need to find the USB drive on the Pi so that we can mount it:
lsblk
And here you can see the USB drive as sda. Another way to find the disk is to run:
sudo lshw -class disk
So we need to get some more information about /dev/sda in order to mount it:
sudo blkid /dev/sda
Here you can see the UUID of the drive and that it’s got a type of NTFS. Now we’re going to create a folder to mount the drive (/mnt/sqledge):
sudo mkdir /mnt/sqledge/
And then add a record for the mount into /etc/fstab using the UUID we got earlier for the drive:
sudo vim /etc/fstab
And add (changing the UUID to the value retrieved earlier):
UUID=242EC6792EC64390 /mnt/sqledge ntfs defaults 0 0
Then mount the drive to /mnt/sqledge:
sudo mount -a
To confirm the disk is mounted:
df -h
Great! We have our disk mounted. Now let’s create some subfolders for the SQL system, data, and log files:
sudo mkdir /mnt/sqledge/{sqlsystem,sqldata,sqllog}
Ok, now we need to modify the exports file so that the server knows which directories to share. Get your user and group ID using the id command, then edit the /etc/exports file:
sudo vim /etc/exports
Add the following to the file:
/mnt/sqledge *(rw,all_squash,insecure,async,no_subtree_check,anonuid=1001,anongid=1001)
N.B. – Update the final two numbers with the values from the id command. A full breakdown of what’s happening in this file is detailed here. And then update:
sudo exportfs -ra

Configuring the Kubernetes nodes
Each node in the cluster needs to have the NFS tools installed:
sudo apt-get install nfs-common
And each one will need a reference to the NFS server in its /etc/hosts file (as on k8s-node-1).

Creating a persistent volume
Excellent stuff! Now we’re good to go to create three persistent volumes for our Azure SQL Edge pod:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqlsystem-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqlsystem"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqldata"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqllog-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqllog"
What this file will do is create three persistent volumes, 1GB in size (although that will kinda be ignored as we’re using NFS shares), in the ReadWriteOnce access mode, pointing at each of the folders we’ve created on the NFS server. We can either create the file and deploy or run (do this locally with kubectl pointed at the Pi K8s cluster):
kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/da751e8c93a401524e4e59266812dc63/raw/d97c0a78887b6fcc41d0e48c46f05fe48981c530/azure-sql-edge-pv.yaml
To confirm:
kubectl get pv
Now we can create three persistent volume claims for the persistent volumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqlsystem-pvc
spec:
  volumeName: sqlsystem-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata-pvc
spec:
  volumeName: sqldata-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqllog-pvc
spec:
  volumeName: sqllog-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi
Each one has the same AccessMode and size as the corresponding persistent volume. Again, we can create the file and deploy or just run:
kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/0c8fcd74480bba8455672bb5f66a9d3c/raw/f3fdb63bdd039739ef7d7b6ab71196803bdfebb2/azure-sql-edge-pvc.yaml
And confirm with:
kubectl get pvc
The PVCs should all have a status of Bound, meaning that they’ve found their corresponding PVs. We can confirm this with:
kubectl get pv

Deploying Azure SQL Edge with persistent storage
Awesome stuff! Now we are good to go and deploy Azure SQL Edge to our Pi K8s cluster with persistent storage! Here’s the yaml file for Azure SQL Edge:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqledge-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sqledge
  template:
    metadata:
      labels:
        app: sqledge
    spec:
      volumes:
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem-pvc
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata-pvc
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog-pvc
      containers:
        - name: azuresqledge
          image: mcr.microsoft.com/azure-sql-edge:latest
          ports:
            - containerPort: 1433
          volumeMounts:
            - name: sqlsystem
              mountPath: /var/opt/mssql
            - name: sqldata
              mountPath: /var/opt/sqlserver/data
            - name: sqllog
              mountPath: /var/opt/sqlserver/log
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Testing1122"
            - name: MSSQL_AGENT_ENABLED
              value: "TRUE"
            - name: MSSQL_COLLATION
              value: "SQL_Latin1_General_CP1_CI_AS"
            - name: MSSQL_LCID
              value: "1033"
            - name: MSSQL_DATA_DIR
              value: "/var/opt/sqlserver/data"
            - name: MSSQL_LOG_DIR
              value: "/var/opt/sqlserver/log"
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
So we’re referencing our three persistent volume claims and mounting them as:
sqlsystem-pvc – /var/opt/mssql
sqldata-pvc – /var/opt/sqlserver/data
sqllog-pvc – /var/opt/sqlserver/log
We’re also setting environment variables to set the default data and log paths to the paths mounted by the persistent volume claims. To deploy:
kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/92ddea343d525f6c680d9e3fff4906c9/raw/4d1c071e9c515266662361e7c01a27cc162d08b1/azure-sql-edge-persistent.yaml
To confirm:
kubectl get all
All looks good! To dig in a little deeper:
kubectl describe pods -l app=sqledge

Testing the persistent volumes
But let’s not take Kubernetes’ word for it! Let’s create a database and see it persist across pods. So expose the deployment:
kubectl expose deployment sqledge-deployment --type=LoadBalancer --port=1433 --target-port=1433
Get the external IP of the service created (provided by MetalLB, configured in the previous post):
kubectl get services
And now create a database with the mssql-cli:
mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "CREATE DATABASE [testdatabase];"
Confirm the database is there:
mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;"
Confirm the database files:
mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "USE [testdatabase]; EXEC sp_helpfile;"
We can even check on the NFS server itself:
ls -al /mnt/sqledge/sqldata
ls -al /mnt/sqledge/sqllog
Ok, so the “real” test. Let’s delete the existing pod in the deployment and see if the new pod has the database:
kubectl delete pod -l app=sqledge
Wait for the new pod to come up:
kubectl get pods -o wide
And then see if our database is in the new pod:
mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;"
And that’s it! We’ve successfully built a Pi NFS server to deploy persistent volumes to our Raspberry Pi Kubernetes cluster so that we can persist databases from one pod to another! Phew! Thanks for reading! The post Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster appeared first on SQLServerCentral.

5 Things You Should Know About Azure SQL from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
5 min read
Azure SQL offers up a world of benefits that can be captured by consumers if implemented correctly.  It will not solve all your problems, but it can solve quite a few of them. When speaking to clients I often run into misconceptions as to what Azure SQL can really do. Let us look at a few of these to help eliminate any confusion.

You can scale easier and faster
Let us face it, I am old.  I have been around the block in the IT realm for many years.  I distinctly remember the days where scaling server hardware was a multi-month process that usually resulted in the fact that the resulting scaled hardware was already out of date by the time the process was finished.  With the introduction of cloud providers, the ability to scale vertically or horizontally can usually be accomplished within a few clicks of the mouse.  Often, once initiated, the scaling process is completed within minutes instead of months.  This is multiple orders of magnitude better than the method of having to procure hardware for such needs. The added benefit of this scaling ability is that you can then scale down when needed to help save on costs.   Just like scaling up or out, this is accomplished with a few mouse clicks and a few minutes of your time.

It is not going to fix your performance issues
If you currently have performance issues with your existing infrastructure, Azure SQL is not going to necessarily solve your problem.  Yes, you can hide the issue with faster and better hardware, but really the issue is still going to exist, and you need to deal with it.  Furthermore, moving to Azure SQL could introduce additional issues if the underlying performance issue is not addressed beforehand.   Make sure to look at your current workloads and address any performance issues you might find before migrating to the cloud.  Furthermore, ensure that you understand the available service tiers that are offered for the Azure SQL products.   By doing so, you’ll help guarantee that your workloads have enough compute resources to run as optimally as possible.

You still must have a DR plan
If you have ever seen me present on Azure SQL, I’m quite certain you’ve heard me mention that one of the biggest mistakes you can make when moving to any cloud provider is not having a DR plan in place.  There are a multitude of ways to ensure you have a proper disaster recovery strategy in place regardless of which Azure SQL product you are using.  Platform as a Service (Azure SQL Database or SQL Managed Instance) offers automatic database backups, which solves one DR issue for you out of the gate.  PaaS also offers geo-replication and automatic failover groups for additional disaster recovery solutions which are easily implemented with a few clicks of the mouse. When working with SQL Server on an Azure Virtual Machine (which is Infrastructure as a Service), you can perform database backups through native SQL Server backups or tools like Azure Backup. Keep in mind that high availability is baked into the Azure service at every turn.  However, high availability does not equal disaster recovery, and even cloud providers such as Azure do incur outages that can affect your production workloads.  Make sure to implement a disaster recovery strategy and, furthermore, practice it.

It could save you money
When implemented correctly, Azure SQL could indeed save you money in the long run. However, it all depends on what your workloads and data volume look like. 
For example, due to the ease of scalability Azure SQL offers (even when scaling virtual machines), secondary replicas of your data could be kept at a lower service tier to minimize costs.  In the event a failover needs to occur, you could then scale the resource to a higher performing service tier to ensure workload compute requirements are met. Azure SQL Database offers a serverless tier that provides the ability for the database to be paused.  When the database pauses, you will not be charged for any compute consumption.  This is a great resource for unpredictable workloads. Saving costs in any cloud provider implies knowing what options are available as well as continued evaluation of which options would best suit your needs.

It is just SQL
Azure SQL is not magical, quite honestly.  It really is just the same SQL engine you are used to with on-premises deployments.  The real difference is how you engage with the product, and sometimes that can be scary if you are not used to it.  As a self-proclaimed die-hard database administrator, it was daunting for me when I started to learn how Azure SQL would fit into modern day workloads and potentially help save organizations money.  In the end, though, it’s the same product that many of us have been using for years.

Summary
In this blog post I’ve covered five things to know about Azure SQL.  It is a powerful product that can help transform your own data ecosystem into a more capable platform to serve your customers for years to come.  Cloud is definitely not a fad and is here to stay.  Make sure that you expand your horizons and look upward because that’s where the market is going. If you aren’t looking at Azure SQL currently, what are you waiting for?  Just do it. © 2020, John Morehouse. All rights reserved. The post 5 Things You Should Know About Azure SQL first appeared on John Morehouse. The post 5 Things You Should Know About Azure SQL appeared first on SQLServerCentral.

Inspector 2.4 now available from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
2 min read
All the changes for this release can be found on the GitHub project page. Mainly bug fixes this time around, but we have also added new functionality:

Improvements
#263 If you centralize your servers’ collections into a single database, you may be interested in the latest addition: we added the ability to override most of the global settings for thresholds found in the Settings table on a server by server basis, so you are no longer locked to a single threshold for all the servers’ information contained within the database. Check out the GitHub issue for more details regarding the change, or check out the Inspector user guide.

Bug Fixes
#257 Fixed a bug where the Inspector auto update PowerShell function was incorrectly parsing non-UK date formats; download the latest InspectorAutoUpdate.psm1 to get the update.
#261 We noticed that ModuleConfig with ReportWarningsOnly = 3 still sent a report even if there were no Warnings/Advisories present, so we fixed that.
#256 If you use the PowerShell collection and one of your servers had a blank Settings table, that server’s collection was being skipped; shame on us! We fixed this so that the Settings table is re-synced and collection continues.
#259 The BlitzWaits custom module was highlighting wait types from your watched wait types table even when the threshold was not breached. A silly oversight, but we got it fixed.
#265 The Backup space module was failing if access was denied on the backup path; we handle this gracefully now, so you will see a warning on your report if this occurs.

The post Inspector 2.4 now available appeared first on SQLServerCentral.

[Video] Azure SQL Database – Import a Database from Blog Posts - SQLServerCentral

Anonymous
03 Dec 2020
1 min read
Quick video showing you how to use a BACPAC to “import” a database into Azure (via a storage container). The post [Video] Azure SQL Database – Import a Database appeared first on SQLServerCentral.

SQL Database Corruption, how to investigate root cause? from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
5 min read
Introduction:
In this article, we will discuss MS SQL Server database corruption. First, we need to understand what causes corruption. In nearly all scenarios of SQL Server database corruption, the root cause sits at the IO subsystem level – a problem with the drives, controllers, and possibly even drivers – and the specific root causes can vary widely (simply due to the sheer complexity involved in dealing with magnetic storage). The main thing to remember about disk systems is that all major operating systems ship with the equivalent of a disk-check utility (CHKDSK) that can scan for bad sectors, bad entries, and other storage issues that can infiltrate storage environments.

Summary:
If you are a beginner with Microsoft SQL Server, you might try the following things to deal with database corruption, and these tricks will not help you out:

Restarting SQL Server. This just postpones the issue and causes the system to run through crash recovery on the databases. Not to mention, in most systems you will not be able to do this right away, which holds up the issue further.

Clearing the procedure cache.

Detaching and moving the database to a new server. When you do this you will feel pain, because SQL Server will fail to attach the database on the second server and on your primary. At this moment you have to look into a "hack attach", and I can understand it can be a very painful experience.

Knowing what will help solve the problem, and what can't, requires that you are prepared for these kinds of problems ahead of time. That means creating a database that is corrupt and trying everything to recover that database with the least data loss. You may read this: How to Reduce the Risk of SQL Database Corruption

Root cause analysis:
Root cause analysis is a crucial part of this process and should not be overlooked, regardless of how you recover the data. It is a vital step in preventing the problem from occurring again, and probably sooner than you think. In my experience, once corruption happens, it is bound to happen again if no action is taken to rectify the problem, and it often seems to be worse the second time. Now, I'd suggest that even if you think you know the reason for the corruption (e.g. a power outage with no UPS), investigate the following sources anyway. Perhaps the outage was just the trigger and there were warning signs already occurring. To begin, I always suggest these places to look:

Memory and disk diagnostics, to make certain there are no problems with the current hardware
SQL Server error logs
Windows event viewer
While rare, check with your vendors to see if they have had issues with the software you're using
Software errors – believe it or not, Microsoft has been known to cause corruption. See KB2969896. This is where opening tickets with Microsoft is also helpful.

The event viewer and SQL Server error logs can be reviewed together, but I suggest handing these out to the system administrators, as they regularly have more manpower on their team to review them.
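To make those checks concrete, here is a minimal T-SQL sketch of the first things most people run (the database name is illustrative); it uses only standard commands and the msdb.dbo.suspect_pages table.

-- full consistency check, reporting every error rather than stopping at the first
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- pages the engine has already flagged as suspect (checksum errors, torn pages, etc.)
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;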
Helpful Tip:
In fact, even once you know what the problem is, I always suggest opening a ticket with Microsoft, because they will not only provide an additional set of eyes on the problem but also their expertise on the topic. Additionally, Microsoft can and will assist you with the next steps to help find the root cause of the problem and where the corruption originated from.

Corruption problems:
If the database is corrupt, it is possible to repair the database using SQL recovery software. This software will allow repairing the database in case of corruption.

Conclusion:
So finally, after this article, we have learned many things about database corruption and how to resolve a corrupt database. Most of these situations are quite common, and now you can solve this kind of common corruption. In time, when you finish this series, the goal will be that when you find out you have corruption, it is coming from your alerts, not an end user, and you will have a procedure to let your managers know where you sit and what the next steps are. Because of this, you will get a lot of benefits, and it also allows you to work without having someone breathing down your neck frequently. www.PracticalSqlDba.com The post SQL Database Corruption, how to investigate root cause? appeared first on SQLServerCentral.

Basic JSON Queries–#SQLNewBlogger from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
2 min read
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers. Recently I saw Jason Horner do a presentation on JSON at a user group meeting. I’ve lightly looked at JSON in some detail, and I decided to experiment with this.

Basic Querying of a Document
A JSON document is text that contains key-value pairs, with colons used to separate them, and grouped with curly braces. Arrays are supported with brackets, values separated by commas, and everything that is text is quoted with double quotes. There are a few other rules, but that’s the basic structure. Things can nest, and in SQL Server, we store the data as character data. So let’s create a document:

DECLARE @json NVARCHAR(1000) = N'{
  "player": {
             "name" : "Sarah",
             "position" : "setter"
            },
  "team" : "varsity"
}'

This is a basic document, with two key values (player and team) and one set of additional keys (name and position) inside the first key. I can query this with the code:

SELECT JSON_VALUE(@json, '$.player.name') AS PlayerName;

This returns the scalar value from the document. In this case, I get “Sarah”. I need to get the path correct here for the value. Note that I start with a dot (.) as the root and then traverse the tree. A few other examples show the paths used to get to data in the document. In a future post, I’ll look in more detail at how this works.

SQLNewBlogger
After watching the presentation, I decided to do a little research and experiment. I spent about 10 minutes playing with JSON and querying it, and then another 10 writing this post. This is a great example of picking up the beginnings of a new skill, and the start of a blog series that shows how I can work with this data. The post Basic JSON Queries–#SQLNewBlogger appeared first on SQLServerCentral.
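As a small addendum to the post above (my own sketch, not from the original), JSON_QUERY returns a JSON fragment rather than a scalar, and OPENJSON shreds a document into rows; both run against the same @json variable declared in the example, so they need to execute in the same batch as the DECLARE.

-- returns another scalar from the document ("setter")
SELECT JSON_VALUE(@json, '$.player.position') AS PlayerPosition;

-- returns the nested player object as a JSON fragment
SELECT JSON_QUERY(@json, '$.player') AS PlayerObject;

-- shreds the top level of the document into key/value/type rows
SELECT [key], [value], [type] FROM OPENJSON(@json);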