
Tech News - Databases

233 Articles

PowerShell Arrays and Hash Tables–#SQLNewblogger from Blog Posts - SQLServerCentral

Anonymous
09 Nov 2020
2 min read
I was watching the GroupBy talk the other day and noticed that Cláudio Silva was using arrays, or what appeared to be arrays, in his talk. That was an interesting technique, one that I haven't used very much. A day later, I ran into an explanation on dbatools.io that showed this code:

PS C:\> $columns = @{
>> Text = 'FirstName'
>> Number = 'PhoneNumber'
>> }

That didn't quite seem like what I wanted, so I decided to investigate more. I looked up PowerShell arrays, and that wasn't what I wanted. Those are a list of values, as in:

$a = 1, 2, 3

Which gives me this:

>> $a
1
2
3

Useful, but not for my purposes. I need to map things together, which means a hash table.

Hash Tables

It turns out I need a hash table. This is a key-value pair that lets me pick a name and a value and store them together. The way I construct these is with the @{} structure. Inside, I set semicolon-separated pairs with the name=value syntax. Here's an example I used:

$ColList = @{Date="EventDate"; Event="Event"}

Here I map two keys (Date and Event) to two values (EventDate and Event). For the cmdlet I am using, this allows me to map these two columns together. When I need a value, I can use $variable.key to get the value back. I assume this is what the SqlBulkCopy cmdlet uses, which is what dbatools wraps. I ended up passing this $ColList hash table in for the -ColumnMap parameter.

SQLNewBlogger

A quick write-up that I used to solve a problem. I had some issues figuring this out, and some searching and experimenting got me a little better understanding of what was happening. After about 30 minutes of work, I took 10 minutes to type this up and explain it to myself. A good example of what you could add to your blog, showing how you use this in your work.

The post PowerShell Arrays and Hash Tables–#SQLNewblogger appeared first on SQLServerCentral.
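The key lookup and the -ColumnMap hand-off described above can be sketched like this (the dbatools call at the end is hypothetical — the instance, database, and table names are placeholders, not values from the original post):

```powershell
# Hash table: keys on the left, values on the right, pairs separated by semicolons
$ColList = @{ Date = "EventDate"; Event = "Event" }

# $variable.key (or index syntax) returns the mapped value
$ColList.Date       # EventDate
$ColList["Event"]   # Event

# Hypothetical hand-off to dbatools (server/table names are illustrative only):
# Copy-DbaDbTableData -SqlInstance src -Destination dest `
#     -Database Events -Table dbo.Staging -DestinationTable dbo.History `
#     -ColumnMap $ColList
```

The point of the hash table here is exactly the mapping the post describes: each source column name (the key) is paired with the destination column name (the value).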


PASS Summit 2020 – My experience from Blog Posts - SQLServerCentral

Anonymous
15 Nov 2020
8 min read
This was the first year in 16 years for me that there has been a fall season without the PASS Summit to go to. Every year during the fall, it has been a soul-sustaining practice to pack up and head to the Summit (typically in Seattle) and spend a week with friends – learning, sharing one another's stories and just enjoying the warmth and comfort of being with people who care. This year, thanks to Covid, that was not to be. We had a virtual summit instead, and most of us were skeptical about how it would work out.

For me personally, this year has been THE most challenging of my adult life. Health issues, losses in my immediate family due to Covid and just the emotional stress of a hermit-like existence with no relief in sight were beginning to get to me. The virtual summit came as a welcome relief. Below is my experience.

PRECONS: I signed up for two precons – Day 1 on PowerShell with Rob Sewell, and Day 2 on Execution Plan Deep Dive with Hugo Kornelis. Both were excellent in terms of quality and rendering. BUT, I lost 3 hours of Rob's precon because he was in a different timezone (I knew this when I signed up but thought the recordings would make up for it), and Hugo's was packed with so much content that a rewatch (or multiple rewatches) would have helped. But the recording was limited to the Thursday of the summit. I am not sure how anyone thought people attending the summit would find time to rewatch anything in this painfully short interval. I certainly could not, and my learning was limited to the first attendance. I will definitely reconsider putting $$ down on a precon if this is the way it continues to be done.

SUMMIT DAY 1: It felt unusual/odd to start a day without a keynote, but I got used to it and attended two great classes in the morning – Raoul Illayas on 'Data Modernization: how to run a successful modernization project', followed by David Klee's class on '10 Cloudy questions to ask before migrating your SQL Server'. Both classes were excellent, with great Q&A, and well moderated. This was followed by the afternoon keynote. It started with my friend and co-chapter lead Tracy Boggiano winning the Passion Award – a heartwarming moment. The Passion Award made a huge, positive impact on my life, and I was a bit sad she did not get to experience it in person. But she did say a few words, and it was received really well by everyone in the community. This was followed by several Microsoft folks demoing cool product features, led by Rohan Kumar. It was fun and interesting. I attended two sessions in the afternoon: 'What is new in SQL Server tools' by Vicky Harp and her team at Microsoft, followed by 'Making a real cloud transformation, not just a migration' by Greg Low. Both were outstanding classes. In the evening I had to moderate a birds-of-a-feather bubble on mentoring – I had a good chat with a few friends who showed up, and made a couple of new friends as well. Overall it was a worthy day of learning and networking, with few real glitches to worry about.

SUMMIT DAY 2: I started this day by attempting to reach out to a few friends – people I see in person and not on social media. I sent them a message via the messaging option but did not hear back. I was disappointed with how this worked. I was able to catch up with a few friends accidentally – because they were in the same class or same networking bubble – but intentionally reaching out was really hard and did not seem to work very well. I also visited a few vendor rooms online – vendors are the reason the virtual summit is possible. Vendor visits are always a big part of my in-person summit attendance, so I wanted to make sure they were thanked. I got good responses for my visits at Redgate and Sentry One. Some of the other vendors did not care to respond very much (maybe they did not have anyone online). I also attended two classes: 'Normalization beyond third normal form' with Hugo Kornelis and 'Azure SQL: Path to an Intelligent Database' with Joe Sack and Pedro Lopez. Both classes were outstanding in content. The afternoon's keynote was again from Microsoft – it seemed to have content but was a bit dry and difficult to follow along. I think given how difficult this year was in general, we can forgive Microsoft this. The absolute highlight of Day 2, to me, was the Diversity in Data panel discussion that I was part of – with some amazing women: Tracy Boggiano, Hope Foley, Anna Hoffman, Jess Pomfret and DeNisha Malone. I have been on a few panels, but this was truly well moderated by Rebecca Ferguson from PASS HQ and well attended by a number of people in the virtual summit, including several of my own colleagues. It was a true honor to do it.

SUMMIT DAY 3: The last day arrived, albeit too soon. I logged in early and attended a few sessions ('Execution plans: where do I start?' by Hugo Kornelis, 'Getting Started with PowerShell as a DBA' by Ben Miller). The 'Diversity, Equity and Inclusion' keynote by Bari Miller was amazing – I am planning to revisit it and make more notes, probably in a separate blog post. In the afternoon I attended 'Splitting up your work in chunks' by Erland Sommarskog, and 'The good, bad and ugly of migrating SQL Server to public clouds' by Allan Hirt. Halfway through Allan's class (which was outstanding, as always) I started to feel really tired/brain-fried and wanted a break. There was a networking session open for late evening, so I hopped on it and had a lovely chat with many friends I could not see in person. That ended a very fun week.

Following are what worked well and what didn't:

POSITIVES
1. The platform was relatively easy to navigate – finding a class was really easy.
2. Chat rooms in classes were a lot of fun and good places to find friends unexpectedly as well. All the sessions I attended personally, except a few on Friday, were very well moderated.
3. PASS HQ and Board members were freely available, and it was really easy to find and have chats with them throughout the week if one desired.
4. Replaying sessions was a really good bonus treat – I wish the same were true for precons as well.
5. Having the recordings available on an immediate basis is absolutely great – I am watching a few this weekend and catching what I missed in class. This would be very hard to do if we had to wait a couple of months for the recordings.

NEGATIVES
1. I am not sure how any vendor would make any gains out of this. Traffic in vendor rooms, when I visited, seemed low, and in some cases vendor reps did not even bother answering chat messages. (I don't blame them if the traffic was not much.)
2. Transcribing/subtitling was a total mess. Granted, it made for a lot of very fun moments in many classes, but the purpose of it is to help the hearing impaired; I don't think it lived up to that cause at all.
3. Precons – especially the ones given by experts – are, to me, not worth it with the replay option gone in such a short time. I would have appreciated having the weekend to replay both my precons, but I didn't.
4. Finding individuals to chat up was insanely hard. This is especially true of people not on social media. I was very disappointed that I never heard back from many people to whom I sent messages via the platform. I don't think they knew where to check for these messages; I certainly didn't. I had to largely depend on people being on social media or showing up in group chats (many did).

Overall, it was a worthy experience for me and especially uplifting in a time like this. I am hoping that next year PASS can do a blend of live and online classes and we can all make the best of both worlds. Thanks to everyone for attending and supporting the organization.

The post PASS Summit 2020 – My experience appeared first on SQLServerCentral.


The rule of three, SQL Server on Linux edition from Blog Posts - SQLServerCentral

Anonymous
10 Nov 2020
1 min read
When it comes to Microsoft products, the rule of three — at least as far as I’m concerned — is where you can accomplish the same task in three different ways. The go-to example is Microsoft Word, where you can use the ribbon toolbar, a keyboard shortcut, or the context menu to perform the same-> Continue reading The rule of three, SQL Server on Linux edition The post The rule of three, SQL Server on Linux edition appeared first on Born SQL. The post The rule of three, SQL Server on Linux edition appeared first on SQLServerCentral.


Why Aren’t You Automating Database Deployments? from Blog Posts - SQLServerCentral

Anonymous
23 Nov 2020
1 min read
Building out processes and mechanisms for automated code deployments and testing can be quite a lot of work and isn’t easy. Now, try the same thing with data, and the challenges just shot through the roof. Anything from the simple fact that you must maintain the persistence of the data to data size to up […] The post Why Aren’t You Automating Database Deployments? appeared first on Grant Fritchey. The post Why Aren’t You Automating Database Deployments? appeared first on SQLServerCentral.


Migrating SQL Server container images to the Github Container Registry from Blog Posts - SQLServerCentral

Anonymous
29 Oct 2020
2 min read
A couple of months ago Docker announced that they would be implementing a 6-month retention policy for unused images on the Docker Hub. This was due to kick in on the 1st of November but has now been pushed back until mid-2021. I've had multiple Windows SQL Server container images up on the Docker Hub for years now. It's been a great platform and I'm very thankful to them for hosting my images. That being said, I want to make sure that the images that I've built are always going to be available for the community, so I have pushed my SQL Server images to the Github Container Registry.

In the Docker Hub I have the following public SQL Server images:

dbafromthecold/sqlserver2012express:rtm
dbafromthecold/sqlserver2012dev:sp4
dbafromthecold/sqlserver2014express:rtm
dbafromthecold/sqlserver2014dev:sp2
dbafromthecold/sqlserver2016dev:sp2

These images won't be removed (by myself) and I will update this post in the event of them being removed. But they are now also available on the Github Container Registry:

ghcr.io/dbafromthecold/sqlserver2012:express
ghcr.io/dbafromthecold/sqlserver2012:dev
ghcr.io/dbafromthecold/sqlserver2014:express
ghcr.io/dbafromthecold/sqlserver2014:dev
ghcr.io/dbafromthecold/sqlserver2016:dev

Bit of a disclaimer with these images…they're LARGE! The 2012 dev image is ~20GB, which is gigantic for a container image. So if you're going to use them (for dev and test only, please) you'll need to pre-pull them before running any containers. Thanks for reading!

The post Migrating SQL Server container images to the Github Container Registry appeared first on SQLServerCentral.
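Pre-pulling just means fetching the image ahead of time so `docker run` isn't stuck behind a ~20GB download. A minimal sketch using one of the image names listed above (note these are Windows container images, so the host must be running Docker in Windows-container mode):

```shell
# Fetch the (large) image ahead of time
docker pull ghcr.io/dbafromthecold/sqlserver2014:dev

# Confirm it landed in the local image cache before starting any containers
docker images ghcr.io/dbafromthecold/sqlserver2014
```

Once the pull has completed, container start-up only pays the cost of creating the container, not the download.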


T-SQL Tuesday Retrospective #005: Reporting from Blog Posts - SQLServerCentral

Anonymous
25 Nov 2020
1 min read
A few weeks ago, I began answering every single T-SQL Tuesday from the beginning. This week it’s the fifth entry, and on April 5th, 2010, Aaron Nelson invited us to write about reporting. You can visit the previous entries here: T-SQL Tuesday #001 – Date/Time Tricks T-SQL Tuesday #002 – A Puzzling Situation T-SQL Tuesday-> Continue reading T-SQL Tuesday Retrospective #005: Reporting The post T-SQL Tuesday Retrospective #005: Reporting appeared first on Born SQL. The post T-SQL Tuesday Retrospective #005: Reporting appeared first on SQLServerCentral.

Pro Microsoft Power Platform: Solution Building for the Citizen Developer from Blog Posts - SQLServerCentral

Anonymous
04 Nov 2020
2 min read
Over the last several months a team of excellent authors, including myself, have been writing a very exciting new book about Microsoft's Power Platform. We approached the publishing company Apress with an idea to produce a book that really tells the full story of how the Power Platform works together. As I'm sure you know, the Power Platform is actually four tools in one: Power Apps, Power Automate, Power BI and Power Virtual Agents. We found there were few books on the market that attempted to tell this full story. This book is designed for the "Citizen Developer," to help you feel confident in developing solutions that leverage the entire Power Platform. Are you a Citizen Developer? Citizen Developers are often business users with little or no coding experience who solve problems using technologies usually approved by IT. The concept of business users solving their own problems is not new; what is new is the concept of doing it with IT's blessing. Organizations have realized the power of enabling Citizen Developers to solve smaller-scale problems so IT can focus on larger, more difficult problems. I hope you enjoy this new book and find it helpful in your Power Platform journey! The post Pro Microsoft Power Platform: Solution Building for the Citizen Developer appeared first on SQLServerCentral.


How do I tell what compression level my tables/indexes are? from Blog Posts - SQLServerCentral

Anonymous
17 Nov 2020
2 min read
It's been a while since I worked with compression, and the other day I needed to check which of my indexes were compressed and which weren't. Now, I knew the information wasn't going to be in sys.tables, and I couldn't find it in sys.indexes or INDEXPROPERTY(). I'll be honest, it had me stumped for a little bit. Until I remembered something! Compression isn't done at the table or even the index level. It's done at the partition level.

Something important to remember is that every table has at least one entry in sys.indexes, although in the case of a heap it's just the unindexed table. So in a way you could say that every table has an index. Well, every index has at least one partition. If you haven't deliberately partitioned the index, it's just the whole index.

Why would you want to compress just certain partitions? Well, remember that one reason for partitioning is to separate out older, less used data from newer, more frequently accessed data. You might decide to use page compression, the strongest but slowest compression, on your oldest data to conserve space on data you aren't going to have to decompress very often. You might then have a group of data that you access a bit more often where you use row compression, which saves some space and takes a bit less CPU to undo. And on your current/more frequently accessed data you don't compress the data at all.

Anyway, back on point. The system view sys.partitions has the information we are looking for. This query has the table and index names along with the partition information.

SELECT o.name, i.name, p.*
FROM sys.partitions p
JOIN sys.indexes i
  ON i.object_id = p.object_id
  AND i.index_id = p.index_id
JOIN sys.objects o
  ON i.object_id = o.object_id

The post How do I tell what compression level my tables/indexes are? appeared first on SQLServerCentral.
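The compression level itself lives in the data_compression_desc column of sys.partitions. A sketch that narrows the author's query down to just that information (the column aliases and the is_ms_shipped filter are my own additions, not part of the original post):

```sql
-- data_compression_desc reports NONE, ROW, PAGE (or columnstore variants)
-- per partition; an unpartitioned index shows a single row.
SELECT o.name AS table_name,
       i.name AS index_name,
       p.partition_number,
       p.data_compression_desc
FROM sys.partitions p
JOIN sys.indexes i
  ON i.object_id = p.object_id
 AND i.index_id  = p.index_id
JOIN sys.objects o
  ON o.object_id = p.object_id
WHERE o.is_ms_shipped = 0          -- skip system objects
ORDER BY o.name, i.index_id, p.partition_number;
```

Because the granularity is the partition, a single index can legitimately show different compression levels on different rows of this result.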


Daily Coping 28 Oct 2020 from Blog Posts - SQLServerCentral

Anonymous
28 Oct 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to plan a fun or exciting activity to look forward to. I’ve got a couple, but in the short term, my wife and I decided to take advantage of a gift from Redgate. For the 21st birthday of the company, everyone got a voucher for an “experience” in their area. In looking over the items, we decided to do a Wine and Painting night with our kids. They are artistic, and we like wine. Actually, my son can drink wine as well, so 3 of us will enjoy wine, with 2 painting. I’m looking forward to the night we can do this. The post Daily Coping 28 Oct 2020 appeared first on SQLServerCentral.


Daily Coping 18 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
18 Nov 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. Today's tip is to be curious: learn about a new topic or an inspiring idea. I try to learn often. A while ago, my daughter had some exposure to welding and wanted to take a class. We looked around and found one at a local shop. We booked it, and a few days ago, we went and spent a few hours welding. A neat experience – I learned a lot, and I think we'll invest in one soon. I also learned I'm not very good at this and need a lot of practice. The post Daily Coping 18 Nov 2020 appeared first on SQLServerCentral.

The Silliness of a Group from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
2 min read
Recently we had a quick online tutorial for Mural, a way of collaborating online with a group. It's often used for design, but it can be used for brainstorming and more. There are templates for standups, business models, roadmaps, and more. Anyway, we had a designer showing a bunch of others how to do this: some product developers, team leads, advocates, and more. During the session, as we were watching, we were in a live mural where we could add items. I added a post-it with "Steve's Note" on it, just to get a feel. I also added a photo I'd taken. Before long, the group chimed in, especially when the host misidentified Phoebe the horse as a goat. We had another part of the session dealing with voting and making choices. The demo was with ice cream, allowing each of us to vote on a set of choices. Next we went to a template where we could add our own choices, and people had fun, including me. All in all, I see Mural as an interesting tool that different groups could use in a variety of ways to collaborate: with some sort of Zoom/audio call plus a shared virtual whiteboard, there's a lot here. I actually think this could be a neat way of posing questions, taking votes or polls, and sharing information in a group that can't get together in person. The post The Silliness of a Group appeared first on SQLServerCentral.


The Future of PASS from Blog Posts - SQLServerCentral

Anonymous
21 Nov 2020
3 min read
2020 has been a tough year for PASS. Its primary fundraiser – the PASS Summit – was converted to a virtual event that attracted fewer attendees and far less revenue than the in-person version. Going into the event they were projecting a budget shortfall of $1.5 million for the fiscal year ending in June of 2021, and that's after some cost cutting. My guess is that the net revenue from the Summit will be less than projected in the revised budget, so the shortfall will increase, only partially offset by $1m in reserves. I'm writing all of that based on information on the PASS site and one non-NDA conversation with some Board members during the 2020 PASS Summit. It's not a happy picture. If things aren't that dire, I'd be thrilled.

I'll pause here to say this – it doesn't matter how we got here. The Board has to work the problem they have. When the Board meets in November or December with a final accounting from the Summit, they will have to adjust the budget again and start talking about a 2021 budget. Big questions:

- How much is the shortfall for 2020, and can we reduce the spend rate enough to make up the difference and have money in the bank to carry through to a prospective 2021 Summit?
- If we have to reduce staff, which ones? Can we keep the key people who would drive the next in-person event? Can it be a furlough, or is it worse? How much notice can we give them?
- Does PASS have the option to exit from any contracts around the in-person 2021 Summit right now without penalty, or will that be conditional on restrictions in place due to Covid?
- Will event insurance claims cover any of the revenue gap in 2020?
- Even if a vaccine is being distributed, does PASS bet it all on an in-person event in 2021? What's the minimum attendee number needed to generate net revenue equivalent to the 2020 Virtual Summit, and is that number possible?
- Where could PASS find bridge funding? Government grants, a credit line, advances on sponsor fees, a cash infusion from Microsoft, selling seats on the Board to a very large company or two, selling off intellectual property (the mailing list, SQLSaturday & groups, maybe the store of recorded content). Note that I'm not saying I like any of those options, and there may be others, but the question is one that needs to be asked.
- What can be done to start marketing the 2021 Summit now? Can we make the decision to go virtual again right now and move on that?
- What can be done to increase the perceived value of PASS Pro and/or the subscription rate? Should work on that project continue?
- Is bankruptcy an option that needs to be explored? How much will it cost to retain counsel to get us through that?

I'm hoping we'll get clear and candid messaging from the Board before the end of December on the financial state and go-forward plans. As much as I'd like to see public discussion before that's decided, I don't think there is time – that's why I'm writing this. If you've got an idea that addresses the core problems, now is the time to share it. The post The Future of PASS appeared first on SQLServerCentral.


Jonathan Katz: PostgreSQL Monitoring for App Developers: Alerts & Troubleshooting from Planet PostgreSQL

Matthew Emerick
12 Oct 2020
3 min read
We've seen an example of how to set up PostgreSQL monitoring in Kubernetes. We've looked at two sets of statistics to keep track of in your PostgreSQL cluster: your vitals (CPU/memory/disk/network) and your DBA fundamentals. While starting at these charts should help you to anticipate, diagnose, and respond to issues with your Postgres cluster, the odds are that you are not staring at your monitor 24 hours a day. This is where alerts come in: a properly set up alerting system will let you know if you are on the verge of a major issue so you can head it off at the pass (and alerts should also let you know that there is a major issue). Dealing with operational production issues was a departure from my application developer roots, but I looked at it as an opportunity to learn a new set of troubleshooting skills. It also offered an opportunity to improve communication skills: I would often convey to the team and customers what transpired during a downtime or performance degradation situation (VSSE: be transparent!). Some of what I observed I used to  help us to improve to application, while other parts helped me to better understand how PostgreSQL works. But I digress: let's drill into alerts on your Postgres database. Note that just because an alert or alarm is going off, it does not mean you need to immediately react: for example, a transient network degradation issue may cause a replica to lag further behind a primary for a bit too long but will clear up when the degradation passes. That said, you typically want to investigate the alert to understand what is causing it. Additionally, it's important to understand what actions you want to take to solve the problem. For example, a common mistake during an "out-of-disk" error is to delete the PostgreSQL WAL logs with a rm command; doing so can lead to a very bad day (and is also an advertisement for ensuring you have backups). 
As mentioned in the post on setting up PostgreSQL monitoring in Kubernetes, the Postgres Operator uses pgMonitor for metric collection and visualization via open source projects like Prometheus and Grafana. pgMonitor uses open source Alertmanager for configuring and sending alerts, and is what the PostgreSQL Operator uses. Using the above, let's dive into some of the items that you should be alerting on, and I will describe how my experience as an app developer translated into troubleshooting strategies.
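To make the Prometheus/Alertmanager pipeline mentioned above concrete, here is a hedged sketch of what a single alerting rule looks like in Prometheus' rule-file format. The metric name and thresholds are illustrative placeholders, not pgMonitor's actual shipped configuration – check the exporter you deploy for the real metric names:

```yaml
groups:
  - name: postgres-disk
    rules:
      - alert: PGDiskNearlyFull
        # Fires when less than 15% of the data volume remains, sustained for 5 minutes.
        # disk_available_bytes / disk_total_bytes are placeholder metric names.
        expr: (disk_available_bytes / disk_total_bytes) < 0.15
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL data volume under 15% free space"
```

The `for:` clause is what keeps transient blips (like the momentary replication lag described above) from paging anyone: the condition must hold for the whole window before Alertmanager is notified.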

Redgate SQL Data Masker Refreshing Schema from Blog Posts - SQLServerCentral

Anonymous
04 Nov 2020
2 min read
This is a quick blog to help me remember what is going on with the Data Masker product. This is for the SQL Server version, but I believe the Oracle one is very similar. I added a new column to a table, and I had a masking plan already built. How do I get my masking plan to show the new column? Here is my masking plan: I added a new column to the DCCheck table, which is under rule 01-0026. If I open that mask and add a new column, I get this, but I can't expand the dropdown. All the columns in this table are masked, and Data Masker doesn't know about the new one. I need an updated schema, as the rules do not update in real time. To get this to work, I need to return to the masking plan and double-click the controller at the top. This is the schema manager for my set of rules. Note: if I mask different schemas, I need different controllers. Once this opens, I can see my connection to a database. In my case, I'm building this in dev areas, so it's pointed to the QA environment. If I click the "Tools" tab at the top, I see lots of options, one of which is to refresh. Once I pick that one, I have a bunch more options, which gets confusing, but I can click "refresh all tables" at the top, leaving everything else alone. Once that's done, I get a note. Once I get this, I can return to my rule, and when I add a new column, I see it listed. This isn't the smoothest flow, but Data Masker isn't something that is likely to be in constant use. For many of us, adding new schema items is relatively rare, so we can adjust our plans as needed. The one good thing is that I can easily find where I need to add a column, as opposed to digging through a number of .SQL scripts. The post Redgate SQL Data Masker Refreshing Schema appeared first on SQLServerCentral.


Azure DevOps–Using Variable Groups from Blog Posts - SQLServerCentral

Anonymous
02 Nov 2020
2 min read
I was in a webinar recently and saw a note about variable groups. That looked interesting, as I've started to find that I may have lots of variables in some pipelines, and I thought this would keep me organized. However, these are better than that. When I go to the variables screen for a pipeline, I see my variables, but on the left side, I see "Variable groups". If I click this, I see some info. The top link takes me to a doc page, where I see this sentence: "Use a variable group to store values that you want to control and make available across multiple pipelines." Now that is interesting. I have been thinking about different pipelines, so having variables that work across them is good.

To create a variable group, I need to go to the Library, which is another menu item under the Pipelines area. I get a list of groups, of which I have none right now. When I click the blue button, I get a form with the group properties, and then a variables section below. I add a couple of variables and add some group info. In this case, I want some secret values that are useful across different pipelines. You do need to click Save at the end of this.

In my pipeline, I see I have some variables. On the left, again, is a Variable groups item. When I click that, I don't see any, but I haven't linked any. Here I need to link my group. When I click this, I get a blade on the right. I can then see my group(s), and I can set a scope. I do need to click the group, and then I can click Link at the bottom. Now I see the variables available in my pipeline. That's pretty cool, especially as I am starting to see separate pipelines for different downstream environments becoming more popular. The post Azure DevOps–Using Variable Groups appeared first on SQLServerCentral.
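For YAML-defined pipelines, the same linking happens in the pipeline file rather than the UI. A minimal sketch – the group name "my-shared-secrets" and the variable names are hypothetical placeholders for whatever you created under Pipelines > Library:

```yaml
# azure-pipelines.yml (sketch)
trigger:
  - main

variables:
  - group: my-shared-secrets   # pulls in every variable defined in the linked group
  - name: environment          # ordinary inline variables still work alongside it
    value: dev

steps:
  # $(targetServer) would come from the variable group above
  - script: echo "Deploying $(environment) build to $(targetServer)"
```

Secret variables in a group are still masked in logs and must be mapped explicitly into scripts as environment variables if a task needs them.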