
Tech News - Databases

233 Articles

T-SQL Tuesday #133–What I Learned From Presenting from Blog Posts - SQLServerCentral

Anonymous
08 Dec 2020
3 min read
This month Lisa Griffin Bohm is the host, and thanks to her for hosting. She was one of the last people I pressured lightly to host, and she came up with a very creative invite. She is asking us to share something technical that we learned that wasn't related to the actual presentation. While I've presented many times on various topics, and seen many more talks, I often go for a specific reason. At work it might be because I need to go, but at events I usually pick a session based on the topic or presenter. Since most do a good job talking about their topic, I had to really think about this one.

A Couple Choices

I've got two that came to mind as I pondered what wasn't related to the topic. One is small, with minor impact on my work. The other had a much larger impact on me.

The first is a story from 2016: while researching a talk on Always Encrypted, I ran into an issue I hadn't expected. This was Microsoft's first big change to encryption options since 2005, and I was excited about it. I knew about the restrictions with data types, collation, etc., but they seemed acceptable to me. As I was building demos and working out how to show certificate movement, I created certificates in various ways, including some in files using the common PFX format. Little did I know that SQL Server can't read these. Instead, you need to convert them to PVK, which isn't well known. Many people use .cer or .pfx files, but for whatever reason, SQL Server doesn't support those.

The second story was while rehearsing for a talk at Build. Once again I was speaking, but delivering only a small piece of a multi-person talk with a few Microsoft employees. Donovan Brown was one of them, and as we worked on timing and transitions and tailored my section to fit, we had time to chat a bit. This was in the middle days of Visual Studio Team Services, which became Azure DevOps a short time later. As I was talking about some of the challenges I'd had in TFS, Donovan showed me a Java app he was maintaining as a side project, which was being built, tested, and released from VSTS. I was surprised, as Microsoft was still mostly focused on their own technology. He showed me some of the "any language, any platform, any framework" philosophy that was being used to make Azure DevOps a complete platform, not a Microsoft one. That surprised me then, and it has continued to as I watch new capabilities and features appear, few of which are tied to Microsoft products. It greatly impacted my career, and continues to do so today as I work with more and more customers that use non-Microsoft technologies.

The post T-SQL Tuesday #133–What I Learned From Presenting appeared first on SQLServerCentral.
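As a point of reference for the PVK restriction mentioned above, loading a certificate and its private key into a database looks roughly like this. This is a minimal sketch, not the author's demo; the certificate name, file paths, and password are hypothetical.

-- Minimal sketch: CREATE CERTIFICATE reads a DER-encoded certificate file
-- and expects the private key in PVK format (not PFX).
CREATE CERTIFICATE AlwaysEncryptedDemoCert          -- hypothetical name
FROM FILE = 'C:\certs\DemoCert.cer'                 -- hypothetical path
WITH PRIVATE KEY
(
    FILE = 'C:\certs\DemoCert.pvk',                 -- private key converted to PVK
    DECRYPTION BY PASSWORD = '<PasswordUsedWhenCreatingThePvk>'
);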


Tracking costliest queries from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
3 min read
As Database Developers or Administrators, we often work on performance optimization of queries and procedures, and it is very important to focus on the right queries to get the major benefits. Recently I was working on a performance tuning project. I started from a query list provided by the client, who was relying on user feedback and a long-running-query extract from SQL Server, but it was not helping much. The database had more than 1K stored procedures and approximately 1K other programmability objects. On top of that, multiple applications were triggering inline queries as well. I got a very interesting request from my client: "Can we get the top 100 queries running most frequently and taking more than a minute?" This made me write my own query to get the list of queries being executed frequently and for a duration greater or less than a particular time.

This query can also play a major role if you are doing multiple things to optimize the database (such as server/database setting changes, indexing, stats, or code changes) and would like to track the duration. You can create a job with this query and dump the output into a table. The job can be scheduled to run at a certain frequency, and later you can plot a trend from the data tracked. This has really helped me a lot in my assignment. I hope you'll also find it useful.

/*
   The following query returns the queries (along with the plan) taking more than
   1 minute and how many times each has been executed since the last SQL Server
   restart. We'll also get the average execution time.
*/
;WITH cte_stag AS
(
    SELECT plan_handle
         , sql_handle
         , execution_count
         , (total_elapsed_time / NULLIF(execution_count, 0)) AS avg_elapsed_time
         , last_execution_time
         , ROW_NUMBER() OVER (PARTITION BY sql_handle, plan_handle
                              ORDER BY execution_count DESC, last_execution_time DESC) AS RowID
    FROM sys.dm_exec_query_stats STA
    WHERE (total_elapsed_time / NULLIF(execution_count, 0)) > 60000 -- This is 60000 MS (1 minute). You can change it as per your wish.
)
-- If you need the TOP few queries, simply add the TOP keyword to the SELECT statement.
SELECT DB_NAME(q.dbid)         AS DatabaseName
     , OBJECT_NAME(q.objectid) AS ObjectName
     , q.text
     , p.query_plan
     , STA.execution_count
     , STA.avg_elapsed_time
     , STA.last_execution_time
FROM cte_stag STA
CROSS APPLY sys.dm_exec_query_plan(STA.plan_handle) AS p
CROSS APPLY sys.dm_exec_sql_text(STA.sql_handle) AS q
WHERE STA.RowID = 1
  AND q.dbid = DB_ID() /* Either select the desired database while running the query,
                          or supply the database name in quotes to the DB_ID() function.
                          Note: inline queries triggered from an application may not have
                          an object name or database name. If you are not getting the
                          desired query in the result, try removing the filter on dbid. */
ORDER BY 5 DESC, 6 DESC

The post Tracking costliest queries appeared first on SQLServerCentral.
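To track the trend over time from a scheduled job, as the post suggests, one option is to dump a snapshot of this data into a table each run. A minimal sketch follows; the capture table name and the simplified version of the query (without the plan XML) are my own, not from the post.

-- Hypothetical capture table for trending query durations over time
CREATE TABLE dbo.CostlyQueryHistory
(
    CaptureTime       DATETIME2(0)  NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName      SYSNAME       NULL,
    ObjectName        SYSNAME       NULL,
    QueryText         NVARCHAR(MAX) NULL,
    ExecutionCount    BIGINT        NULL,
    AvgElapsedTime    BIGINT        NULL,
    LastExecutionTime DATETIME      NULL
);
GO

-- Body of a scheduled Agent job step: capture the costliest statements each run
INSERT INTO dbo.CostlyQueryHistory
    (DatabaseName, ObjectName, QueryText, ExecutionCount, AvgElapsedTime, LastExecutionTime)
SELECT DB_NAME(q.dbid)
     , OBJECT_NAME(q.objectid)
     , q.text
     , STA.execution_count
     , STA.total_elapsed_time / NULLIF(STA.execution_count, 0)
     , STA.last_execution_time
FROM sys.dm_exec_query_stats AS STA
CROSS APPLY sys.dm_exec_sql_text(STA.sql_handle) AS q
WHERE STA.total_elapsed_time / NULLIF(STA.execution_count, 0) > 60000;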


Daily Coping 8 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
08 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag. Today's tip is to contact someone you can't be with to see how they are doing.

One of the things I did early on in the pandemic was reach out every day or two to a few random people in my contact list. The pace of that slowed down across the summer, but I decided to have a few more in-depth conversations, rather than just a "how are you" query. I opened up a Facebook Messenger conversation with a friend recently, reaching out to see what they were up to and how life was going. Across a few days, we exchanged numerous messages, touching base and having a conversation around the rest of our busy lives.

On one hand, I enjoyed the chance to reach out to someone; it was good to catch up and see how life was getting along. On the other, this brought some sadness, as I had planned on seeing this person across the summer, which didn't happen. With the prospect of months more of living like this, I'm disheartened that I won't see this person for a while. Not a great coping day for me.

The post Daily Coping 8 Dec 2020 appeared first on SQLServerCentral.


Virtual Log Files from Blog Posts - SQLServerCentral

Anonymous
24 Nov 2020
4 min read
Today's post is a guest article from a friend of Dallas DBAs, writer, and fantastic DBA Jules Behrens.

One common performance issue that is not well known, but should still be on your radar as a DBA, is a high number of VLFs. Virtual Log Files are the units SQL Server uses to do the actual work inside a SQL log file (MyDatabase_log.LDF). It allocates new VLFs every time the log file grows. Perhaps you've already spotted the problem – if the log file is set to grow by a tiny increment and the file ever grows very large, you may end up with thousands of tiny little VLFs, and this can slow down your performance at the database level.

Think of it like a room (the log file) filled with boxes (the VLFs). If you just have a few boxes, it is more efficient to figure out where something (a piece of data in the log file) is than if you have thousands of tiny boxes. (Analogy courtesy of @SQLDork)

It is especially evident there is an issue with VLFs when SQL Server takes a long time to recover from a restart. Other symptoms may be slowness with autogrowth, log shipping, replication, and general transactional slowness – anything that touches the log file, in other words.

The best solution is prevention – set your log file to be big enough to handle its transaction load to begin with, give it a sensible growth rate in proportion to its size, and you'll never see this come up. But sometimes we inherit issues where best practices were not followed, and a high number of VLFs is certainly something to check when doing a health assessment on an unfamiliar environment.

The built-in DMV sys.dm_db_log_info is specifically for finding information about the log file, and the command DBCC LOGINFO (deprecated) will return a lot of useful information about VLFs as well. There is an excellent script for pulling the count of VLFs that uses DBCC LOGINFO, from Kev Riley on Microsoft TechNet: https://gallery.technet.microsoft.com/scriptcenter/SQL-Script-to-list-VLF-e6315249 There is also a great script by Steve Rezhener on SQLSolutionsGroup.com that utilizes the view: https://sqlsolutionsgroup.com/capture-sql-server-vlf-information-using-a-dmv/ Either one of these will tell you what you ultimately need to know – whether your VLFs are an issue.

How many VLFs are too many? There isn't an industry standard, but as a starting point: a tiny log file with 500 VLFs is high, while a 5GB log file with 200 VLFs is perfectly acceptable. You'll likely know a VLF problem when you find it; you'll run a count on the VLFs and it will return something atrocious like 20,000. (ed – someone at Microsoft support told me about one with 1,000,000 VLFs)

If the database is in the Simple recovery model and doesn't see much traffic, this is easy enough to fix. Manually shrink the log file as small as it will go, verify the autogrow setting is appropriate, and grow it back to its normal size. If the database is in the Full recovery model and is in high use, it's a little more complex. Follow these steps (you may have to do it more than once):

1. Take a transaction log backup.
2. Issue a CHECKPOINT manually.
3. Check the empty space in the transaction log to make sure you have room to shrink it.
4. Shrink the log file as small as it will go.
5. Grow the file back to its normal size.
6. Lather, rinse, repeat as needed.

Now check your VLF counts again, and make sure you are down to a nice low number. Done!

Thanks for reading!

The post Virtual Log Files appeared first on DallasDBAs.com.
The post Virtual Log Files appeared first on SQLServerCentral.
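For a quick check of where you stand, here is a minimal sketch using the sys.dm_db_log_info DMV mentioned above (SQL Server 2016 SP2 and later). It is not one of the linked scripts, just a starting point.

-- Count VLFs and total log size per online database
SELECT d.name              AS database_name,
       COUNT(*)            AS vlf_count,
       SUM(li.vlf_size_mb) AS log_size_mb
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
WHERE d.state_desc = 'ONLINE'
GROUP BY d.name
ORDER BY vlf_count DESC;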


go from Blog Posts - SQLServerCentral

Anonymous
30 Oct 2020
2 min read
Starter Template

I saved this as a snippet for VS Code to get up and running quickly with something better than the defaults for handling func main isolation. I've been working on modifying this a bit, as I don't really like using args, but I'm trying not to overcomplicate things as a new gopher. I tend to prefer better flag parsing over raw args, but it's still a better pattern to get functions isolated from main so they are easy to test. The gist I've taken from this and from discussions in the community is to ensure that program termination is dedicated to main, instead of handling it in your functions. This isolation of logic from main also makes it easier to set up your tests, since func main() isn't testable.

package main

// package template from:

import (
	"errors"
	"fmt"
	"io"
	"os"
)

const (
	// exitFail is the exit code if the program
	// fails.
	exitFail = 1
)

func main() {
	if err := run(os.Args, os.Stdout); err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err)
		os.Exit(exitFail)
	}
}

func run(args []string, stdout io.Writer) error {
	if len(args) == 0 {
		return errors.New("no arguments")
	}

	for _, value := range args[1:] {
		fmt.Fprintf(stdout, "Running %s", value)
	}

	return nil
}

Puzzles – FizzBuzz

I honestly had never done any algorithm or interview puzzles beyond sql-server, so I was really happy to knock this out relatively easily. At least I pass the basic Joel test.

#development #golang

The post go appeared first on SQLServerCentral.


Differences between using a Load Balanced Service and an Ingress in Kubernetes from Blog Posts - SQLServerCentral

Anonymous
23 Nov 2020
5 min read
What is the difference between using a load balanced service and an ingress to access applications in Kubernetes? Basically, they achieve the same thing – being able to access an application that's running in Kubernetes from outside of the cluster – but there are differences!

The key difference between the two is that an ingress operates at networking layer 7 (the application layer), so it routes connections based on the HTTP host header or URL path. Load balanced services operate at layer 4 (the transport layer), so they can load balance arbitrary TCP/UDP/SCTP services.

Ok, that statement doesn't really clear things up (for me anyway). I'm a practical person by nature…so let's run through examples of both (running everything in Kubernetes for Docker Desktop). What we're going to do is spin up two nginx pages that will serve as our applications, first using load balanced services to access them, followed by an ingress.

So let's create two nginx deployments from a custom image (available on the GHCR):

kubectl create deployment nginx-page1 --image=ghcr.io/dbafromthecold/nginx:page1
kubectl create deployment nginx-page2 --image=ghcr.io/dbafromthecold/nginx:page2

And expose those deployments with a load balanced service:

kubectl expose deployment nginx-page1 --type=LoadBalancer --port=8000 --target-port=80
kubectl expose deployment nginx-page2 --type=LoadBalancer --port=9000 --target-port=80

Confirm that the deployments and services have come up successfully:

kubectl get all

Ok, now let's check that the nginx pages are working. As we've used a load balanced service in k8s in Docker Desktop, they'll be available as localhost:PORT:

curl localhost:8000
curl localhost:9000

Great! So we're using the external IP address (localhost in this case) and a port number to connect to our applications. Now let's have a look at using an ingress. First, let's get rid of those load balanced services:

kubectl delete service nginx-page1 nginx-page2

And create two new cluster IP services:

kubectl expose deployment nginx-page1 --type=ClusterIP --port=8000 --target-port=80
kubectl expose deployment nginx-page2 --type=ClusterIP --port=9000 --target-port=80

So now we have our pods running and two cluster IP services, which aren't accessible from outside of the cluster. The services have no external IP, so what we need to do is deploy an ingress controller. An ingress controller will provide us with one external IP address that we can map to a DNS entry. Once the controller is up and running, we then use an ingress resource to define routing rules that map external requests to different services within the cluster. Kubernetes currently supports GCE and nginx controllers; we're going to use an nginx ingress controller. To spin up the controller run:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.40.2/deploy/static/provider/cloud/deploy.yaml

That creates a number of resources in its own namespace. To confirm they're all up and running:

kubectl get all -n ingress-nginx

Note the external IP of "localhost" for the ingress-nginx-controller service. Ok, now we can create an ingress to direct traffic to our applications. Here's an example ingress.yaml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-testwebsite
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.testwebaddress.com
    http:
      paths:
      - path: /pageone
        pathType: Prefix
        backend:
          service:
            name: nginx-page1
            port:
              number: 8000
      - path: /pagetwo
        pathType: Prefix
        backend:
          service:
            name: nginx-page2
            port:
              number: 9000

Watch out here. In Kubernetes v1.19 ingress went GA, so the apiVersion changed. The yaml above won't work in any version prior to v1.19. Anyway, the main points in this yaml are:

annotations:
  kubernetes.io/ingress.class: "nginx"

which makes this ingress resource use our nginx ingress controller;

rules:
- host: www.testwebaddress.com

which sets the URL we'll be using to access our applications to http://www.testwebaddress.com; and

- path: /pageone
  pathType: Prefix
  backend:
    service:
      name: nginx-page1
      port:
        number: 8000
- path: /pagetwo
  pathType: Prefix
  backend:
    service:
      name: nginx-page2
      port:
        number: 9000

which routes our requests to the backend cluster IP services depending on the path (e.g. http://www.testwebaddress.com/pageone will be directed to the nginx-page1 service).

You can create the ingress.yaml file manually and then deploy it to Kubernetes, or just run:

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/a6805ca732eac278e902bbcf208aef8a/raw/e7e64375c3b1b4d01744c7d8d28c13128c09689e/testnginxingress.yaml

Confirm that the ingress is up and running (it'll take a minute to get an address):

kubectl get ingress

N.B. – Ignore the warning if you get one; we're using the correct API version.

Finally, we also need to add an entry for the web address into our hosts file (simulating a DNS entry):

127.0.0.1 www.testwebaddress.com

And now we can browse to the web pages to see the ingress in action!

And that's the difference between using load balanced services or an ingress to connect to applications running in a Kubernetes cluster. The ingress allows us to use only the one external IP address and then route traffic to different backend services, whereas with load balanced services we would need to use different IP addresses (and ports, if configured that way) for each application.

Thanks for reading!

The post Differences between using a Load Balanced Service and an Ingress in Kubernetes appeared first on SQLServerCentral.

Using Write-Debug from Blog Posts - SQLServerCentral

Anonymous
28 Oct 2020
2 min read
I wrote a post about PoSh output recently, noting that in general we ought to use Write-Output or Write-Verbose for messaging. In there, I mentioned Write-Debug as well, as a way of allowing the user to control debug information. Since I often find myself fumbling a bit with debugging scripts, I decided to give this a try and see how it works.

First, let's build a simple script. In this case, I'll write a script that takes two parameters and determines which one is larger.

$i = $args[0]
$j = $args[1]

Write-Debug("First Param:$i")
Write-Debug("SecondParam:$j")

if ($i -eq $null) {
    $i = 1
    Write-Debug("Setting first as a default to 1")
}
if ($j -eq $null) {
    $j = 1
    Write-Debug("Setting second as a default to 1")
}

if ($i -gt $j) {
    Write-Output("The first parameter is larger")
}
elseif ($i -eq $j) {
    Write-Output("The parameters are equal.")
}
else {
    Write-Output("The second parameter is larger")
}

If I run this, I get what I expect. Now, what if I'm unsure of what's happening? For example, I forget the second parameter. How does my program know the first parameter is larger? I have some debug information in there, but it doesn't appear. However, if I change the value of $DebugPreference, I see something.

The variable $DebugPreference controls how Write-Debug messages are processed. By default, this is set to SilentlyContinue. However, if I change it to Continue, all the messages appear. If I want, I can also set it to Stop or Inquire, allowing me to control the program differently. You can read more about preference variables here.

This is a handy thing to use. I've often had a variable I set in programs, sometimes as a parameter, that allows me to show debug messages, but I then need a series of IF statements inside the code to check it and display debug information. Now, I can just include Write-Debug calls in my code, and if the preference isn't set, I don't see them. I've seen this used in custom cmdlets from vendors, including Redgate, and it is nice to be able to access more information when something isn't working, and have it suppressed by default.

The post Using Write-Debug appeared first on SQLServerCentral.


Azure SQL Database administration Tips and Hints Exam (DP-300) from Blog Posts - SQLServerCentral

Anonymous
31 Oct 2020
1 min read
Finally, I got my Azure Database Administrator Associate certification (Exam DP-300) after two failed attempts. During the journey of study I watched many courses, videos, and articles, and today's post is about spreading the knowledge I have and what I learned along the way, and I do two things during … Continue reading Azure SQL Database administration Tips and Hints Exam (DP-300)

The post Azure SQL Database administration Tips and Hints Exam (DP-300) appeared first on SQLServerCentral.


shell from Blog Posts - SQLServerCentral

Anonymous
30 Oct 2020
1 min read
Installing go-task

This tool is great for cross-platform shell scripting, as it runs all the commands in the Taskfile.yml using a built-in Go shell library that supports bash syntax (and others). Quickly get up and running using the directions here: Install Task

# For default installation to ./bin with debug logging
sh -c "$(curl -ssL https://taskfile.dev/install.sh)" -- -d

# For installation to /usr/local/bin for user-wide access with debug logging
# May require sudo sh
sh -c "$(curl -ssL https://taskfile.dev/install.sh)" -- -d -b /usr/local/bin

#development #shell

The post shell appeared first on SQLServerCentral.


External tables vs T-SQL views on files in a data lake from Blog Posts - SQLServerCentral

Anonymous
03 Nov 2020
4 min read
A question that I have been hearing recently from customers using Azure Synapse Analytics (the public preview version) is: what is the difference between using an external table versus a T-SQL view on a file in a data lake? Note that a T-SQL view and an external table pointing to a file in a data lake can be created in both a SQL Provisioned pool and a SQL On-demand pool. Here are the differences that I have found:

- Overall summary: views are generally faster and have more features such as OPENROWSET.
- Virtual functions (filepath and filename) are not supported with external tables, which means users cannot do partition elimination based on FILEPATH or complex wildcard expressions via OPENROWSET (which can be done with views).
- External tables can be shared with other computes, since their metadata can be mapped to and from Spark and other compute experiences, while views are SQL queries and thus can only be used by a SQL On-demand or SQL Provisioned pool.
- External tables can use indexes to improve performance, while views would require indexed views for that.
- SQL On-demand automatically creates statistics both for an external table and for views using OPENROWSET. You can also explicitly create/update statistics on files with OPENROWSET. Note that automatic creation of statistics is turned on for Parquet files. For CSV files, you need to create statistics manually until automatic creation of CSV file statistics is supported.
- Views give you more flexibility in the data layout (external tables expect the OSS Hive partitioning layout, for example) and allow more query expressions to be added.
- External tables require an explicitly defined schema, while views can use OPENROWSET to provide automatic schema inference, allowing for more flexibility (but note that an explicitly defined schema can provide faster performance).
- If you reference the same external table in your query twice, the query optimizer will know that you are referencing the same object twice, while two of the same OPENROWSETs will not be recognized as the same object. For this reason, in such cases better execution plans could be generated when using external tables instead of views using OPENROWSET.
- Row-level security (Polybase external tables for Azure Synapse only) and Dynamic Data Masking will work on external tables. Row-level security is not supported with views using OPENROWSET.
- You can use both external tables and views to write data to the data lake via CETAS (this is the only way either option can write data to the data lake).
- If using SQL On-demand, make sure to read Best practices for SQL on-demand (preview) in Azure Synapse Analytics.

I often get asked what the difference in performance is when querying through an external table or view against a file in ADLS Gen2 vs. querying against a highly compressed table in a SQL Provisioned pool (i.e. a managed table). It's hard to quantify without understanding more about each customer's scenario, but you will roughly see a 5X performance difference between queries over external tables and views vs. managed tables (obviously, depending on the query, that will vary – it could be more than 5X in some scenarios). A few things contribute to that: in-memory caching, SSD-based caches, result-set caching, and the ability to design and align data and tables when they are stored as managed tables. You can also create materialized views for managed tables, which typically bring lots of performance improvements as well.

If you are querying Parquet data, that is a columnstore file format with compression, so it gives you similar data/column elimination to what a managed SQL clustered columnstore index (CCI) would give; if you are querying non-Parquet files, you do not get this functionality. Note that for managed tables, on top of performance, you also get a granular security model, workload management capabilities, and so on (see Data Lakehouse & Synapse).

The post External tables vs T-SQL views on files in a data lake first appeared on James Serra's Blog.

The post External tables vs T-SQL views on files in a data lake appeared first on SQLServerCentral.
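To make the comparison concrete, here is a minimal sketch of the two options in a Synapse SQL pool. The storage path, data source, file format, and column list are made up for illustration, not taken from the post.

-- Option 1: a view over files using OPENROWSET (schema inferred from Parquet)
CREATE VIEW dbo.vwSales
AS
SELECT *
FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/files/sales/*.parquet',
        FORMAT = 'PARQUET'
     ) AS rows;
GO

-- Option 2: an external table with an explicitly defined schema,
-- over an existing external data source and file format
CREATE EXTERNAL TABLE dbo.extSales
(
    SaleId   INT,
    SaleDate DATE,
    Amount   DECIMAL(18, 2)
)
WITH (
    LOCATION    = 'sales/',
    DATA_SOURCE = MyDataLake,      -- hypothetical EXTERNAL DATA SOURCE
    FILE_FORMAT = ParquetFormat    -- hypothetical EXTERNAL FILE FORMAT
);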

A Note to the PASS Board of Directors from Blog Posts - SQLServerCentral

Anonymous
06 Dec 2020
2 min read
I just read with dismay that Mindy Curnutt has resigned. That's a big loss at a time when the future of PASS is in doubt and we need all hands engaged. The reasons she gives for leaving, with regard to secrecy and participation, are concerning and troublesome, yet not really surprising. The cult of secrecy has existed at PASS for a long time, as has the tendency of the Executive Committee to be a closed circle that acts as if it is superior to the Board, when in fact the Board of Directors has the ultimate say on just about everything. You as a Board can force issues into the open or even disband the Executive Committee, but to do that you'll have to take ownership and stop thinking of the appointed officers as all powerful.

The warning about morally wrong decisions is far more concerning. Those of us out here in the membership don't know what's going on. PASS hasn't written anything in clear and candid language about the state of PASS and the options being considered, or asked what we think about those options. Is there a reason not to have that conversation? Are you sure that if you can find a way for PASS to survive, it will be one we can support and admire?

Leading is about more than being in the room and making decisions. Are you being a good leader, a good steward? From the outside it sure doesn't seem that way.

The post A Note to the PASS Board of Directors appeared first on SQLServerCentral.


Deploy SSRS Projects with Two New PowerShell Commands from Blog Posts - SQLServerCentral

Anonymous
11 Nov 2020
3 min read
I built two new PowerShell commands to deploy SSRS projects, and they have finally been merged into the ReportingServicesTools module. The commands are Get-RsDeploymentConfig and Publish-RsProject. While the Write-RsFolderContent command did already exist, and is very useful, it does not support deploying the objects in your SSRS project to multiple different folders on your report server. These two new commands can handle deployment to multiple folders.

The concept is fairly simple: first you run the Get-RsDeploymentConfig command to pull in all the deployment-target details from the SSRS project file. In SSRS projects you can have multiple deployment configurations, so you can specify which configuration you want to use by supplying its name for the -ConfigurationToUse parameter. This will give you back a PSObject with all the info it collected. After that, you need to add the URL of the report portal manually (unfortunately, these are not included in the SSRS project config files). You can put all of that together and see the results like this:

$RSConfig = Get-RsDeploymentConfig -RsProjectFile 'C:\source\repos\Financial Reports\SSRS_FR\SSRS_FR.rptproj' -ConfigurationToUse Dev01

$RSConfig | Add-Member -PassThru -MemberType NoteProperty -Name ReportPortal -Value 'http://localhost/PBIRSportal/'

$RSConfig

Once that looks good to you, all you have to do is pipe that object to the Publish-RsProject command, and your deployment should start.

$RSConfig | Publish-RsProject

Some quick notes:

- Obviously, the account running these commands will need a copy of the SSRS project it can point to, as well as the necessary credentials to deploy to the SSRS/PBIRS server you point it to.
- For the Get-RsDeploymentConfig command, the SSRS project you are using must be in the VS 2019 project format. Otherwise, the command won't know where to look for the correct info.
- If you don't know the name of the configuration you want to use, just point Get-RsDeploymentConfig at the project file, and it will give you back a list of configuration options to choose from.
- Make sure you run Update-Module ReportingServicesTools to get these new commands.

FYI: I only had two SSRS projects available to test these commands with. They worked great for those two projects, but your SSRS project might include some complexities that I just didn't have in either of the projects I tested with. If you have any trouble making this work, please give me a shout or file a bug on the GitHub project and I will try to help out.

Big thanks to Doug Finke for his code contributions, and Mike Lawell for his help testing, to make these two commands a reality.

The post Deploy SSRS Projects with Two New PowerShell Commands first appeared on SQLvariations: SQL Server, a little PowerShell, maybe some Power BI.

The post Deploy SSRS Projects with Two New PowerShell Commands appeared first on SQLServerCentral.


EightKB is back! from Blog Posts - SQLServerCentral

Anonymous
18 Nov 2020
2 min read
We're back! The first EightKB back in July was a real blast. Five expert speakers delivered mind-melting content to over 1000 attendees! We were honestly blown away by how successful the first event was, and we had so much fun putting it on that we thought we'd do it again.

The next EightKB is going to be on January 27th 2021, and the schedule has just been announced! Once again we have five top-notch speakers delivering the highest quality sessions you can get! Expect a deep dive into the subject matter and demos, demos, demos!

Registration is open and it's completely free! You can sign up for the next EightKB here.

We also run a monthly podcast called Mixed Extents, where experts from the industry join us to talk about different topics related to SQL Server. They're all on YouTube, or you can listen wherever you get your podcasts!

EightKB and Mixed Extents are 100% community driven with no sponsors…so we've launched our own Mixed Extents t-shirts! Any money generated from these t-shirts will be put straight back into the events.

EightKB was set up by Anthony Nocentino, Mark Wilkinson, and myself, as we wanted to put on an event that delved into the internals of SQL Server, and we're having great fun doing just that.

Hope to see you there!

The post EightKB is back! appeared first on SQLServerCentral.

AutoCorrect in Git from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
1 min read
I can't believe autocorrect is available, or that I didn't know it existed. I should have looked; after all, git is smart enough to guess my intentions. I learned this from Kendra Little, who made a quick video on this. She got it from Andy Carter's blog.

Let's say that I type something like git stats on the command line. I'll get a message from git that this isn't a command, but that there is one similar. However, I can have git actually just run that similar command, if I change the configuration with this code:

git config --global help.autocorrect 20

Now if I run the mistyped command, git will delay briefly and then run what it thinks is correct. The delay is controlled by the parameter I passed in. The value is in tenths of a second, so 20 is 2 seconds, 50 is 5 seconds, 2 is 0.2 seconds, etc. If you set this back to 0, autocorrect is off.

A great trick, and one I'd suggest everyone enable.

The post AutoCorrect in Git appeared first on SQLServerCentral.


Goal Progress–November 2020 from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
3 min read
This is my report, which continues on from the Oct report. It's getting near the end of the year, and I wanted to track things a little tighter, and maybe inspire myself to push.

Rating so far: C-

Reading Goals

Here were my goals for the year:

- 3 technical books
- 2 non-technical books – done

Books I've tackled:

- Making Work Visible – Complete
- Pro Power BI Desktop – 70% complete
- White Fragility – Complete
- The Biggest Bluff – Complete
- Team of Teams – 59% complete
- Project to Product – NEW

I've made progress here. I have completed my two non-technical books, and actually exceeded this. My focus moved a bit into the more business side of things, and so I'm on pace to complete 4 of these books. The tech books haven't been as successful, as with my project work I've ended up not being as focused as I'd like on my career, and more focused on tactical things that I need to work on for my job. I think I've learned some things, but not what I wanted. My push for December is to finish Team of Teams, get through Power BI Desktop, and then try to tackle one new tech book from either the list of them I have, or one I bought last winter and didn't read.

Project Goals

Here were my project goals, working with software:

- A Power BI report that updates from a database
- A mobile app reading data from somewhere
- A website that showcases changes and data from a database

Ugh. I'm feeling bad here. I had planned on doing more PowerBI work after the PASS Summit, thinking I'd get some things out of the pre-con. I did, but not practical things, so I need to put time into building up a PowerBI report that I can use. I've waffled between one for the team I coach, which has little data, but would be helpful to the athletes, and a personal one. I've downloaded some data about my life, but I haven't organized it into a database. I keep getting started with exercise data, Spotify data, travel data, etc., but not finishing.

I've also avoided working on a website, and actually having to maintain it in some way. Not a good excuse. I think the mobile app is dead for this year. I don't really have enough time to dig in here, at least, that's my thought. The website, however, should be easier. I wanted to use an example from a book, so I should make some time each week, as a personal project, and actually build this out. That's likely doable by Dec 21.

The post Goal Progress–November 2020 appeared first on SQLServerCentral.