Windows Server 2012 Automation with PowerShell Cookbook

Chapter 1. Understanding PowerShell Scripting

In this chapter we will cover the following recipes:

  • Managing security on PowerShell scripts

  • Creating and using functions

  • Creating and using modules

  • Creating and using PowerShell profiles

  • Passing variables to functions

  • Validating parameters in functions

  • Piping data to functions

  • Recording sessions with transcripts

  • Signing PowerShell scripts

  • Sending e-mail

  • Sorting and filtering

  • Using formatting to export numbers

  • Using formatting to export data views

  • Using jobs

  • Dealing with errors in PowerShell

  • Tuning PowerShell scripts for performance

  • Creating and using Cmdlets

Introduction


This chapter covers the basics of scripting with PowerShell. PowerShell was released in 2006 and is installed by default starting with Windows 7 and Windows Server 2008 R2. PowerShell is also available as a download for Windows XP, Windows Vista, and Windows Server 2003. One of the main differences between PowerShell and VBScript/JScript, the other primary scripting languages for Windows, is that PowerShell provides an interactive runtime. This runtime allows a user to execute commands in real time, and then save those commands as scripts, functions, or modules for later use.

Since its introduction, support for PowerShell has increased dramatically. In addition to managing Windows environments, Microsoft quickly created snap-ins for additional applications such as Exchange Server, the System Center suite, and clustering. Other vendors have also created snap-ins for PowerShell, some of the most popular being VMware and NetApp.

Many of the recipes presented here are the building blocks commonly used in PowerShell, such as signing scripts, using parameters, and sorting/filtering data.

Managing security on PowerShell scripts


Due to the powerful capabilities of PowerShell, maintaining a secure environment is important. Executing scripts from untrustworthy sources could damage data on your system and possibly spread viruses or other malicious code. To deal with this threat, Microsoft has implemented Execution Policies to limit what scripts can do.

Note

The execution policies only limit what can be performed by scripts, modules, and profiles; they do not limit what commands are executed in the interactive runtime.

How to do it...

In this recipe, we will view the system's current execution policy and change it to suit various needs. To do this, carry out the following steps:

  1. To find the system's current execution policy, open PowerShell and execute Get-ExecutionPolicy.

  2. To change the system's execution policy, run the Set-ExecutionPolicy <policy name> command.

  3. To reset the execution policy to the system default, set the policy to Undefined.

  4. To change the execution policy for a specific session, go to Start | Run and enter PowerShell.exe –ExecutionPolicy <policy name>.
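
The steps above, expressed as commands (a minimal sketch; changing the machine-wide policy assumes an elevated console):

# View the current execution policy
Get-ExecutionPolicy

# Allow local scripts; require signatures on downloaded scripts
Set-ExecutionPolicy RemoteSigned

# Reset the policy to the system default
Set-ExecutionPolicy Undefined

# Start a session with a one-time policy
PowerShell.exe -ExecutionPolicy AllSigned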

How it works...

When a script is executed, the first thing PowerShell does is determine the system's execution policy. By default, this is set to Restricted, which blocks all PowerShell scripts from running. If the policy allows signed scripts, PowerShell analyzes the script to confirm it is signed and that the signature is from a trusted publisher. If the policy is set to Unrestricted, all scripts run without these checks.

Setting the execution policy is done with a single command, and the examples here show viewing and setting the policy to various settings. There are six execution policies, as follows:

  • Restricted: No scripts are executed. This is the default setting.

  • AllSigned: This policy allows scripts signed by a trusted publisher to run.

  • RemoteSigned: This policy requires scripts downloaded from the Internet to be signed by a trusted publisher.

  • Unrestricted: This policy allows all scripts to run. It will still prompt for confirmation for files downloaded from the Internet.

  • Bypass: This policy allows all scripts to run and will not prompt.

  • Undefined: This policy resets the policy to the default.

When changing the execution policy, you will be prompted via the command line or a pop-up window to confirm the change. This is another layer of security, but it can be suppressed by using the –Force switch.

There's more...

  • Approving publishers: When running scripts from new publishers, there are two primary methods for approving them. The first method is to open the certificates MMC on the local computer and import the signer's CA into the Trusted Publishers store. This can be done manually or via a group policy. The second method is to execute the script, and when prompted, approve the publisher.

  • Defining execution policy via GPO: The execution policy for individual computers, groups, or the entire enterprise can be controlled centrally using group policies. The policy is stored under Computer Configuration | Policies | Administrative Templates | Windows Components | Windows PowerShell. Note, however, that this policy only applies to Windows 7/Server 2008 or newer operating systems.

  • Permissions to change the execution policy: Changing the execution policy is a system-wide change and, as such, requires administrator-level permissions. With the Windows default access controls in place, this means you must start PowerShell with Run as administrator before setting the policy. If you attempt to change the policy without sufficient permissions, an error is returned.
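
    On PowerShell 3.0 and later, you can also see which scope a policy comes from, which helps when a GPO overrides the local setting (a sketch):

    Get-ExecutionPolicy -List    # shows the policy at each scope; GPO scopes are listed first
    Set-ExecutionPolicy RemoteSigned -Scope CurrentUser    # per-user policy, no elevation needed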

Tip

Best practice is to enforce some level of signature checking in most environments. In development and test environments it may be common to set the policy to Unrestricted to expedite testing, but production environments should always require fully signed scripts.

Creating and using functions


Functions could be considered one of the cornerstones of PowerShell scripting. Functions allow individual commands or groups of commands and variables to be packaged into a single unit. These units are reusable, can be accessed similarly to native commands and Cmdlets, and are used to perform larger and more specific tasks.

Unlike Cmdlets, which are precompiled, functions are interpreted at runtime. This adds a small amount of overhead (the code is interpreted each time it is executed), but the performance impact is usually outweighed by the flexibility that a scripted language provides: functions can be created without any special tools, then debugged and modified as needed.

Let's say we are preparing for Christmas. We have made a large list of things to complete before Christmas morning—wrap the presents, decorate the tree, bake cookies, and so on. Now that we have our list, we need to know how long we have until Christmas morning. That way, we can prioritize the different tasks and know which ones can wait until later.

We could use something simple like a calendar, but being PowerShell experts, we have decided to use PowerShell to tell us how many days there are until Christmas.

How to do it...

Carry out the following steps:

  1. We start by identifying the necessary PowerShell commands to determine the number of days until Christmas.

  2. Next, we combine the commands into a function:

    Function Get-DaysTilChristmas
    {
    <#
       .Synopsis
        This function calculates the number of days until Christmas
       .Description
        This function calculates the number of days until Christmas
       .Example
        Get-DaysTilChristmas
       .Notes
        Ed is really awesome
       .Link
        Http://blog.edgoad.com
     #>
        $Christmas=Get-Date("25 Dec " + (Get-Date).Year.ToString() + " 7:00 AM")
        $Today = (Get-Date)
        $TimeTilChristmas = $Christmas - $Today
        Write-Host $TimeTilChristmas.Days "Days 'til Christmas"
    }
  3. Once the function is created, we either type it or copy/paste it into a PowerShell console.

  4. Finally, we simply call the function by name: Get-DaysTilChristmas.

How it works...

In the first step, we find out how many days remain until Christmas using basic PowerShell commands. We begin by using the Get-Date command to calculate the exact date of Christmas and place it into a variable named $Christmas. Strictly speaking, we are calculating the date and time of 7 a.m. Christmas morning—in this case, the time I plan to begin opening presents.

Next, we execute the Get-Date function without any parameters to return the current date and time into another variable named $Today. We create a third variable named $TimeTilChristmas, and subtract our two dates from each other. Finally, we write out the number of days remaining.

Note

This assumes that the script is executed before December 25th of the current year. If the script is run after December 25th, a negative number of days will be returned.

The second step uses exactly the same commands as the first, except with the commands being included in a function. The Function command bundles the code into a reusable package named Get-DaysTilChristmas.

The function is input into PowerShell manually, via copy/paste or other methods. To use the function once it is created, just call it by its name.

At its simplest, a function is composed of the Function keyword, a function name, and commands encapsulated in curly braces.

Function FunctionName{
    # commands go here
}

The benefit of packaging the code as a function is that now it can be accessed by a name, without having to retype the original commands. Simply running Get-DaysTilChristmas again and again will continue to run the function and return the results.
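
For example (the number returned depends, of course, on the current date):

Get-DaysTilChristmas
146 Days 'til Christmas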

There's more...

  • Function scope: Custom functions are traditionally limited to the currently active user session. If you create a function such as Get-DaysTilChristmas, and then open a new PowerShell window, the function will not be available in the new session, even though it is still available in the original session. Additionally, if you close your original session, the function will be removed from the memory and won't be available until it is re-entered.

  • Variable types: It may be interesting to note that the variables $Christmas and $Today are of a different type than $TimeTilChristmas. The first two are date and time variables, which refer to a specific point in time (year, month, day, hour, minute, second, millisecond, ticks). $TimeTilChristmas, however, is a time span, which refers to a length of time (day, hour, minute, second, millisecond, ticks) relative to a specific time. The type of a variable can be viewed by typing $<variableName>.GetType() as shown in the following screenshot:

  • Returning content: This function in its current form returns the number of days until Christmas, but that is all. Because the function uses date and time variables, it can easily include the number of hours, minutes, and seconds as well. See Get-Date | Get-Member for a list of properties that can be accessed.

  • Naming of functions and commands in PowerShell: Commands in PowerShell are traditionally named as a verb-noun pair and, for ease of use, a similar convention should be followed when naming custom functions. In this example, we named the function Get-DaysTilChristmas: the verb Get tells us that we are getting something, and the noun DaysTilChristmas tells us what object we are working with. There are several common verbs, such as Get, Connect, Find, and Save, that should be used when possible. The noun in the verb-noun pair is often based on the object you are working with or the task you are performing. A full list of PowerShell verbs can be found by executing Get-Verb.

Creating and using modules


Modules are a way of grouping functions for similar types of tasks or components into a single package. These modules can then be loaded, used, and unloaded together as needed. Modules are similar in concept to libraries in the Windows world: they are used to contain and organize tasks while allowing them to be added and removed dynamically.

An example of a module is working with the DNS client. When working with the DNS client, you will have various tasks to perform: get configuration, set configuration, resolve hostname, register client, and so on. Because all of these tasks have to do with a common component, the DNS client, they can be logically grouped together into the DNSClient module. We can then view the commands included in the module using Get-Command –Module DnsClient as shown in the following screenshot:

Here we will show how to create a module for containing common functions that can be loaded as a unit. Because modules typically group several functions together, we will start off by creating multiple functions.

For our recipe, we will be creating a module named Hello. In this example, we have created two simple "Hello World" type functions. The first simply replies "Hello World!", while the second takes a name as a variable and replies "Hello <name>".

How to do it...

Carry out the following steps:

  1. Create several functions that can be logically grouped together.

    Function Get-Hello
    {
        Write-Host "Hello World!"
    }
    Function Get-Hello2
    {
        Param($name)
        Write-Host "Hello $name"
    } 
  2. Using the PowerShell ISE or a text editor, save the functions into a single file named Hello.PSM1.

  3. If the folder for the module doesn't exist yet, create the folder.

    $modulePath = "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\Hello"
    if(!(Test-Path $modulePath))
    {
        New-Item -Path $modulePath -ItemType Directory
    }
  4. Copy Hello.PSM1 to the new module folder.

    $modulePath = "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\Hello"
    Copy-Item -Path Hello.PSM1 -Destination $modulePath
  5. In a PowerShell console, execute Get-Module –ListAvailable to list all the available modules:

    Note

    A large list of modules will likely be returned. The modules in the current user's profile will be listed first, and you may need to scroll up the PowerShell window to see them listed.

  6. Run Import-Module Hello to import our new module.

    Note

    See the recipes Managing Security on PowerShell Scripts and Signing PowerShell Scripts for information about the security requirements for using modules.

  7. Run Get-Command –Module Hello to list the functions included in the module:

  8. Execute the functions in the module as normal:
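
    For example (a sketch of the expected calls and output):

    Get-Hello
    Hello World!
    Get-Hello2 -name "Ed"
    Hello Ed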

How it works...

We start off by identifying several functions or commands to group together as a module. These commands do not necessarily have to relate to each other, but it is often best to organize them as well as possible. The commands are then saved into a single file with a .PSM1 file extension. This file extension indicates to PowerShell that the file is a PowerShell module.

The module is then stored in the user's profile directory. If the folder doesn't already exist, we create a new folder named the same as the module. Once the folder exists, we copy the .PSM1 file into the folder. PowerShell automatically searches this location for new modules to load.

Note

There are two locations PowerShell looks for installed modules: C:\Windows\system32\WindowsPowerShell\v1.0\Modules\ and %userprofile%\Documents\WindowsPowerShell\Modules. The first location is used by the entire system and requires administrative permissions to change; most third-party modules are installed here. The second location is user-specific and does not require elevated rights to install modules.

Once saved, we can load the module to the memory. The command Import-Module loads the contents of the module and makes the commands available for use. We can then view the contents of the module using Get-Command –Module Hello. This returns all publicly available functions in the module.

Note

Modules are treated by PowerShell much like scripts, and they are subject to the same security requirements as other scripts. Because of these restrictions, it is best practice to sign your modules once they have been created.

Finally, once the module is loaded, we can execute the included commands.

There's more...

  • Auto-loading of modules: PowerShell 3.0 automatically imports modules as they are needed. While it is best practice to load and unload modules explicitly, you do not necessarily have to use Import-Module before accessing the functions contained within. As you can see in the following screenshot, listing the currently loaded modules with Get-Module confirms that the new Hello module is not loaded; executing the Get-Hello2 function from the module nevertheless completes successfully, and running Get-Module again shows that the module has been automatically loaded.

  • Module manifest: In addition to the modules themselves, you can also create a module manifest. A module manifest is a file with a .PSD1 extension that describes the contents of the module. Manifests can be useful because they allow for defining the environment in which a module can be used, its dependencies, additional help information, and even which set of commands to make available. The following code is a basic example of creating a manifest for our Hello World module:

    New-ModuleManifest -Path "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\Hello\Hello.PSD1" -Author "Ed Goad" -Description "Hello World examples" -HelpInfoUri "http://blog.edgoad.com" -NestedModules 'Hello.PSM1'  

    Once the manifest is created, we can view the manifest properties using the following code:

    Get-Module Hello -ListAvailable | Format-List -Property * 

Creating and using PowerShell profiles


User profiles are used to set up customized PowerShell sessions. These profiles can be blank, define aliases and custom functions, load modules, or perform any other PowerShell task. When you open a PowerShell session, the contents of the applicable profiles are executed the same as any other PowerShell script.

How to do it...

In this recipe, we will modify the PowerShell console profile for the current user on the current host. By default the profile file does not exist, so we will create the file, and then configure it to create a transcript of our actions. To do this, carry out the following steps:

  1. Open the PowerShell console (not the ISE) and list your current profile locations by executing $PROFILE or $PROFILE | Format-List * -Force:

  2. If the CurrentUserCurrentHost profile file doesn't already exist, create the folder and file structure:

    $filePath = $PROFILE.CurrentUserCurrentHost
    if(!(Test-Path $filePath))
    {
        New-Item -Path $filePath -ItemType File
    }
  3. Edit the CurrentUserCurrentHost profile by opening it in a text editor. Make the necessary changes and save the file.
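
    For example, to have every console session record a transcript, the profile might contain the following (a sketch; the alias line is just an illustration):

    # Contents of Microsoft.PowerShell_profile.ps1
    Start-Transcript
    Set-Alias -Name np -Value notepad.exe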

Tip

It is best practice to sign your profiles after making changes. This ensures that the profile is secure and hasn't been unintentionally changed.

More information about code signing in PowerShell can be found in the Signing PowerShell scripts recipe.

How it works...

When a PowerShell session is started, the profile files are executed before the session is handed over to the user. At that point, any aliases or modules that were loaded will be in effect. Additionally, commands such as Start-Transcript will continue to operate in the background.

We start by opening PowerShell and listing our profile files. By default, the $PROFILE variable only returns the CurrentUserCurrentHost profile. By piping it through Format-List with the –Force switch, we can see all applicable profile files.

Note

In this example we are specifically using the PowerShell console, instead of the PowerShell ISE, because the Start-Transcript command is only supported in the console.

There's more…

There are six user profile files in total, and they are applied to PowerShell sessions one at a time. The more general profiles, such as AllUsersAllHosts, are applied first, ending with the more specific profiles, such as CurrentUserCurrentHost. As the individual profiles are applied, any conflicts are simply overwritten by the more specific profile.

Not all six profiles are used at a time, and by default these profiles are empty. Two of the profiles are specific to the PowerShell console, and two are specific to the PowerShell ISE, so at most four profiles can be active in a given session.

Passing variables to functions


One of the most powerful features of PowerShell functions is the ability to pass data into a function using variables. By passing data into a function, the function becomes more generic and can perform actions on many types of objects.

In this recipe, we will show how to accept variables in functions, and how to report errors if a mandatory variable is not included.

How to do it...

  1. For this recipe, we will be using the following function:

    Function Add-Numbers
    {
        Param(
        [int]$FirstNum = $(Throw "You must supply at least 1 number")
        , [int]$SecondNum = $FirstNum
        )
        Write-Host ($FirstNum + $SecondNum)
    }

How it works...

At the beginning of the function, we use the Param() keyword, which defines the parameters the function will accept. The first parameter, $FirstNum, is defined as mandatory and of type [int], or integer. We did not have to declare the parameter type, and the function would have worked without it, but it is good practice to validate the input of your functions.

The second parameter, $SecondNum, is also typed as [int], but also has a default value defined. This way if no value is passed for the second parameter, it will default to the $FirstNum.

When the function runs, it reads in the parameters from the command line and attempts to place them in the variables. The parameters can be assigned based on their position in the command line (that is, the first number is placed into $FirstNum, and the second number is placed into $SecondNum). Additionally, we can call the function using named parameters with the –FirstNum and –SecondNum switches. The following screenshot gives an example of this:
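
A sketch of both calling styles and their expected behavior:

Add-Numbers 7                          # positional; $SecondNum defaults to $FirstNum, prints 14
Add-Numbers -FirstNum 7 -SecondNum 3   # named parameters, prints 10
Add-Numbers                            # no value supplied, throws "You must supply at least 1 number"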

If a parameter has a Throw statement as its default value but no value is provided, the function will end and return an error. Additionally, if a parameter has a type defined but the value received is incompatible (such as a string being passed to an integer parameter), the function will end and return an error.

There's more...

Functions are not only capable of receiving input, but also of returning output. This ability comes in very handy when you want to capture values in other variables instead of simply writing them to the screen. In our example, we can replace our Write-Host command with a Return command.

#Write-Host ($FirstNum + $SecondNum) 
Return ($FirstNum + $SecondNum)

The output of the function is mostly the same, except now we can assign the output to a variable and use that variable at a later time.
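
For example, with Return in place, the result can be captured for later use (a sketch):

$sum = Add-Numbers 20 22
$sum      # 42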

Note

In addition to returning values from functions, Return also causes the function to exit. The Return command should always be placed at the end of a function, or at a point where processing of the function should stop.

Validating parameters in functions


Whenever a script or program receives data from an unknown source, the general rule is that the data should be validated prior to being used. Validation can take many forms, with simple validations such as confirming the value exists, is of the right type, or fits a predefined format. Validation can also be complex multi-stage events such as ensuring a username exists in a database before prompting for a password.

This section will review several basic validation-testing methods for use in PowerShell.

How to do it...

First, we will create a function without input validation:

  1. Create a basic function with no input validation:

    Function Hello-World
    {
        param($foo)
        "Hello $foo"
    } 
  2. Test the function using different input types.

    Update the function to perform input type validations as discussed in the following steps:

  3. Update the function to include the basic string validation.

    Function Hello-WorldString
    {
        param([string] $foo)
        "Hello $foo"
    }
  4. Test the function using different input types:

  5. Update the function to perform basic integer validation.

    Function Hello-WorldInt
    {
        param([int] $foo)
        "Hello $foo"
    }
  6. Test the function using different input types:
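
    For example (a sketch of the expected results):

    Hello-WorldInt 5          # Hello 5
    Hello-WorldInt 867.5309   # value is cast to [int]: Hello 868
    Hello-WorldInt "Joe"      # error: cannot convert "Joe" to type "System.Int32"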

  7. Update the function to perform basic float validation.

    Function Hello-WorldFloat
    {
        param([float] $foo)
        "Hello $foo"
    }
  8. Test the function using different input types:

  9. Update the function to perform basic array validation.

    Function Hello-WorldStringArray
    {
        param([string[]] $foo)
        "Hello " + $foo[0]
    }
  10. Test the function using different input types:

Update the functions to perform validation of input values:

  1. Create a function to validate the length of a parameter:

    function Hello-WorldLength{
        param([ValidateLength(4,10)] $foo)
        "Hello $foo"
    }
  2. Test the function using different input types:

  3. Create a function to validate a number in a range:

    function Hello-WorldAge{
        param([ValidateRange(13,99)] $age)
        "Hello, you are $age years old"
    }
  4. Test the function using different input types:

  5. Create a function to validate a set of parameters:

    function Hello-WorldSize{
        param([ValidateSet("Skinny", "Normal", "Fat")] $size)
        "Hello, you are $size"
    }
  6. Test the function using different input types:
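
    For example (a sketch of the expected results):

    Hello-WorldSize "Normal"   # Hello, you are Normal
    Hello-WorldSize "Huge"     # error: "Huge" does not belong to the set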

  7. Create a function that validates against a script:

    function Hello-WorldAge2{
        param([ValidateScript({$_ -ge 13 -and $_ -lt 99})] $age)
        "Hello, you are $age years old"
    }
  8. Test the function using the different input types:

  9. Create a function to validate the input as a phone number:

    Function Test-PhoneNumber
    {
        param([ValidatePattern("\d{3}-\d{4}")] $phoneNumber)
        Write-Host "$phoneNumber is a valid number"
    }
  10. Execute the Test-PhoneNumber function using different input types:
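
    For example (a sketch of the expected results):

    Test-PhoneNumber "555-1234"   # 555-1234 is a valid number
    Test-PhoneNumber "abcd"       # error: the argument does not match the pattern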

Use custom validation to test parameters inside our function:

  1. Update the function to use custom validation internal to the script with regular expressions:

    Function Test-PhoneNumberReg
    {
        param($phoneNumber)
        $regString=[regex]"^\d{3}-\d{3}-\d{4}$|^\d{3}-\d{4}$"
        if($phoneNumber -match $regString){ 
            Write-Host "$phoneNumber is a valid number"
        } else {
            Write-Host "$phoneNumber is not a valid number"
        }
    }
  2. Test the function using different input types:
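
    For example (a sketch of the expected results):

    Test-PhoneNumberReg "555-867-5309"   # 555-867-5309 is a valid number
    Test-PhoneNumberReg "5309"           # 5309 is not a valid number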

How it works...

We start off with a simple Hello-World function with no input validation. Three different calls to the function are performed, one with a username (as the function was designed to work), one with a number, and yet another without any parameters. As you can see, all three commands complete successfully without returning errors.

In the first group of functions, in steps 3 to 10, we see a set of examples using type validation to confirm that the parameters are of a specific type. There are four iterations of the Hello-World example, accepting a string, an integer, a float, and an array as input. As you can see from the calls to Hello-WorldString, both text and numbers are treated as strings and return successfully. However, the calls to Hello-WorldInt succeed when a number is passed, but fail when text is passed.

Note

You may notice that the original number 867.5309 passed to the function Hello-WorldInt was rounded when returned. This is because integers are whole numbers—that is, not partial numbers. When the number was cast as an integer, it was rounded to the nearest whole value, which in this case caused it to round up to 868.

In the second set of functions, in steps 11 to 20, we see a set of examples using basic input validation. These functions use the ValidateLength, ValidateRange, ValidateSet, ValidateScript, and ValidatePattern parameter attributes to validate the input. Additionally, these validation attributes can be used in conjunction with the basic type validations to further ensure the input is of the correct type and value.

The last set of examples, in steps 21 to 24, performs validation inside the script instead of relying on the in-built validation attributes. The function named Test-PhoneNumberReg uses a regular expression to validate the input as part of the function body. Instead of relying on validation using types or ValidatePattern, this function simply checks the variable with PowerShell code. By managing the validation as part of the function, we have more flexibility in how we handle and present validation errors, and can return a more user-friendly message to the end user.

There's more...

It is considered a best practice to perform at least basic validation of all inputs. Lack of input validation can cause a function to crash, operate unpredictably, or even damage data in your environment. Unvalidated input has long been a common vector for attackers to compromise systems, and it should be taken seriously.

Piping data to functions


In addition to passing data to functions via parameters, functions can receive data directly from another object or command via the pipe "|". Receiving values from the pipeline helps improve scripts by limiting the use of temporary variables, and makes it easier to pass complex object types.

How to do it...

In this recipe, we will create a simple function that receives input from the command line as well as from the pipeline. To do this, carry out the following steps:

  1. Create a simple function that accepts a parameter:

    Function Square-Num
    {
        Param([float] $FirstNum)
        Write-Host ($FirstNum * $FirstNum)
    }
  2. Use the ValueFromPipeline parameter attribute to enable the function to accept input from the pipeline:

    Function Square-Num
    {
        Param([float]
        [Parameter(ValueFromPipeline = $true)]
        $FirstNum )
        Write-Host ($FirstNum * $FirstNum)
    } 
  3. Test the function using parameters and by passing data from the pipeline:
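
    For example (a sketch of the expected results):

    Square-Num 8    # 64
    9 | Square-Num  # 81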

How it works...

The function in the first step is simple—it accepts a parameter named $FirstNum, squares the value by multiplying the number against itself, and returns the result. In the second step, we updated the parameter declaration with the following code:

    [Parameter(ValueFromPipeline=$true)]

This parameter option allows the function to receive the value for $FirstNum from the command line as well as from the pipeline. PowerShell will first look for the value on the command line, by name or position, and if it isn't listed, it will look for the value from the pipe.

There's more...

PowerShell will attempt to use all arguments provided to a function and will report errors if there are conflicting or unknown arguments. For instance, we can try to provide values from the pipeline and the command line at the same time, as shown in the following screenshot:

As you can see from the example, we attempt to pass both 8 and 7 to the Square-Num function, the first via the pipe and the second via the command line. PowerShell reports an error and then returns an answer of 49, the result of 7 x 7.

Recording sessions with transcripts


When working on various tasks in PowerShell, I find myself doing something that I then want to document or turn into a function, and I have to ask myself: What did I just do? Do I know all of the variables I had previously set? Do I know all the objects I am using? What kind of authentication am I using? And so on.

The PowerShell console and ISE have some level of in-built history, but if you're doing large tasks across multiple server environments, this history quickly becomes too small.

Enter PowerShell transcripts. Transcripts are a great way of recording everything you do in a PowerShell session and saving it in a text file for later review.

How to do it...

Carry out the following steps:

  1. Open the PowerShell console (not the ISE) and begin recording a transcript in the default location by executing Start-Transcript.

  2. Stop the recording by executing Stop-Transcript.

  3. Begin recording a transcript into a different location by calling Start-Transcript with the –Path switch:
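
    For example (a sketch; the target folder must already exist):

    Start-Transcript -Path C:\temp\foo.txt
    # ...session activity is recorded here...
    Stop-Transcript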

How it works...

In the first step, we execute the command Start-Transcript, which automatically creates a transcript under the user's My Documents folder. Each transcript file is named with a unique timestamp so that files don't overwrite or conflict with each other. We can stop the recording by executing Stop-Transcript.

In the third step, we tell PowerShell to save the transcript file to C:\temp\foo.txt. When pointed at an existing file, PowerShell will attempt to append to it. If the file is read-only, the –Force switch will instruct PowerShell to attempt to change the permissions on the file and then append to it.

There's more...

  • Transcript limitations: Session transcripts only work with the PowerShell console, and not the PowerShell ISE. The ISE helps overcome some of this limitation by providing a larger scroll-back area, but if you want to use transcripts, you have to use the console application.

  • Fun with transcripts: Because transcripts capture anything typed or written to the screen, you need to be careful what you run. For instance, running the following commands will create a recursive loop that has to be stopped manually:

See also

  • See the Creating and using PowerShell profiles recipe for information on how to automatically start transcripts for your sessions.

Signing PowerShell scripts


When creating PowerShell scripts, modules, and profiles, it is considered best practice to digitally sign them. Signing scripts performs the following two functions:

  • Ensures the script is from a trusted source

  • Ensures the script hasn't been altered since it was signed

Getting ready

To sign a PowerShell script, a code-signing certificate will be needed. Normally these certificates will be provided by your enterprise Public Key Infrastructure (PKI), and the PKI administrator should be able to help you with the requesting process. Code-signing certificates can also be purchased from third-party Certificate Authorities (CAs), which can be helpful if your scripts are being distributed outside of your corporate environment.

Once received, the code-signing certificate should be added to the Current User | Personal | Certificates store on your computer. Additionally, the Certificate Authority's root certificate should be added to the Trusted Root Certification Authorities store, and the signing certificate to the Trusted Publishers store, on all computers that are going to execute the signed scripts.

How to do it...

Carry out the following steps:

  1. Create and test a PowerShell script.

  2. Sign the script with Set-AuthenticodeSignature.

    $cert = Get-ChildItem Cert:CurrentUser\My\ -CodeSigningCert
    Set-AuthenticodeSignature C:\temp\ServerInfo.ps1 $cert

How it works...

The signing process is fairly simple, but also extremely powerful. The process starts by searching the Current User certificate store for a certificate capable of code signing, which is placed into the $cert variable. Set-AuthenticodeSignature is then called to sign the script with the certificate.

If there is more than one code-signing certificate on your system, you need to select which certificate to use. To achieve this, update the first line to include a Where-Object clause. For example:

$cert = Get-ChildItem Cert:CurrentUser\My\ -CodeSigningCert | Where-Object Subject -eq 'CN=CorpInternal' 

If you open the script in a text editor after it has been signed, you will notice several lines of content appended to the end. These additional lines are the signature that PowerShell will verify before running the script.

Note

Any change to the script (even adding or removing a space) will invalidate the signature. Once the script has been signed, if you need to make changes, you must repeat the signing process.

There's more...

If you don't have a PKI available to obtain a code-signing certificate, or your PKI administrator is hesitant to give you one, you can create a self-signed certificate for testing purposes. To do this, you can use the following PowerShell script, which is based on the script by Vishal Agarwal at http://blogs.technet.com/b/vishalagarwal/archive/2009/08/22/generating-a-certificate-self-signed-using-powershell-and-certenroll-interfaces.aspx:

$name = new-object -com "X509Enrollment.CX500DistinguishedName.1"
$name.Encode("CN=TestCode", 0)

$key = new-object -com "X509Enrollment.CX509PrivateKey.1"
$key.ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
$key.KeySpec = 1
$key.Length = 1024
$key.SecurityDescriptor = "D:PAI(A;;0xd01f01ff;;;SY)(A;;0xd01f01ff;;;BA)(A;;0x80120089;;;NS)"
$key.MachineContext = 1
$key.Create()

$codesigningoid = new-object -com "X509Enrollment.CObjectId.1"
$codesigningoid.InitializeFromValue("1.3.6.1.5.5.7.3.3") # Code Signing OID
$ekuoids = new-object -com "X509Enrollment.CObjectIds.1"
$ekuoids.add($codesigningoid)
$ekuext = new-object -com "X509Enrollment.CX509ExtensionEnhancedKeyUsage.1"
$ekuext.InitializeEncode($ekuoids)

$cert = new-object -com "X509Enrollment.CX509CertificateRequestCertificate.1"
$cert.InitializeFromPrivateKey(2, $key, "")
$cert.Subject = $name
$cert.Issuer = $cert.Subject
$cert.NotBefore = get-date
$cert.NotAfter = $cert.NotBefore.AddDays(90)
$cert.X509Extensions.Add($ekuext)
$cert.Encode()

$enrollment = new-object -com "X509Enrollment.CX509Enrollment.1"
$enrollment.InitializeFromRequest($cert)
$certdata = $enrollment.CreateRequest(0)
$enrollment.InstallResponse(2, $certdata, 0, "")

Executing this script will create the certificate and install it on the local computer as shown in the following screenshot:

Note

The self-signed certificate still needs to be added to your Trusted Root Certification Authorities and Trusted Publishers store for the certificate to be considered valid by client computers.

Sending e-mail


Integration with e-mail is a key capability for automating administration tasks. With e-mail, you can run tasks that automatically let you know when they are complete, e-mail users when information is needed from them, or even send reports to administrators.

This recipe shows different methods of sending e-mail to users.

Getting ready

To send an e-mail using PowerShell, we will need a mail system capable of accepting SMTP mail from your computer. This can be a Microsoft Exchange server, an IIS server, a Linux host, or even a public mail service such as Google Mail. The method we use may change how the e-mail appears to the end recipient and may cause the message to be flagged as spam.

How to do it...

To send e-mail using the traditional .NET method:

  1. Open PowerShell and load the following function:

    function Send-SMTPmail($to, $from, $subject, $smtpServer, $body) 
    {
    	$mailer = new-object Net.Mail.SMTPclient($smtpServer)
    	$msg = new-object Net.Mail.MailMessage($from, $to, $subject, $body)
    	$msg.IsBodyHTML = $true
    	$mailer.send($msg)
    }
  2. To send the mail message, call the following function:

    Send-SMTPmail -to "admin@contoso.com" -from "mailer@contoso.com" `
    -subject "test email" -smtpserver "mail.contoso.com" -body "testing" 
  3. Alternatively, send the e-mail using the included PowerShell cmdlet.

  4. Use the Send-MailMessage command as shown:

    Send-MailMessage -To admin@contoso.com -Subject "test email" `
    -Body "this is a test" -SmtpServer mail.contoso.com `
    -From mailer@contoso.com  

How it works...

The first method shown uses a traditional .NET process to create and send the e-mail. If you have experience programming in one of the .NET languages, this process may be familiar. The function starts by creating a Net.Mail.SMTPclient object that allows it to connect to the mail server. Then a Net.Mail.MailMessage object is created and populated with the e-mail content. Lastly, the e-mail is sent to the server.

The second method uses an in-built PowerShell cmdlet named Send-MailMessage. This method simplifies mailing into a single command while providing flexibility in the mailing options and methods of connecting to mail servers.

There's more...

Most e-mail functions can be performed by the Send-MailMessage command. More information can be found by executing help Send-MailMessage. The following additional command switches allow the command to perform most mail functions needed:

  • Attachments

  • CC/BCC

  • SMTP authentication

  • Delivery notification

  • E-mail priority

  • SSL encryption

Sorting and filtering


One of the great features of PowerShell is its ability to sort and filter objects. This filtering can be used to limit a larger result set and report only the information necessary.

This section will review several methods of filtering and sorting data.

How to do it...

  1. To explore the filtering capabilities of PowerShell, we look at the running processes on our computer. We then use the Where-Object clause to filter the results:

    Get-Process | Where-Object {$_.Name -eq "chrome"}
    Get-Process | Where-Object Name -eq "chrome"
    Get-Process | Where-Object Name -like "*hrom*"
    Get-Process | Where-Object Name -ne "chrome"  
    Get-Process | Where-Object Handles -gt 1000
  2. To view sorting in PowerShell, we again view the processes on our computer and use Sort-Object to change the default sort order.

    Get-Process | Sort-Object Handles
    Get-Process | Sort-Object Handles -Descending
    Get-Process | Sort-Object Handles, ID –Descending
  3. To explore the selection features of PowerShell, we use the Select-String and Select-Object commands:

    Select-String -Path C:\Windows\WindowsUpdate.log -Pattern "Installing updates"
    Get-Process | Select-Object Name -Unique 
  4. To view the grouping capabilities of PowerShell, we use Format-Table with the –GroupBy parameter:

    Get-Process | Format-Table -GroupBy ProcessName  

How it works...

In the first section, we review various methods of filtering information using the Where clause:

  • The first method uses the PowerShell Where-Object clause format. The $_ identifier represents the object being passed through the pipe, so $_.Name refers to the Name property of that object. The –eq is the equals comparison operator, which instructs Where-Object to compare the two values.

  • The second method performs the same task as the first but uses a parameter format that is new in PowerShell 3.0. In this method the Where-Object comparison no longer needs to use $_ to reference the object being passed, and we no longer need the curly braces {}.

    Note

    Even though the second method shown can often be easier to use and easier to understand, it is important to know both methods. The first method is still in use by PowerShell scripters, and there are some situations that work better using this method.

  • The last three methods perform similar comparisons using different operators. The –like operator allows the use of the wildcard character *, allowing for less exact filtering. The –ne operator means not equal and is the exact opposite of the equals operator. And the –gt operator means greater than and compares the attribute against a known value.

The second section uses the Sort-Object command to sort and organize the object attributes. Here, we sort by the Handles attribute in both ascending (the default) and descending order. Additionally, we see that multiple attributes can be sorted at the same time.

The third section uses the Select-String and Select-Object commands to restrict what is returned. The first method searches the WindowsUpdate.log for the string Installing updates and returns the results. The second method takes the output of Get-Process and filters it to return only a unique list of process names.

The fourth section shows how to perform grouping based on an attribute. The Format-Table command includes a parameter named –GroupBy that, instead of returning a single table, returns multiple tables. In this case, a separate table is returned for each unique ProcessName.

Using formatting to export numbers


Numbers in their raw form are useful when you want the most exact calculation, but they can become messy when presented to users. Because of this, PowerShell uses standardized .NET formatting rules to convert and present numbers in different contexts.

In this recipe, we will take a number and use PowerShell to present it in different number formats. This allows us to quickly see the differences between how PowerShell performs number formatting.

How to do it...

Carry out the following steps:

  1. Start with a simple PowerShell script to present numbers using different formats:

    $jenny = 1206867.5309
    Write-Host "Original:`t`t`t" $jenny
    Write-Host "Whole Number:`t`t" ("{0:N0}" -f $jenny)
    Write-Host "3 decimal places:`t" ("{0:N3}" -f $jenny)
    Write-Host "Currency:`t`t`t" ("{0:C2}" -f $jenny)
    Write-Host "Percentage:`t`t`t" ("{0:P2}" -f $jenny)
    Write-Host "Scientific:`t`t`t" ("{0:E2}" -f $jenny)
    Write-Host "Fixed Point:`t`t" ("{0:F5}" -f $jenny)
    Write-Host "Decimal:`t`t`t" ("{0:D8}" -f [int]$jenny)
    Write-Host "HEX:`t`t`t`t" ("{0:X0}" -f [int]$jenny)
  2. Execute the script and review the results:
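
    On an en-US system, the output will look similar to the following (separators, the currency symbol, and spacing vary with culture settings):

    Original:            1206867.5309
    Whole Number:        1,206,868
    3 decimal places:    1,206,867.531
    Currency:            $1,206,867.53
    Percentage:          120,686,753.09 %
    Scientific:          1.21E+006
    Fixed Point:         1206867.53090
    Decimal:             01206868
    HEX:                 126A54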

How it works...

Because PowerShell is based on the .NET Framework, it automatically inherits its number-formatting capabilities. In this script, we create a variable named $jenny and assign a number to it. Then, several number-formatting strings are created (the strings in the curly braces {}) and the formatting is applied to $jenny.

Note

In the code shown, we are using `t (backtick + letter t) to make the output easier to read. This causes a tab to be added to the text, which then aligns the output on the right.

The backtick character on most keyboards is the key to the left of the number 1 that also contains the ~ character. In PowerShell, this character is used as an escape character to express special sequences such as tabs and new lines.

The formatting strings are composed of three distinct elements, each with a unique role. Inside the curly braces, the first zero (before the colon) refers to the first variable being used; since we are only using one variable, this number never changes. The letter after the colon defines the format type: number, percentage, decimal, and so on. And the final number defines how many decimal places to include in the result.

One area of special note is when we convert $jenny to a decimal or hexadecimal number. You may have noticed the [int] cast before the variable. This explicitly casts the variable as an integer prior to applying the formatting, which is necessary because the decimal and hexadecimal format strings only work with integers. Attempting to pass a fractional number like ours to these formats will result in an error.

There's more...

In addition to the built-in formatting strings shown previously, custom formatting strings can also be created and applied:

Write-Host "Static Size:`t`t" ("{0:0000000000.00}" -f $jenny)
Write-Host "Literal String:`t`t" ("{0:000' Hello '000}" -f $jenny)
Write-Host "Phone Number:`t`t" ("{0:# (###) ### - ####}" -f ($jenny*10000))

The first custom string creates a number that is composed of 10 digits, a decimal point, and two digits. If the number is not large enough to fill the formatting, zeros are prepended to it.

The second string creates a number with a literal string in the middle of it.

The third string multiplies the variable by 10,000 (to make it an 11-digit integer) and formats it into a phone number. The number is returned complete with country and area codes.

Using formatting to export data views


One of the great things about PowerShell is that it gives you access to lots of information. However, this plethora of information can become a downside if it is not the exact information, or type of information, you are looking for. In the previous recipes, we saw how to filter and sort data; in this recipe, we will review different methods of formatting data to present the information in a way that is usable to us.

How to do it...

Carry out the following steps:

  1. Use Get-Process to list all running Chrome processes:

    Get-Process chrome 
  2. To list all available attributes for our processes, execute the following code:

    Get-Process chrome | Select-Object * 
  3. To return a select list of attributes, update the following command:

    Get-Process chrome | `
    Select-Object Name, Handles, Threads, `
    NonpagedSystemMemorySize, PagedMemorySize, `
    VirtualMemorySize, WorkingSet, `
    PrivilegedProcessorTime, UserProcessorTime, `
    TotalProcessorTime 

    Note

    Note the use of the backtick character at the end of all but the last line. This tells PowerShell to include the contents of the lines as a single line. This allows us to more easily format the script for readability.

  4. Combine the values of different attributes to provide more usable information:

    Get-Process chrome | `
    Select-Object Name, Handles, Threads, `
    NonpagedSystemMemorySize, PagedMemorySize, `
    VirtualMemorySize, WorkingSet, `
    PrivilegedProcessorTime, UserProcessorTime, `
    TotalProcessorTime, `
    @{Name="Total Memory";Expression=`
    {$_.NonpagedSystemMemorySize + `
    $_.PagedMemorySize + $_.VirtualMemorySize + `
    $_.WorkingSet}} 
  5. Use formatting to return the values in a human-readable format:

    Get-Process chrome | `
    Select-Object Name, Handles, Threads, `
    NonpagedSystemMemorySize, PagedMemorySize, `
    VirtualMemorySize, WorkingSet, `
    PrivilegedProcessorTime, UserProcessorTime, `
    TotalProcessorTime, `
    @{Name="Total Memory (M)";Expression=`
    {"{0:N2}" -f (($_.NonpagedSystemMemorySize + `
    $_.PagedMemorySize + $_.VirtualMemorySize + `
    $_.WorkingSet)/1MB)}}   

How it works...

In the first step, we simply execute Get-Process to return all running processes named chrome. This command returns basic process information such as the number of handles, the amount of memory resources, and the amount of CPU resources used by each process.

In the second step, we do the same as before, but this time telling the command to return all attributes for the processes. Dozens of attributes are returned, including details about the executable, threads, debugging information, and even when the process was started. Most importantly, this returns all available attributes for the process, thus allowing us to determine which attributes we are interested in returning in future queries.

The third step identifies specific attributes to return from the running processes. Any available process attribute can be included here in addition to or instead of the memory related attributes shown here.

The fourth step uses an expression to create a calculated result. In this case, we create a calculated column named Total Memory that adds several of the memory-related attributes together. Most mathematical operations, such as multiplication and subtraction, and most text operations, such as append or replace, can be used as well.

The final step adds numeric formatting to the calculated column to make it more readable to the user. Two types of formatting are performed here:

  • The result is divided by 1 MB (or X / 1,048,576) to present the number in megabytes instead of bytes

  • Formatting is applied to limit the resulting number to two decimal places

Using jobs


Many of the PowerShell scripts we will create will execute in a serial fashion—that is, A starts before B, which starts before C. This method of processing is simple to understand, easy to create, and easy to troubleshoot. However, sometimes there are processes that make serial execution difficult or undesirable. In those situations, we can use jobs as a method to start a task and then move it to the background so that we can begin the next task.

A common situation I run into is needing to get information, or execute a command, on multiple computers. If it is only a handful of systems, standard scripting works fine. However, if there are dozens or hundreds of systems, a single slow system can slow down the entire process.

Additionally, if one of the systems fails to respond, it can break the entire script, leaving me scrambling through the logs to see where it failed and where to pick it back up. Another benefit of using jobs is that each job executes independently of the rest. This way, one job can fail without breaking the rest of the jobs.

How to do it...

In this recipe, we will create a long-running process and compare the timing for serial versus parallel processing. To do this, carry out the following steps:

  1. Create a long-running process in serial:

    # Function that simulates a long-running process
    $foo = 1..5
    Function LongWrite
    {
        Param($a)
        Start-Sleep 10
        $a
    }
    $foo | ForEach{ LongWrite $_ }
  2. Create a long-running process using jobs:

    # Long-running process using jobs
    ForEach ($foo in 1..5)
    {
        # The job runs in a separate runspace and cannot see the caller's
        # variables, so $foo is passed in via -ArgumentList and bound to Param()
        Start-Job -ScriptBlock {
            Param($foo)
            Start-Sleep 10
            $foo
        } -ArgumentList $foo -Name "$foo"
    }
    Wait-Job *
    Receive-Job *
    Remove-Job *

How it works...

In the first step, we create an example long-running process that simply sleeps for 10 seconds and then returns the number passed to it. The first script uses a loop to execute our LongWrite function in a serial fashion, five times. As expected, this script takes just over 50 seconds to complete.

The second step executes the same process five times, but this time using jobs. Instead of calling a function, we use Start-Job, which creates a background job, starts it, and then returns control so that the next job can be started. Once all the jobs have been started, we use Wait-Job * to wait for all running jobs to complete. Receive-Job retrieves the output from the jobs, and Remove-Job removes the jobs from the scheduler.

Because of the setup and teardown process required for creating and managing jobs, the process runs for more than the expected 10 seconds. In a test run, it took approximately 18 seconds total to create the jobs, run the jobs, wait for the jobs to complete, retrieve the output from the jobs, and remove the jobs from the scheduler.
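
To reproduce these timings on your own system, you can wrap each version in Measure-Command (a sketch, assuming the LongWrite function from step 1 is still defined; actual durations will vary):

# Serial version: roughly 50 seconds
Measure-Command { 1..5 | ForEach{ LongWrite $_ } }

# Job-based version: setup/teardown overhead plus one 10-second sleep
Measure-Command {
    ForEach ($foo in 1..5)
    {
        Start-Job -ScriptBlock { Param($foo) Start-Sleep 10; $foo } -ArgumentList $foo
    }
    Wait-Job * | Out-Null
    Receive-Job *
    Remove-Job *
}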

There's more...

  • Scaling up: While moving from 50 seconds to 18 seconds is impressive in itself (decreasing it to 36 percent of the original run-time), larger jobs can give even better results. By extending the command to run 50 times (instead of the original 5), run-times can decrease to 18 percent of the serial method.

  • Working with remote resources: Jobs can be used both locally and remotely. A common need for a server admin is to perform a task across multiple servers. Sometimes, the servers respond quickly, sometimes they are slow to respond, and sometimes they do not respond and the task times out. These slow or unresponsive systems greatly increase the amount of time needed to complete your tasks. Parallel processing allows these slow systems to respond when they are available without impacting the overall performance.

By using jobs, the task can be launched on multiple servers simultaneously. This way, the slower systems won't prevent the other systems from processing. And, as shown in the example, a success or failure report can be returned to the administrator.
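Invoke-Command provides a shortcut for this pattern: its -AsJob switch runs a script block on each remote computer as a background job. The following is a minimal sketch, assuming PowerShell remoting is enabled; the server names and the Get-Service query are placeholders:

    # Launch the task on all servers at once; slow servers don't hold up the rest
    $servers = "Server1", "Server2", "Server3"
    Invoke-Command -ComputerName $servers -AsJob -ScriptBlock { Get-Service W32Time }
    Wait-Job * | Receive-Job    # Collect the output as each server finishes
    Remove-Job *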

Dealing with errors in PowerShell


When creating a script in any language, error handling is needed to ensure proper operation. Error handling is useful when debugging scripts and ensuring that they work properly, but it can also open up alternative methods of accomplishing a task.

How to do it...

Carry out the following steps:

  1. Create a simple function that uses no error handling:

    Function Multiply-Numbers
    {
        Param($FirstNum, $SecNum)
        
        Write-Host ($FirstNum * $SecNum)
    }
  2. Test the function using various arguments:
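    For example, the following calls exercise both the working and the failing path (the string "foo" stands in for any non-numeric input; note that the argument order matters, as "foo" * 5 would simply repeat the string):

    Multiply-Numbers 5 10       # Returns 50
    Multiply-Numbers 5 "foo"    # Fails with an unfriendly type-conversion error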

  3. Update the function using a Try/Catch block:

    Function Multiply-Numbers
    {
        Param($FirstNum, $SecNum)
        Try
        {
            Write-Host ($FirstNum * $SecNum)
        }
        Catch
        {
            Write-Host "Error in function, present two numbers to multiply"
        }
    }
  4. Test the Multiply-Numbers function using various arguments:
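    Running the same calls again shows that valid input still works, while invalid input now produces the friendly message from the Catch block:

    Multiply-Numbers 5 10       # Still returns 50
    Multiply-Numbers 5 "foo"    # Prints: Error in function, present two numbers to multiply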

  5. In the PowerShell console, execute a command to generate an error such as Get-Item foo.

  6. View the $Error variable to return the error code history.

How it works...

In the first step, we create a function that takes two numbers, multiplies them, and returns the result. As we see in the second step, the function operates normally as long as two numbers are supplied, but if anything other than a number is passed in, an unfriendly error is returned.

In the third step, our updated script uses a Try/Catch block to trap errors and return a friendlier message. The Try block attempts to perform the multiplication; when it fails for any reason, processing jumps to the Catch block instead. In this case, we return a task-specific error message; in other scenarios, we could initiate an alternative task or command based on the error.

The fifth and sixth steps generate an error in the PowerShell console, and then show the $Error variable. The $Error variable is a built-in array that automatically captures and stores errors as they happen. You can view the variable to report all errors listed, or you can use indexing such as $Error[0] (the most recent error) to return specific errors.
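As a quick sketch of these two steps (assuming no file named foo exists in the current directory):

    Get-Item foo                    # Generates a "Cannot find path" error
    $Error                          # Lists the errors recorded this session, newest first
    $Error[0].Exception.Message     # Returns just the message of the most recent error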

There's more...

  • Clearing error codes: By default, the $Error array retains a number of error codes. These errors are only removed from the array when it reaches its maximum size, or when the user session ends. It is possible to clear out the array before starting a task, so that you can review the $Error array afterward and know that every entry listed is relevant.

    $Error.Count
    $Error.Clear()
    $Error.Count

    This example starts by returning the number of items in the array. Then $Error.Clear() is called to empty the array. Lastly, the number of array items is returned to confirm that it has been cleared.

  • $ErrorActionPreference: In many programming/scripting languages, there are methods to change the default action when an error occurs. In VBScript, we had the option "On Error Resume Next", which told the script to continue as though no error had occurred. In PowerShell, we have the $ErrorActionPreference variable. There are four settings for this variable:

    • Stop: Whenever an error occurs, the script or process is stopped.

    • Continue: When an error occurs, the error is reported and the process continues. This is the default setting.

    • SilentlyContinue: When an error occurs, PowerShell will attempt to suppress the error and the process will continue. Not all errors will be suppressed.

    • Inquire: When an error occurs, PowerShell will prompt the operator to take the correct action.

To set your preference, simply set the variable to the desired string value as shown in the following code:

$ErrorActionPreference = "Stop" 
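The related -ErrorAction common parameter accepts the same values and overrides the session-wide preference for a single command:

    # Suppress the error for this command only
    Get-Item foo -ErrorAction SilentlyContinue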

Tuning PowerShell scripts for performance


In PowerShell, as with most things with computers, there is often more than one way to accomplish a task. Therefore, the question is not always how to accomplish a task, but how best to accomplish it. Often, the answer comes down to how fast a given method performs.

In this recipe, we will look at different methods to retrieve the local groups on a member server. The different methods will be benchmarked to determine the optimal method.

Getting ready

In this example, we will be listing the NT groups on the local computer. To do this, we will be querying the Win32_Group WMI class. This class, however, returns all local computer groups as well as all domain groups. If you have a domain environment with a large number of groups, this process can be time-consuming.

How to do it...

Carry out the following steps:

  1. Start by identifying different methods to list local groups on a computer:

    Get-WmiObject -Class Win32_Group | Where-Object Domain -eq $env:COMPUTERNAME 
    Get-WmiObject -Query "select * from Win32_Group where Domain='$env:ComputerName'"
  2. Benchmark the first task using Measure-Command:

  3. Benchmark the second task using Measure-Command:
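    Each benchmark simply wraps one of the commands from step 1 in Measure-Command; comparing the TotalMilliseconds property of the returned TimeSpan objects shows the difference:

    Measure-Command { Get-WmiObject -Class Win32_Group | Where-Object Domain -eq $env:COMPUTERNAME }
    Measure-Command { Get-WmiObject -Query "select * from Win32_Group where Domain='$env:ComputerName'" }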

How it works...

Both of these commands perform the same task, querying WMI for local groups on our server.

  • The first command retrieves all groups from WMI (local and domain), then filters based on the domain name attribute

  • The second command uses a query against WMI with the domain-name filter built in; WMI then returns only the matching group objects to PowerShell

In this situation, the first command took several minutes to complete, while the second command took only 79 milliseconds. Both commands return the same data, which suggests the second method is better suited to my current situation.

There's more...

Neither of these approaches is right or wrong; they simply differ in where the filtering takes place. In fact, the first method may be preferred depending on what else is being done. For instance, if I were doing a large amount of work with groups and group membership, both in the domain and on the local system, the first method may be preferred.

If the results of the first WMI command were saved to a variable prior to filtering, different filters could be applied afterward. That one object could be filtered multiple times to provide different information, instead of requiring multiple queries against WMI.
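A minimal sketch of that reuse pattern, querying WMI once and filtering the saved results multiple times:

    # One WMI query, multiple filters afterwards
    $allGroups = Get-WmiObject -Class Win32_Group
    $localGroups  = $allGroups | Where-Object Domain -eq $env:COMPUTERNAME
    $domainGroups = $allGroups | Where-Object Domain -ne $env:COMPUTERNAME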

Creating and using Cmdlets


In the past, each system or application would have its own set of tools used to manage it. Each tool had its own nomenclature, input, and output methods, and differing levels of manageability. In PowerShell, this all changes with Cmdlets.

PowerShell provides a consistent runtime, so toolsets can be created that function and operate in a uniform manner. Input parsing, error presentation, and output formatting are all managed via PowerShell. This means that the developer does not need to spend a large amount of time filtering input and guessing at the output the administrator needs.

Cmdlets give you the full power of custom C# code, without having to worry about input or output functions. Cmdlets also build on native .NET Framework classes, allowing for managed code and object-based output.

This section shows how to use Visual Studio to create a custom Cmdlet and then utilize the functions exposed in that Cmdlet. Specifically, we will be creating a Cmdlet that queries the performance counters on a system and returns how long the system has been online.

Getting ready

Unlike functions and modules, creating a Cmdlet requires specialized tools. The first item we need is Visual Studio. If you don't have Visual Studio currently, there are "express" versions available that provide a free, but limited, feature set. Alternatively, you can work from the command line if you are familiar with compiling .NET classes that way.

Additionally, you will need to download and install the Windows SDK. The SDK provides several system and .NET components necessary to create our Cmdlet.

How to do it...

Carry out the following steps:

  1. Open Visual Studio and create a new Class Library project.

  2. Add the references:

    • In Solution Explorer, right-click on References and select Add Reference. On the Browse tab, browse to C:\Program Files (x86)\Reference Assemblies\Microsoft\WindowsPowerShell\v3.0\ and select System.Management.Automation.dll.

    • In Solution Explorer, right-click on References and select Add Reference. On the .NET tab, select System.Configuration.Install.

    • Solution Explorer should now list both references under the project.

  3. Add Cmdlet code:

        using System;
        using System.Diagnostics;
        using System.Management.Automation;

        // The Cmdlet attribute defines the verb-noun pair: Get-Uptime
        [Cmdlet(VerbsCommon.Get, "Uptime")]
        public class GetUptimeCommand : Cmdlet
        {
            // ProcessRecord runs when the Cmdlet is called
            protected override void ProcessRecord()
            {
                // Read the "System Up Time" performance counter
                using (var uptime = new PerformanceCounter("System", "System Up Time"))
                {
                    // The first read of a counter always returns 0, so prime it
                    uptime.NextValue();
                    WriteObject(TimeSpan.FromSeconds(uptime.NextValue()));
                }
            }
        }
  4. Add specific items for creating a Cmdlet:

        using System.ComponentModel;
        using System.Management.Automation;

        // Installer class that lets the assembly be registered as a snap-in;
        // RunInstaller(true) marks it for installation tools such as InstallUtil
        [RunInstaller(true)]
        public class GetUptimePSSnapIn : PSSnapIn
        {
            public GetUptimePSSnapIn()
                : base()
            {
            }
            // Name used to identify the snap-in
            public override string Name
            {
                get { return "GetUptimePSSnapIn"; }
            }
            public override string Vendor
            {
                get { return "Ed"; }
            }
            public override string Description
            {
                get { return "Returns the uptime of the system"; }
            }
            public override string VendorResource
            {
                get { return "GetUptimePSSnapIn,Ed"; }
            }
        }
  5. Compile the project.

    • On the Menu bar, select Build | Build GetUptime

  6. If the folder for the module doesn't exist yet, create the folder.

    $modulePath = "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\GetUptime"
    if(!(Test-Path $modulePath))
    {
        New-Item -Path $modulePath -ItemType Directory
    }
  7. Copy GetUptime.dll from the output of Visual Studio to the new module folder.

    $modulePath = "$env:USERPROFILE\Documents\WindowsPowerShell\Modules\GetUptime"
    Copy-Item -Path GetUptime.dll -Destination $modulePath
  8. In a PowerShell console, execute Get-Module -ListAvailable to list all the available modules:

  9. Use the Cmdlet by calling the included commands:
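    Assuming the compile and copy succeeded, loading the module and running the new Cmdlet looks like the following (the TimeSpan shown is illustrative):

    Import-Module GetUptime
    Get-Uptime    # Returns a TimeSpan, for example 12.04:21:15.1234567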

How it works...

In the first step, we are creating a Visual Studio project for a class library. In this instance, I used Visual C#, due both to personal preference and the fact that there is more information available for creating Cmdlets with C#. Visual Basic could have been used as well.

Note

I configured the Visual Studio session as a .NET framework 2.0 project. This could have been 3.0, 3.5, or 4.0 instead.

In the second step, we add the references necessary to create and install our Cmdlet. The first reference, System.Management.Automation.dll, loads the components needed to tag this project as a Cmdlet. The second reference, System.Configuration.Install, loads the components necessary to install the Cmdlet on a system.

In the third step, we add the code for our Cmdlet. The code section can be broken into four sections: class attribute, class, ProcessRecord, and C# code.

  • The Cmdlet code begins with the line [Cmdlet(VerbsCommon.Get, "Uptime")], which is an attribute that describes the class and what it does. In this case, it defines the class as a PowerShell Cmdlet with a verb-noun pair of Get-Uptime.

  • The GetUptimeCommand class is a standard C# class and inherits from the Cmdlet class.

  • ProcessRecord is the method that is executed when the Cmdlet is called. There are also optional BeginProcessing and EndProcessing methods that can be added to provide build-up and tear-down logic. The build-up and tear-down can be used to load information before processing, and to clear out variables and other objects when processing is done.

  • The body is standard C# code and can contain almost anything that would normally be included in a class project.

In the fourth step, we create the Cmdlet installer, named GetUptimePSSnapIn. The installer is a fairly simple class, similar to the Cmdlet class, which inherits from the PSSnapIn class and contains overrides that return information about the Cmdlet. In many scenarios, this section can be copied and pasted into new projects and simply updated to reflect the new Cmdlet name.

In the fifth step, we compile the project. It is important to review the output from Visual Studio at this point to ensure that no errors are reported. Any errors shown here may prevent the project from compiling correctly and functioning.

Next, we create a folder to hold the compiled Cmdlet. This process is the same as we performed in the Creating and using modules recipe.

Lastly, we execute our commands to confirm the module loaded properly.

There's more...

Cmdlet naming convention: Cmdlets are traditionally named in a verb-noun pair. The verb describes the action, such as get, set, or measure. The noun describes the object the action is being performed on or against. It is best practice to build functions and Cmdlets using this same naming convention for ease of use.

For more information about which verbs are available and when they should be used, run Get-Verb from within PowerShell.


Key benefits

  • Extend the capabilities of your Windows environment
  • Improve process reliability by using well-defined PowerShell scripts
  • Full of examples, scripts, and real-world best practices

Description

Automating server tasks allows administrators to repeatedly perform the same, or similar, tasks over and over again. With PowerShell scripts, you can automate server tasks and reduce manual input, allowing you to focus on more important tasks.

Windows Server 2012 Automation with PowerShell Cookbook shows several ways for a Windows administrator to automate and streamline his/her job. Learn how to automate server tasks to ease your day-to-day operations, generate performance and configuration reports, and troubleshoot and resolve critical problems.

The book introduces you to the advantages of using Windows Server 2012 and PowerShell. Each recipe is a building block that can easily be combined with others to produce larger and more useful scripts to automate your systems. The recipes are packed with examples and real-world experience to make the job of managing and administrating Windows servers easier.

The book begins with the automation of common Windows networking components such as AD, DHCP, DNS, and PKI, managing Hyper-V, and backing up the server environment. By the end of the book you will be able to use PowerShell scripts to automate tasks such as performance monitoring, reporting, analyzing the environment to match best practices, and troubleshooting.

What you will learn

  • Streamline routine administration processes
  • Automate the implementation of entire AD infrastructures
  • Generate automatic reports that highlight unexpected changes in your environment
  • Monitor performance and report on system utilization in detailed graphs and analysis
  • Create and manage a reliable and redundant Hyper-V environment
  • Utilize the Best Practices Analyzer from Microsoft to ensure your environment is configured optimally
  • Manage the patch level of your enterprise
  • Utilize multiple protocols to share information in a heterogeneous environment


Table of Contents

  • Windows Server 2012 Automation with PowerShell Cookbook
  • Credits
  • About the Author
  • About the Reviewers
  • www.PacktPub.com
  • Preface
  • Understanding PowerShell Scripting
  • Managing Windows Network Services with PowerShell
  • Managing IIS with PowerShell
  • Managing Hyper-V with PowerShell
  • Managing Storage with PowerShell
  • Managing Network Shares with PowerShell
  • Managing Windows Updates with PowerShell
  • Managing Printers with PowerShell
  • Troubleshooting Servers with PowerShell
  • Managing Performance with PowerShell
  • Inventorying Servers with PowerShell
  • Server Backup
  • Index

