UGTS Document #47 - Last Modified: 8/29/2015 3:23 PM
Powershell Best Practices
If you write any great deal of code in Powershell, you'll soon discover a
set of best practices:
- Sharp Tools - As with any programming language, you
should always use the most recent version of the tools available for
that language. The most recent production release of Powershell is
2.0, which includes a basic development environment, the ISE, for stepping
through and debugging Powershell code.
Powershell 3.0 is currently in
CTP1 as of 9/19/11, and will be released at the same time as Windows 8,
with a download for Windows 7 and Server 2008 R2 likely to follow soon
after. Backporting to Vista is likely, but Powershell 3.0 on Windows XP /
Server 2003 might not be supported. 3.0 adds Intellisense and many
other improvements to the ISE, along with a large number of new modules
and some language simplifications.
- Strict Mode - You can't require that variables in
Powershell be declared before using them, but you can require that they
be initialized before use, and you should do so to catch typos.
Set-StrictMode -Version 2.0 will cause the
engine to treat references to uninitialized variables as errors.
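For example, with strict mode on, a mistyped variable name throws an error immediately instead of silently evaluating to $null:

```powershell
Set-StrictMode -Version 2.0

$total = 0
foreach ($i in 1..5) {
    $total += $i
}

# A typo such as $totl would now throw an error about an unset variable,
# instead of quietly returning $null:
# Write-Output $totl

Write-Output $total    # prints 15
```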
- Explicit Type - If a variable is a simple type, such
as a string, date, or integer, you should explicitly declare the type of
the variable so that you don't get unexpected results during assignment.
Use [string]$s = "7" instead of
$s = "7", and then you won't be scratching
your head when a subsequent assignment puts a decimal value into the
variable when you thought that it was a string.
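A short sketch of the difference:

```powershell
# Without a type constraint, a later assignment silently changes the type:
$s = "7"
$s = 3.5          # $s is now a Double, not a string

# With an explicit type, every assignment is coerced back to a string:
[string]$t = "7"
$t = 3.5          # $t is now the string "3.5"
Write-Output $t.GetType().Name    # prints String
```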
- Stop and Catch Errors - The default behavior of
Powershell out of the box is to silently continue after any and all
errors, like the On Error Resume Next
statement from VB6 days. In today's version of Powershell, where we
have try/catch/finally exception handling, this behavior is unnecessary,
and it can backfire if an error arises that should stop the script
(but the script continues anyway and does unintended damage to the
system). Always set $ErrorActionPreference
= 'Stop' at the beginning of your script, and use exception
handling to deal with errors or report them to the admin.
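A minimal sketch of this pattern (the file path is just an example):

```powershell
# Treat every error as terminating so nothing slips by silently.
$ErrorActionPreference = 'Stop'

try {
    # Any error here now throws instead of letting the script continue:
    Get-Item 'C:\DoesNotExist.txt'
}
catch {
    # Handle or report the failure instead of continuing blindly.
    Write-Host "Caught error: $($_.Exception.Message)"
}
finally {
    Write-Host "Cleanup always runs here."
}
```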
- Named Parameters - One of the strengths of
powershell is the ability to pass parameters by name. When you
call a function or cmdlet by passing in all the parameters by name
rather than by position, you can modify the function to add as many
extra parameters as you need later, without necessarily breaking the
clients, or making it hard for the clients to call the function by
requiring a lot of parameters in an ordered list. Powershell
functions should take at most
one parameter by position (that is, without specifying the name), and
this parameter should be the primary/pipeline input. Furthermore,
when you call a function you've written, you should pass all the
parameters by name, because you never know when you might need to
further customize the function for other clients. This both
improves readability and future-proofs your code. A third reason
to pass parameters by name is that it reminds you not to pass an
argument list with parentheses and commas, which is not how Powershell
functions are called.
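A sketch of these conventions, using a hypothetical function:

```powershell
function Copy-LogFile {
    param(
        # The primary/pipeline input is the only positional parameter:
        [Parameter(Position = 0)] [string]$Path,
        [string]$Destination,
        [switch]$Overwrite
    )
    Copy-Item -Path $Path -Destination $Destination -Force:$Overwrite
}

# Correct: arguments separated by spaces, everything but $Path named:
# Copy-LogFile 'C:\Logs\app.log' -Destination 'D:\Backup' -Overwrite

# Wrong: this passes ONE positional argument (an array of two strings),
# not two separate parameters:
# Copy-LogFile('C:\Logs\app.log', 'D:\Backup')
```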
As you begin to use Powershell a lot, you'll also find that it has some
significant limitations:
- Admins Only - Powershell was not designed to be
user-friendly. It was designed to be admin-friendly.
Powershell runs in a console window, and not as a GUI application.
While you can make Powershell scripts display windows (such as the open
file dialog), it is not easy to do, and Powershell was not built for
that. Powershell does not support the event-driven programming
model found in Visual Studio that makes it easy to write a GUI
application. Furthermore, the only way to hide the console window
is by running the powershell process in a different login session than
the interactive one. Powershell scripts are for interactive
sessions with admins, or for back-end automated processes. They
don't make sense for anything else.
- Slow and Heavyweight - Powershell has power, but it
comes at the expense of speed and memory. Whereas a VBScript can
launch and run with overhead of about 0.1 seconds or less, just loading
the Powershell window usually takes over 1.5 seconds all by itself.
Starting a Powershell script is not something that you can rely on for
split-second timing. However, once you have the console open, it
can be reasonably fast - about the same speed as VBScript.
Powershell 3.0 is going to offer some speed
improvements, making it up to 6x faster - possibly loading in about a
third of a second. This would make it fast enough for most uses,
but still noticeably slower than VBScript.
Powershell is also
quite a memory hog - the console window alone uses about 30MB of memory,
whereas a VBScript has only 1-2MB of memory use in most cases.
VBScript is lighter and faster than Powershell by a great margin.
If you Google search for optimizing or stress testing Powershell,
you'll find close to zero relevant results - no one bothers because
everyone knows it's a hog and that it was not made to be small and fast.
- No Classes - Powershell is a great functional
language, but a terrible object-oriented language. You cannot
define your own classes except by dynamically compiling .NET code
with the Add-Type cmdlet. The only way to have privately defined
members and namespaces is to use modules, and they must be included one
by one, with only one module allowed per file. There is likewise no
inheritance or interface support - instead you dynamically test objects
for the existence of members by name to determine whether an object
supports the operations you need.
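A sketch of both workarounds - compiling a small class with Add-Type, then
duck-typing a member check by name (the class and member names are
illustrative):

```powershell
# Define a class by compiling C# source on the fly:
Add-Type -TypeDefinition @"
public class Point {
    public int X;
    public int Y;
    public int SumXY() { return X + Y; }
}
"@

$p = New-Object Point
$p.X = 3; $p.Y = 4
Write-Output $p.SumXY()   # prints 7

# With no interfaces, test for a member by name before relying on it:
if ($p | Get-Member -Name SumXY) {
    Write-Output 'object supports SumXY'
}
```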
You may also discover some pitfalls with scripting in Powershell:
- Don't Touch the Default Profile - A lot of
Microsoft and 3rd party products are starting to use Powershell for
installation routines. If you modify the default profile on a
machine (for example, by setting strict mode), you might inadvertently
break these installers. Leave the default profile.ps1 file alone,
or at least be very careful about the changes you make there.
Furthermore, as you script in Powershell, nearly all of your development
work will fall into one of three scenarios:
- Roaming Admin - If you are a consultant,
contractor, or general fix-it guy, and you're working on systems owned
by another organization, and you don't get to control what gets
installed on these machines, then you have to use what's 'in the box' as
Ed Wilson says. In most cases, it's not feasible or desired by the
client for you to customize their machine to the way you think it ought
to be.
- Dedicated Admin - If it's your job to administer a
group of networked servers where you work, then you do have the power to
control what gets installed on the machines, and to customize them
within reason, and you can always assume that the computers are either
on the network or offline. However, you probably won't work there
forever, and generally it's a good idea not to make things too complex
for the next guy to work there or the other admins who have to run
things while you're on vacation.
- Personal Use - If you're a software developer and
you run Powershell on your personal machine to streamline your heavy
duty tasks, you have full control over what's on the system and how it
runs, but you can't assume that you have a connection to your network or
even to the internet. However, even in this case you shouldn't
make things too complex, because machines crash, and if you have to do a
rebuild of yours, you don't want to have to remember a whole lot about
how to set up your environment to get it back to being operational.
These hazards and patterns of use lead to a second set of best practices:
- Powershell = Do-it-All - Powershell is a do-it-all
language. It doesn't need to be optimized to be fast or
lightweight, but it does need to be interactive and easy to use for you
as the admin to be able to do just about anything and tie it all
together easily. Being able to do-it-all on a moment's notice
requires having a cmdlet that you can call in one line that will load
all your modules and set up your execution environment. Keep the
modules and cmdlets that your do-it-all initializer loads on the local
machine if this is personal use, or on a central fileserver on the LAN
if this is for company use (optionally caching the files locally if the
connection to the central fileserver is slow). Have the
initializer unload everything before reloading it, so that you can be
sure that you're always working with the most recent code that you've
written every time you run your scripts.
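One way to sketch such an initializer (the module names and fileserver path
are hypothetical):

```powershell
function Initialize-AdminSession {
    # Local path for personal use, or a UNC path on the company fileserver:
    $moduleRoot = '\\fileserver\PSModules'

    foreach ($name in 'MyNetworkTools', 'MyBackupTools') {
        # Unload first so you always pick up the latest code you've written:
        if (Get-Module -Name $name) { Remove-Module -Name $name }
        Import-Module (Join-Path $moduleRoot "$name\$name.psm1")
    }
}
```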
- Avoid Optimization - Optimizing Powershell except
for the most badly performing code is usually a waste of your time.
If optimization is critical, consider putting the performance critical
code in a .NET assembly and loading that from Powershell.
- Report All Warnings and Errors - Scripts should log
everything that they do to text files on the local machine, and any
crash or severe warning should generate an email to the admin if
possible, optionally stopping the script. Take the time to make
your error reporting as thorough, verbose, reliable, and convenient as
possible. Powershell is a language that is used to tie everything
together, and when you tie everything together, you multiply the number
of things that can go wrong.
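A minimal sketch of this kind of logging and alerting (the mail server and
addresses are placeholders):

```powershell
$ErrorActionPreference = 'Stop'
$log = "$env:TEMP\nightly-job.log"

try {
    "$(Get-Date) starting job" | Out-File -FilePath $log -Append
    # ... the real work goes here ...
    "$(Get-Date) finished OK" | Out-File -FilePath $log -Append
}
catch {
    # Log the failure, then email the admin before stopping.
    "$(Get-Date) FAILED: $($_.Exception.Message)" | Out-File -FilePath $log -Append
    Send-MailMessage -SmtpServer 'mail.example.com' -From 'jobs@example.com' `
        -To 'admin@example.com' -Subject 'Nightly job failed' `
        -Body ($_ | Out-String)
}
```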