Visual Studio 2010 Best Practices

By Peter Ritchie
About this book
When you are developing on the Microsoft platform, Visual Studio 2010 offers you a range of powerful tools and makes the whole process easier and faster. If you think that after learning it you can sit back and relax, you could not be further from the truth. To beat the crowd, you need to be better than others and learn the tips and tricks that others don't know yet. This book is a compilation of best practices for programming with Visual Studio. Visual Studio 2010 Best Practices takes you through the practices you need to master programming with the .NET Framework. The book details several practices involving many aspects of software development with Visual Studio, including debugging, exception handling, and design. It also covers building and maintaining a recommended practices library and the criteria by which to document recommended practices. The book begins with practices on source code control (SCC), describing different types of SCC and how to choose between them in different scenarios. Advanced C# syntax is then covered with practices on generics, iterator methods, lambdas, and closures. The next set of practices focuses on deployment, including creating MSI deployments with Windows Installer XML (WiX) for Windows applications and services. The book then takes you through practices for developing with WCF and web services. The software development lifecycle is completed with practices on testing, such as project structure, naming, and the different types of automated tests. Topics like test coverage, continuous testing and deployment, and mocking are also included. Although this book uses Visual Studio as its example, you can apply these practices with any IDE.
Publication date: August 2012
Publisher: Packt
Pages: 280
ISBN: 9781849687164

 

Chapter 1. Working with Best Practices

In any given software developer's career, there are many different things they need to create. Of the software they need to create, given time constraints and resources, it's almost impossible for them to perform the research involved to produce everything correctly from scratch.

There are all sorts of barriers and roadblocks to researching how to correctly write this bit of code or that bit of code, or how to use a particular technology or interface. Documentation may be lacking, missing, or completely wrong. Documentation is like software: sometimes it has bugs. Sometimes the act of writing software becomes a unit test of the documentation. This, of course, provides no value to most software development projects. It's great when the documentation is correct, but when it's not, it can be devastating to a software project.

Even with correct documentation, sometimes we don't have the time to read all of the documentation and become total experts in some technology or API. We just need a subset of the API to do what we need done and that's all.

 

Recommended practices


I call them "recommended practices" instead of "best practices." The superlative "best" implies some degree of completeness. In almost all circumstances, the completeness of these practices has a shelf-life. Some best practices have a very small shelf-life due to the degree to which technology and our knowledge of it changes.

Recommended practices detail working with several different technologies with a finite set of knowledge. Knowledge of each technology will increase in the future, and each technology will evolve in the future. Thus, what may be a best practice today may be out of date, obsolete, and possibly even deprecated sometime in the future.

One of the problems I've encountered with "best practices" is the inferred gospel people assume from best. They see "best" and assume that means "best always and forever." In software, that's rarely the case. To a certain extent, the Internet hasn't helped matters either. Blogs, articles, answers to questions, and so on, are usually on the Internet forever. If someone blogs about a "best practice" in 2002 it may very well have been the recommended approach when it was posted, but may be the opposite now. Just because a practice works doesn't make it a best practice.

Sometimes the mere source of a process, procedure, or coding recipe leads the reader to infer "best practice." This is probably one of the most disturbing trends in certain software communities. While a source can be deemed reliable, not everything that a source presents was ever intended to be a "best practice"; much of it is documentation at best. Be wary of accepting code from reputable sources as "best practices." In fact, read on to get some ideas on how to either make that code one of your recommended practices, or refute it as not being a best practice at all.

Further, some industries or organizations define business practices. They're defined as the one and only practice, and are sometimes referred to as "best" because there is nothing to compare them with. I would question the use of "best" in such a way, because it implies comparison with at least one other practice that was deemed insufficient in some way. To that end, in software practices, just because there is only one known way to do something doesn't mean it should be coined a "best practice."

There have been many other people who have questioned the "best" in "best practice." Take Scott Ambler, for example. Scott is a leader in the agile software development community. He espouses "contextual practices," since any given "best practice" is limited to at least one context. As we'll see shortly, a "best practice" may be good in one context but bad in another.

"Best" is a judgment. While the reader of the word "best" judges a practice as best through acceptance, in the general case, most "best practices" haven't really been judged. For a practice to be best the practice needs to be vetted, and the requisite work involved in proving how and why the practice is best is never done. It's this very caveat that make people like Eugene Bardach question "best practices" as a general concept. In his article The Problem with "Best Practice", Bardach suggests terms like "good" or "smart." But that's really the same problem. Who vets "good" or "smart?" At best they could be described as "known practices."

Without vetting, a practice is often taken at face value by the reader, based either solely on the fact that "best" was used, or on the source of the practice. This is why people like Ambler and Bardach are beginning to openly question the safety of calling something a "best practice."

Most practices are simply a series of steps to perform a certain action. Most of the time, context is either implied or the practice is completely devoid of context. It leaves the reader with the sense that the context is anywhere, which is dangerous.

 

Intransitive "best" practices


Intransitive relationships in math can be denoted by A = B, B = C, therefore C may or may not equal A. Moving from one framework to another, or one language to another, may mean some "best practices" are not transitive (that is, they can't be moved from one context to another and be expected to remain true).

In the early days of C#, for example, it was assumed that the gospel of the double-checked locking pattern was transitive from C++ to C# (at least by a few people). The double-checked locking pattern is a pattern by which, in order to avoid slow locking operations, a variable is checked for null prior to locking as an optimization. This variable checking is done in order to implement lazy-initialization (for example, in a singleton). For example (as stolen from Jon Skeet, comments mine):

public sealed class Singleton
{
    static Singleton instance = null;
    static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            if (instance == null) // first check
            {
                lock (padlock)
                {
                    if (instance == null) // double-checked
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}

Note

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

As you can see from the code, it's apparent where the name "double-checked" came from. The assumption with this code is that a lock is only needed if the one and only initialization of instance still needs to be performed. It's assumed that once it's initialized (and thus, no longer null) there's no need to lock because it never changes again. All fairly logical assumptions, unless we take into account the memory model of .NET (1.x, at the time). Neither the compiler nor the processor was required to perform the previous instructions in the order they were written, and the processor wasn't guaranteed not to be using a cached value (more of an issue on the Itanium than the x86, but who knows what processor will be used when we run our code). I won't get into the nitty-gritty details, but suffice it to say, an established "best" practice in C++ was a "worst" practice in C#.

Incidentally, the double-checked locking pattern was never really a good idea in C++. It was proved flawed in 2004 by Scott Meyers and Andrei Alexandrescu, for reasons very similar to those in C#. This goes to show how some practices are not only intransitive, but become "worst" practices with further research or knowledge.

In .NET 2.0 (as well as in later versions of Java) the memory model was changed to actually make double-checked locking work (at least in a non-debug build in .NET). You could actually write it in such a way as to get it to work in both .NET 1.x and .NET 2.0+, but by that point the damage was done and double-checked locking had become evil. It certainly wasn't the quickest code anyway, but I digress. If you are interested in more of the details, I'd recommend Jon Skeet's C# in Depth, where he details the use of the static initialization feature of C# to implement singletons and avoid the need for double-checked locking altogether.
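For reference, the following is a minimal sketch of that static-initialization approach; the member names are illustrative rather than quoted from the book:

public sealed class Singleton
{
    // The CLR guarantees that the static field initializer runs exactly
    // once, in a thread-safe way, so no explicit locking is needed.
    private static readonly Singleton instance = new Singleton();

    // An explicit static constructor prevents the compiler from marking
    // the type beforefieldinit, deferring initialization until the type
    // is first used.
    static Singleton()
    {
    }

    private Singleton()
    {
    }

    public static Singleton Instance
    {
        get { return instance; }
    }
}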

 

Benefits of using practices


There is no point in using practices if they don't add any value. It's important to understand at least some of the benefits that can be obtained from using practices. Let's have a look at some of those benefits.

Avoiding pragmatic re-use

We can sometimes find good documentation. It describes the API or technology correctly and includes sample code. Sample code helps us understand the API as well as the concepts. I don't know about you, but I think in code; sample code is often easier for me to understand than prose. But sample code is a double-edged sword.

One drawback of sample code is that it may have the effect you're looking for, so you take it at face value and re-use it in your code. This is a form of pragmatic re-use.

Pragmatic re-use is when a developer re-uses code in a way which the original code was not intended to be re-used. This is quite common, and one of the most common forms of pragmatic re-use is copying and pasting code, such as copying and pasting the sample code as shown earlier.

In C#, classes are open for derivation unless they are modified with the sealed keyword to prevent inheritance. The lack of modification with sealed doesn't necessarily imply that the class is intended to be derived from. Deriving from a class like this is another form of pragmatic re-use because it's being re-used where re-use was not expected.
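As a minimal, hypothetical illustration (the class names below are mine, not from any particular library), the sealed keyword closes a class to derivation, while an unsealed class merely permits it:

// Sealed: the author has explicitly closed this class to inheritance.
public sealed class TaxCalculator
{
    public decimal Calculate(decimal amount)
    {
        return amount * 0.13m;
    }
}

// This would not compile; you cannot derive from a sealed type:
// public class DiscountedTaxCalculator : TaxCalculator { }

// Unsealed: derivation compiles, but the absence of sealed doesn't mean
// the class was designed to be derived from. Deriving from it anyway is
// a form of pragmatic re-use.
public class InvoiceFormatter
{
    public virtual string Format(decimal total)
    {
        return total.ToString("C");
    }
}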

There are many motivators for pragmatic re-use. When a developer has neither the time nor the resources to learn code to perform a certain task, they often resort to a form of pragmatic re-use such as copy and paste.

Reducing technical debt

Technical debt is a fairly well understood concept, but it bears repeating as one of the potential motivators for best practices. Technical debt refers to the negative consequences of code, design, or architecture. There are all sorts of negative consequences that can arise from code. One common example is code with no test coverage; the negative consequence is the instability introduced by any change to that code.

Pragmatic re-use has the side-effect of taking on technical debt. At the very least, the code is doing something in a way it was never intended to. This means it was not designed to do that, and therefore could never have been tested to work correctly in that scenario. The most common impetus for pragmatic re-use is that the developer either didn't understand how to do it himself, or didn't understand the original code. This means there is code in the code base that potentially no one understands: no one understands why it works, how to test it correctly, what to do if something goes wrong with it, or how to change it correctly in response to changing requirements.

To be clear, technical debt isn't always bad. A team can take on technical debt for a variety of reasons. The important part is that they know the consequences and are willing to live with them, maybe for a short period of time, to get some sort of benefit. This benefit could be time-to-market, a proof-of-concept (maybe directly related to funding), meeting a deadline, staying within budget, and so on.

There are all sorts of great sources of information on managing technical debt, so we won't get into it beyond its role as an impetus for using best practices. If you're not clear on technical debt, I recommend learning more about it as an exercise. Martin Fowler's bliki (http://martinfowler.com/bliki/TechnicalDebt.html) or Steve McConnell's blog (http://blogs.construx.com/blogs/stevemcc/archive/2007/11/01/technical-debt-2.aspx) would be a good start.

 

Not invented here syndrome


Not invented here (NIH) syndrome has become much better understood over the past decade or so. There was a time when there were only a handful of developers in the world developing software. Most development teams needed to figure out how to write basic data structures such as linked lists, basic sorting algorithms such as quick sort, or how to perform spell checking. This knowledge wasn't generally available, and componentization of software had yet to occur. The value of their project was overshadowed by the sheer complexity of the infrastructure around producing effective software.

Shoot ahead in time slightly, into an era of componentized software. Components, libraries, APIs, and frameworks began to appear that took the infrastructure-like aspects of a software project and made them sharable components that anyone, within reason, could simply drop into their project and start using. Presumably, the time required to understand and learn the API would be less than the time required to write that component from scratch.

To a select few people this wasn't the case. Their ability to write software was at such a high level that, for them, understanding and accepting an API was (they thought) a greater friction than writing their own. Thus, the NIH syndrome began. Because a certain technology, library, API, or framework wasn't invented by a member of the development team, and therefore wasn't under their entire control, it needed to be written from scratch.

In the early days, this wasn't so bad. Writing a linked list implementation was indeed quicker than trying to download, install, and understand someone else's linked list implementation (for most people). But these libraries grew to millions of lines of code and hundreds of person-hours worth of work, and NIH continued. Language frameworks and runtimes became more popular. C++'s STL, Java, .NET, and so on included standard algorithms (frameworks) and abstractions to interface with underlying operating systems (runtimes), so it became harder to ignore these libraries and write everything from scratch. But the sheer magnitude of the detail and complexity of these libraries was difficult to grasp from the documentation alone. In order to better utilize these libraries and frameworks, information on how to use them began to be shared. Things like best practices made it easier for teams to accept third-party libraries and frameworks. Lessons learned were being communicated within the community as best practices. "I spent 3 days reading documentation and tweaking code to perform operation Y, here's how I did it" became common.

Practices are a form of componentization. We don't actually get the component, but we get instructions on where, why, and how to make our own component. It can help us keep our software structured and componentized.

 

Beyond practices


Some methodologies from other disciplines have recently begun to be re-used in the software industry. Some of that has come from lean manufacturing, such as kaizen, and some from the martial arts, such as katas. Let's have a brief look at using these two methodologies.

Using katas

In the martial arts, students perform what are known as katas. These are essentially choreographed movements that the student is to master. Students master these katas through repetition, or practice. Depending on the type of martial art, students advance through dan grades based on how well they can perform certain katas.

The principle behind kata is muscle memory. As students become proficient in each kata, the movements become second nature to them and can be performed without thought. The idea is that in battle the muscle memory gained from the katas becomes reflexive, making the student more successful.

In the software development community, kata-like sessions have become common. Developers take on specific tasks to be done in software. One motivation is to learn how to do that task; another is to repeat it as a way of remembering how to implement that specific algorithm or technique. The theory is that once you've done it at least once, you've got "muscle memory" for that particular task. At worst, you now have experience in that particular task.

"Kata" suffers slightly from the same syndrome as "best practice", in that "kata" isn't necessarily the most appropriate term for what is described previously. Getting better at a practice through repeated implementation results in working code. Kata is repeating movement not necessarily so the movement will be repeated in combat/competition, but so that your mind and body have experience with many moves that it will better react when needed. Software katas could be better viewed as kumites ("sparring" with code resulting in specific outcomes) or kihons (performing atomic movements like punches or kicks). But this is what coding katas have come to signify based on a rudimentary understanding of "kata" and the coding exercises being applied.

At one level, you can view practices as katas. You can implement them as is, repeating them to improve proficiency and gain experience the more you practice. At another level, you can consider these practices as a part, or the start, of your library of practices.

Reaching kaizen

In the past few years, much of the process improvement in the software industry has been taken from Japanese business and social practices. Much like kata, kaizen is another word adopted in some circles of software development. It was borrowed from lean manufacturing principles, which were originally attributed to Toyota. Kaizen, in Japanese, means "improvement."

This book does not attempt to document a series of recipes, but rather a series of starting points for improvement. Each practice is simply one way of eliminating waste. At the shallowest level, each practice eliminates the waste of trying to find a way to produce the same results as the detailed practice. In the spirit of kaizen, think of each practice as a starting point: a starting point not only to improve yourself and your knowledge, but to improve the practice as well.

Once you're comfortable with practices and have a few under your belt, you should be able to start recognizing practices in some of the libraries or frameworks you're using or developing. If you're on a development team that has its own framework or library, consider sharing what you've learned about the library in a series of recommended practices.

How would you start with something like this? Well, recommended practices are based on people's experience, so start with your experiences with a given framework or library. If you've got some experience with a given library, you've probably noticed certain things you've had to do repeatedly. As you've repeated certain tasks, you've probably built up ways of doing them that are more correct than others, and those ways have evolved and improved over time. Start by documenting what you've learned and how it has resulted in something you'd be willing to recommend to someone else as a way of accomplishing a certain task.

It's one thing to accept practices to allow you to focus on the value-add of the project you're working on. It's another to build on that knowledge and build libraries of practices: improving, organizing, and potentially sharing practices.

Aspects of a practice

At one level, a practice can simply be a recipe. This is often acceptable: "just do it" this way. Sometimes it might not be obvious why a practice is implemented in a certain way. Including the motivators or impetus behind why the practice is the way it is can be helpful, not only to people learning the practice, but also to people already skilled in that area of technology. People with skills can then open a dialog to provide feedback and begin collaborating on evolving practices.

Okay, but really, what is a "best practice?" Wikipedia defines it as:

"...a method or technique that has consistently shown results superior to those achieved with other means...".

The only flaw in this definition is that when there's only one way to achieve certain results, that way can't be "best" without being compared to some other means. "...method or technique" leaves it pretty open to interpretation as to whether something could be construed as a best practice. If we take these basic truths and expand on them, we can derive a way to communicate recommended practices.

The technique or method part is pretty straightforward (although ambiguous to a certain degree). It really just distills down to a procedure or a list of steps. This is great if we want to perform or implement the practice, but what do we need in order to communicate the procedure, intent, impetus, and context?

Evaluating practices

I could have easily jumped into using practices first, but one of the points I'm trying to get across here is the contextual nature of practices, whether they're referred to as "best practices" or not. I think it's important to put some thought into the use of a practice before using it. So, let's look at evaluation first.

Once we define a practice we need a way for others to evaluate it. In order to evaluate practices, an ability to browse or discover them is needed.

In order for someone else to evaluate one of our practices, we need to provide the expected context. This will allow them to compare their context with the expected context to decide if the practice is even applicable.

In order for us to evaluate the applicability of another practice, we need to know our own context. This is an important point that almost everyone misses when accepting "best practices." The "best" implies there's no need for evaluation; it's "best," right? Once you can define what your context is, you can better evaluate whether the practice is right for you, whether it can still be used with a little evolution, or whether it simply isn't right for you.

Documenting practices

Documenting a practice is an attempt at communicating that practice. To a certain degree, written or diagrammatic documentation suffers from an impedance mismatch. We simply don't have the same flexibility in those types of communication that we do in face-to-face or spoken communication. The practice isn't just about the steps involved or the required result, it's about the context in which it should be used.

I have yet to find a "standard" way of documenting practices. We can pull from some of what we've learned from patterns and devise a more acceptable way of communicating practices. We must first start with the context in which the practice is intended to be used, or the context in which the required outcome applies.

Scott Ambler provides some criteria around team context that can help a team evaluate or define their own context. These factors are part of what Ambler calls the Agile Scaling Model (ASM). The model is clearly agile-slanted, but many of the factors apply to any team. These factors are discussed next.

Geographic distribution

This involves the distribution of the team. Is the team co-located, or are they distributed over some geographic area? This distribution could be as small as cubes separated by other teams, or team members separated by floors, in different buildings, or in different cities or countries and time zones. A practice that assumes a co-located team might be more difficult to implement with a globally-distributed team. Scrum stand-ups are an example. Scrum stand-ups are very short meetings, held usually once a day, where everyone on the team participates to communicate what they have worked on, what they are working on, and any roadblocks. Clearly, it would be hard to do a "stand-up" with a team geographically distributed across ten time zones.

Team size

Team size is fairly obvious and can be related to geographic distribution (smaller teams are less likely to be very geographically distributed). Although different from geographic distribution, similar contextual issues arise.

Regulatory compliance

Many companies are burdened with complying with regulatory mandates. Public companies in the United States, for example, need to abide by Sarbanes-Oxley. This basically defines reporting, auditing, and responsibilities an organization must implement. Applicability of practices involving audit or reporting of data, transactions, customer information, and so on, may be impacted by such regulations.

Domain complexity

Domain complexity involves the complexity of the problem the software is trying to solve. If the problem domain is simple, certain best practices might not be applicable. A calculator application, for example, may not need to employ domain-driven design (DDD) because the overhead of managing domain complexity may be more complex than the domain itself. An insurance domain, on the other hand, may be so complex that using DDD to partition the domain complexity makes it easier to manage and understand.

Organizational distribution

Similar to team distribution, organizational distribution relates to the geographic distribution of the entire organization. Your team may be co-located but the actual organization may be global. An example of where a globally-distributed company may impact the viability of a practice could be the location of the IT department. If a particular practice involves drastically changing or adding to IT infrastructure, the friction or push back to implementing this practice may outweigh the benefit.

Technical complexity

Technical complexity can be related to domain complexity, but really involves the actual technical implementation of the system. A simple domain could be implemented in a distributed environment using multiple subsystems and systems, some of which could be legacy systems. While the domain may be simple, the technical complexity is high. For example, practices involving managing a legacy system or code would not be applicable in a greenfield project where there are not yet any legacy systems or code.

Organizational complexity

Organizational complexity can be related to organizational distribution but is generally independent. It's independent for our purposes of evaluating a practice. For example, in a complex organization with double-digit management levels, it may be easier to re-use hardware than it is to purchase new hardware. Practices that involve partitioning work amongst multiple systems (scaling out) may be more applicable than best practices that involve scaling up.

Enterprise discipline

Some enterprises have teams that drive their own discipline, and some enterprises have consistent discipline across the enterprise, not just the software development effort. Practices that are grounded in engineering disciplines may be easier to implement in enterprises that are already very disciplined.

Life-cycle scope

Some projects span a larger life cycle than others. Enterprise applications, for example, often span from conception to IT delivery and maintenance. Practices that are geared towards an off-the-shelf model of delivery (where deployment and maintenance is done by the customer) and ignore the enterprise-specific aspects of the project, may be counterproductive in a full life-cycle scope.

Paradigm

Finally, when attempting to evaluate practices, the development paradigm involved should be taken into account. For example, on an agile project, best practices around "waterfall" paradigms may not be applicable.

Regardless of the team factor, it's important to not discount practices just because factors may be different or opposite. Just because the context is different doesn't mean that the practice is completely incompatible.

One way of viewing a context is as a requirement. There are various practices for eliciting, documenting, and engineering requirements that can inspire our method of documenting context. Practices are a behavior, or an action. Behavior-driven design (BDD), although completely orthogonal to documenting the context of a practice, builds on the fact that users of software use the software's behavior. In order to better describe requirements so that the correct behavior can be discovered, the concept of "specifications" is used.

Specifications in BDD are a more flexible way of specifying requirements. One form of documenting these specifications is using the Gherkin syntax. This syntax is basically Given X [and X2] When Y [and Y2] Then Z [and Z2]. Building on that type of syntax, we can simply re-use Given to document our context.

For example, with the canonical source code control practice, we could document our context as follows:

Given a multi-person, collaborative, software project
And the software evolves over time
And may change concurrently for different reasons
When making changes to source code
Then use an off-the-shelf source code control system

But, there's no reason why you should limit yourself to re-using existing documentation semantics. If something is clearer to read and easier to follow, use that method.

Categorization

There is not much documented on categorizing practices. For the most part, this can be fairly obvious. We could have procedural practices (using source code control), design practices (employing authentication to ensure correct access to private data), and so on.

Starting out, building a practices library may not need much, if any, categorization. As you build your skill set and increase your knowledge and experience with more practices, you may find that some degree of categorization for you and your team becomes necessary. It's one thing to have your own knowledge of and experience with practices, but if you're trying to mentor a team and help the team improve as a whole, then categorization can begin to prove its worth.

This is another area in which we can draw on how structured patterns have become in their proliferation and dissemination. Patterns too have somewhat ambiguous recommendations for categorization, but building on something already in place requires less reinvention and less learning, and reaches a broader audience.

Although categorization is useful for organizing practices, you might also want to consider aggregating certain practices into one practice, and detailing the different steps involved in the different contexts. Performing lengthy operations on a background thread, for example, has many different contexts, and each context may have a specific way of implementing the practice (WPF, WinForm, WCF, and so on).

Just because we use the term "category" doesn't mean membership in this category is mutually exclusive.

Patterns are generally pre-categorized as "design," but some categories are often used to group patterns. These categories are discussed next.

Creational

Patterns that apply to the creation of objects. Probably not something you'd normally classify a practice under.

Structural

Structural patterns are patterns involving specific ways to design or manage relationships between classes, objects, or components in code. This could be used as a subcategory of architecture practices.

Behavioral

Technically, this category involves patterns that relate to designing how classes, objects, or components communicate with one another, but it depends on your interpretation of "behavioral." It stands well enough alone.

Integration

Practices involving integration of systems or subsystems.

Procedural

These are generally business procedures. While most of what this book discusses is not business procedures, there are some really good business practices in the software development industry that I'd recommend, for example, agile practices.

Anti-patterns

There's much written on ways not to write software. We could go out on a limb and do the same thing with practices, but, for the most part, there aren't anti-practices. Almost all practices should be considered useful; it's the context that defines where a practice is useful. I wouldn't suggest building a category of anti-practices so much as spending time improving how contexts are described in practices. However, I would include invalid contexts (contexts where the practice is not recommended) when documenting the context of a given practice.

Practices are generally less focused than patterns so their categories can include various other categories, such as:

  • Security: Practices involving improving or retaining security of data could be grouped here. This can be topics like authentication, authorization, encryption, and so on.

  • Architectural: This always ends up being a broad and subjective category, and can end up being a catch-all for practices that just don't fit anywhere else. However, we can use this category to subcategorize other categories. For example, a practice to keep private data within its own database on a separate server may be tagged as both security and architectural.

  • Quality: To a certain extent all practices have something to do with quality. The mere use of a practice implies that someone else has implemented it and worked out all of the kinks, improving the quality of your software over having to invent the practice yourself. However, some practices are specifically geared towards improving the quality of your software. The practices of using unit tests or test-driven design (TDD), for example, are specifically geared at helping improve quality.

  • User experience: I'm sure on any given day you can find dozens of practices around user interface (UI) design, user experience (UX) design, and so on. An example of such a practice, relating more to software design than UI design, could be: perform lengthy operations on a background thread.

  • Application health: This category deals with the dissemination and reporting of application or system health. In simple applications, this may mean informing the user of errors, warnings, and so on; these are fairly simple practices that can often be covered under UX. In larger, more complex, distributed systems, or systems with components without a UI, it's vital that problems with the system's ability to perform its intended tasks (its health) be communicated outside of the system. For example: given a Windows service, when errors are encountered, then log the error to the Windows Event Log.

  • Performance: Performance practices are a bit tricky because performance improvements always need to be observed and evaluated, lest they become premature optimizations. But there are some practices that programmers can use that are known to be faster than other types of implementations. Picking the fastest algorithm for a given situation (context) is never a premature optimization.

  • Scalability: Practices involving the ability for a system to scale, either horizontally (out) or vertically (up) can be categorized as scalability practices. Examples of such practices may involve things like breaking work into individual tasks that can be executed concurrently, or employing the use of messaging.

  • Language: Some practices get down to a much lower level, such as the language level. Practices about using language features in a specific way can be categorized here, for example, avoiding closures within loops (see the sketch after this list). Many practices in this category can be monitored and/or evaluated through static code analysis. Tools such as Visual Studio Code Analysis can be used to monitor compliance with such practices.

  • Deployment: Deploying systems and applications in and of itself can have many practices. There are many tools that basically implement these practices for you, but some complex situations require their own practices relating to deployment, for example, preferring WiX over Visual Studio deployment projects.
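As a minimal sketch of the closure-within-a-loop pitfall mentioned in the Language category above (the class and variable names are mine, purely for illustration):

using System;
using System.Collections.Generic;

class ClosureInLoopExample
{
    static void Main()
    {
        var actions = new List<Action>();

        // Each lambda captures the single loop variable i, not its value
        // at the time the lambda was created.
        for (int i = 0; i < 3; i++)
        {
            actions.Add(() => Console.WriteLine(i));
        }
        foreach (var action in actions)
        {
            action(); // prints 3, 3, 3 -- not 0, 1, 2
        }

        actions.Clear();

        // Copying the loop variable to a local gives each closure its own
        // captured variable, restoring the expected behavior.
        for (int i = 0; i < 3; i++)
        {
            int copy = i;
            actions.Add(() => Console.WriteLine(copy));
        }
        foreach (var action in actions)
        {
            action(); // prints 0, 1, 2
        }
    }
}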

In this book

For the purposes of this book, I'll keep it simple. Practices in this book will be documented as follows:

Context: In the context of X.

Practice: Consider/do Y.

Context details the situation in which the practice should apply.

Evolving practices—a collaborative effort

We don't really want to reinvent the wheel, and neither do most other people. But as we create, communicate, and evolve practices, we often begin a dialog. We interact with people on practices involving frameworks and libraries that they also have experience with. Their experiences may have been different from yours. Collaborate with team or community members to evolve and improve practices so that other, less-skilled people can write software, concentrating on the value to be added, more quickly and with higher quality.

 

Axiomatic practices


At some point you'll either encounter or create an axiomatic practice. Axiomatic means "self-evidently true." "Use source code control" is an axiomatic practice. No one can really argue against it; they may not be doing it, but they know they should.

Axiomatic practices are fine, but they should be avoided when possible. They could indicate that the practice is too vague and too hard to evolve. Part of the goal here is to improve over time.

Most "best practices" are presented as axiomatic practices. There's no expected context and the implication is that it applies in all circumstances. It's important to read-between-the-lines with these types of practices. Try to figure out what the context might be then compare it to yours. If that's difficult, make sure you evaluate every possible angle.

 

Patterns


It may seem like these types of practices are patterns. Indeed, some of the practices actually detail certain patterns and how to implement them, but they're different from patterns in that they are not specific implementations of logic to produce a specific result or fulfill a specific need. Practices can detail implementation, but they don't need to. They can detail higher-level tasks or processes without providing specific detail of how to implement the practice.

"Use source code control," for example, is a very common recommended practice. Any given software project involves collaborating with other team members working on the same code base and sometimes the same files in the code base. The team also needs to deal with subsequent and concurrent versions of the code base in their work of creating and maintaining the code base.

 

Why practices?


There are various reasons why practices can help teams write software. Let's have a look at some common reasons why you'd want to use practices to help improve the process of writing software.

An empirical and not a defined process

Software development is generally an empirical process. The empirical process control model is one that imposes control over a process through inspection and adaptation. A defined process, by contrast, documents all of the steps needed to arrive at a known result. Through the use of defined processes to detail ways of developing software, defining those processes through practices, we can offload much of the burden of inspection and adaptation in software development. But any unknown or undiscovered areas of software development will still require empirical process control. As we become more skilled in our understanding of many practices, more of the thought that goes into developing software can be concentrated on the value of the solution we're trying to implement.

At lower levels, we can't really define the process by which we will implement most software in its entirety. We try to impose a defined process on software with things like the software development life cycle (SDLC), defining several phases of writing software such as inception, analysis, architecture, design, development, test, delivery, operations, maintenance, and so on. In fact, processes have been defined throughout the history of software development to try to turn what is generally an empirical process into a defined process, or at least to make defined what is known of the process. Unfortunately, these defined processes hide the fact that many of the details of producing software are empirical.

The practices in this book do not try to distract from the fact that most software projects are empirical, nor do they try to impose a defined process. In a way, practices are a way of making more of the software development process defined rather than empirical. This book tries to define ways to reach goals commonly required by many software development projects. The goals shared amongst many software development projects cease to be value-added for any particular project; they become commodities. Commodities are important to the delivery and health of the software, but they are neither unique to the project nor require much, if any, research. Research into areas of a project that don't add value obviously doesn't provide the same return on investment. If we can implement one of these commodities with little or no research, then the project is better for it. Better because it can move on to spending time on the value that the project is intended to provide.

The quintessential example in so many contexts is logging. Microsoft Word is not an application, library, or framework whose purpose is to provide logging, but Word may perform its own logging in order to debug, gauge health, aid support, gather metrics, and so on. All of these help Word satisfy existing and future customers. But the software developers on the Word team do not need to discover any particular logging implementation, because they are trying to produce a word processing product.

Cross-cutting concerns

Based on what you have just read, if you look closely at practices you'll notice that the goal of each practice is a goal shared by many software teams or products. This book obviously does not try to detail how to write a word processor or a web browser. However, it does detail certain practices that would aid in the development of almost all software: cross-cutting concerns. A cross-cutting concern is a task that needs to be performed by more than one piece of software, more than one module in your software, or more than one class in your module. It is said that each class, module, or piece of software has concerns over and above what is deemed to be its responsibility.

An invoicing module has the responsibility of tracking line items, prices, discounts, customers, applying taxes, and so on. But in many respects it needs to take on more than it is responsible for. An invoicing module may need to log the actions it performs, it may need to perform data caching, it may need to provide health monitoring, and so on. None of these things are really what the software does, but they help the software do what it does in a more efficient and effective way.
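As a minimal, hypothetical sketch of this idea (the interface and class names here are mine, not from any particular library), the invoicing module below keeps its invoicing responsibility but also carries a logging dependency, a typical cross-cutting concern:

using System.Collections.Generic;
using System.Linq;

// Cross-cutting concern: logging is needed by many modules, not just
// invoicing, so it is expressed as a shared abstraction.
public interface ILogger
{
    void Info(string message);
}

public class LineItem
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}

public class InvoicingModule
{
    private readonly ILogger logger;

    public InvoicingModule(ILogger logger)
    {
        this.logger = logger;
    }

    public decimal CalculateTotal(IEnumerable<LineItem> items)
    {
        // The module's real responsibility...
        decimal total = items.Sum(item => item.Price * item.Quantity);

        // ...plus the concern it carries over and above that responsibility.
        logger.Info("Invoice total calculated: " + total);
        return total;
    }
}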

Focus on the value

In many of the earlier paragraphs one thing should shine through: practices are a means by which we can take what isn't truly our "value proposition" in the software solution we're trying to implement (such as infrastructure), and concentrate our efforts on the value we want to provide. This gets our value out sooner and theoretically lets us spend more time on ensuring quality of that value.

As software developers move from journeymen to craftsmen or masters, much of what we gain in skill is through learning practices that allow us to focus on a solution's value. Craftsmen and masters need to communicate practices as well as mentor journeymen in a better way if our industry is going to thrive and improve.

 

The power of mantras


"Best practices" is so commonly used that it has become a mantra. One definition of mantra is a word or phrase commonly repeated. I believe commonly used terms begin to take on a life of their own and begin to lose their meaning. People repeat them because they're so common, not because of their meaning. "Best practices" is one of those phrases. Many people use the term "best practice" simply because it's part of our technical vocabulary, not because they really think the practices are "best" in all places. They use the term as an idiom not to be taken literally, but to take as "recommended practices," "contextual practices," or even "generally accepted practices."

The unfortunate problem with "best practice" as a mantra is that some people take the phrase literally. They haven't learned to take it with a grain of salt. I believe that if we use terms more appropriate for our industry, the way it works, and the degree to which technology changes within it, then the more we use those terms the greater adoption they will have. Eventually, we can relegate "best practices" to the niche it actually describes.

"Best Practices" is an inter-industry term that's been around for a long time and is well recognized. It will be a long time before we can move to a more accurate term. I, of course, can only speculate how it started being used in the software development industry. Other industries, like woodworking, don't suffer from the quick technology turnover, so their "best practices" can be recommended for a very long time, and are therefore more accurately be called "best practices".

Still other industries openly boast different terms. Accounting and other organizations have chosen "generally accepted" to refer to principles and practices.

 

Summary


I hope the information in this chapter has motivated you to become part of the solution by thinking more in terms of "recommended practices" or "contextual practices." I've tried to ensure that each practice was complete, correct, and up-to-date at the time it was written. But over time, each of these practices will become more and more out-of-date. I leave it as an exercise for the reader to improve each practice as time goes on.

So don't take this as a recipe book. You should try to understand each of the recommended practices, recognize the context for which it is intended, and try your hardest to either improve it or to tailor it to your project or your context. You're doing yourself a disservice if you simply take these practices and employ pragmatic re-use.

I hope this chapter has either reinforced your thoughts on the term "best practices" or opened your eyes slightly. "Best practices" are far from a panacea, and far from "best" in every context. We've seen several motivating factors for why we might want to use recommended practices, and why we're sometimes forced to resort to them rather than figure things out ourselves. Finding and using recommended practices is just the first part of the puzzle. In order to use a practice properly, we need to evaluate it: we need to know the context for which it's intended as well as the context in which we'd like to use it. This can be a complex endeavor, but several criteria can help us evaluate the applicability of a practice. Once we know our own context, the context in which we would like to apply a particular practice, only then can we truly evaluate a practice and use it properly. After all, we don't want to use a practice to save time and avoid technical debt if, in fact, it increases our technical debt and reduces quality.

In the next chapter we'll begin looking at source control practices. We'll look at some terminology, source code control architectures, and source code control usage practices.

About the Author
  • Peter Ritchie

    Peter Ritchie is a software development consultant. Peter is president of Peter Ritchie Inc. Software Consulting Co., a software consulting company in Canada's National Capital Region specializing in Windows-based software development management, process, and implementation consulting. Peter has worked with clients such as Mitel, Nortel, Passport Canada, and Innvapost, providing everything from mentoring to architecture to implementation. Peter has considerable experience building software development teams and working with startups toward agile software development. Peter's experience ranges from designing and implementing simple standalone applications to architecting distributed n-tier applications spanning dozens of computers, from C++ to C#. Peter is active in the software development community, attending and speaking at various events, as well as authoring various works including Refactoring with Microsoft Visual Studio 2010.
