
Tag: csharp

Functional Programming in C#: Map, Filter, and Reduce Your Way to Clean Code

Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at CodeIt.Right, which can help you improve the quality of your code.

C# is supposed to be an object-oriented language, but it’s possible that you, as a .NET/C# developer, have been using functional programming concepts without even knowing it.

And that’s what today’s post is about. I’ll first briefly cover the attractions of functional programming and why it makes sense to apply it even when using a so-called object-oriented language. Then I’ll show you how you’ve already been using some functional style in your C# code, even if you’re not aware of it. Finally, I’ll tell you how you can apply functional thinking to your code in order to make it cleaner, safer, and more expressive.

C# Functional Programming: Why?

We know the .NET framework offers some functional capabilities in the form of the LINQ extension methods, but should you use them?

To really answer this, we need to go back a step and understand the attraction of functional programming itself. The way I see it, the easiest path to start understanding the benefits of functional programming is to first understand two topics: pure functions and immutable data.

Pure functions are functions that can only access the data they receive as arguments and, as a consequence, can’t have any side effects. Immutable data are just objects or data structures that, once initialized, can’t have their values changed, making them easier to reason about and automatically thread-safe.
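
To make the distinction concrete, here’s a minimal illustration (the class and method names are mine, purely for demonstration):

```csharp
using System;

public static class PurityExamples
{
    // Pure: the result depends only on the argument;
    // no outside state is read or changed.
    public static int Square(int x) => x * x;

    private static int _counter;

    // Impure: it reads and mutates state outside its
    // arguments (a side effect), so two identical calls
    // return different results.
    public static int NextId() => ++_counter;
}
```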

Fundamental Functional Programming Operations and How to Perform Them Using C#

With the what and why of functional programming out of the way, it’s time to get to the how.

I’ll be covering three fundamental functions: map, filter, and reduce. I’ll start by showing some use cases, then I’ll show a traditional, procedural way of solving the problem. And finally, I’ll present the functional way.

Map

In simple terms, the “map” operation takes a sequence of items, applies some transformation to each one of those items, and returns a new sequence with the resulting items. Let’s see some examples.

Suppose you wrote the following code, due to a customer’s demand:

	static void AddThreeToEachElement(int[] arr)
	{
	    for (var i = 0; i < arr.Length; i++)
	        arr[i] += 3;
	}

It’s a function that adds three to each element of the given array of integers. Pretty straightforward.

Now a request for a new function comes in. This time, it should add five to each element in an array. Ignoring the rule of three, you jump right ahead into a generalized version, parameterizing the number to be added:

	static void AddNumberToEachElement(int[] arr, int n)
	{
	    for (var i = 0; i < arr.Length; i++)
	        arr[i] += n;
	}

Then yet another request comes in. Now you must write a function that will multiply each element of the given array by, let’s say, three. I won’t add the code sample now because I’m sure you’ve got the picture. By now, you should know better than to hardcode the number, so you’d probably jump ahead to a general version right away. Even then, some duplication would still exist: the loop itself. Hmm…what if you could keep just the loop and instead parameterize the action to be applied on each item?
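
One way to do exactly that, sketched here with a hypothetical helper that takes the per-item operation as a Func<int, int> delegate:

```csharp
using System;

public static class ArrayOps
{
    // The loop is written once; the operation applied to
    // each item is now a parameter.
    public static void ApplyToEachElement(int[] arr, Func<int, int> transform)
    {
        for (var i = 0; i < arr.Length; i++)
            arr[i] = transform(arr[i]);
    }
}
```

Now `ApplyToEachElement(numbers, x => x + 3)` adds three, `ApplyToEachElement(numbers, x => x * 3)` multiplies by three, and the duplicated loop is gone.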

The Functional Way

Take into consideration what you’ve just read about pure functions—and also your previous knowledge of programming best practices in general—and think of ways the code could be improved.

From my perspective, the main problems are

  • The code is too specific. It can’t be easily changed to accommodate other transformations being applied to the array elements. It just performs a sum, and that’s it.
  • Too much boilerplate. Look at the previous sample again and count the lines: only one of them actually carries out the business logic of the method. The rest is ceremony.

How would the functional way improve on this? That’s the way I’d write the first example in F#, for instance:

	let result = Seq.map (fun x -> x + 3) numbers

I’m assuming here that “numbers” is a sequence of integers I’ve got somehow. Then I use the map function on the Seq module, passing the sequence as a parameter, along with a function that takes an int and adds three to it.

The Functional Way, .NET/C# Flavor

.NET implements the map operation in the form of the “Select” LINQ extension method. So you could rewrite the F# example above like this:

	var result = numbers.Select(x => x + 3);

One important point that needs explaining is that the type of the resulting sequence doesn’t need to match the type of the source sequence. Do you have a list of ‘Employee’ and need a sequence of ints (containing, for instance, their IDs)? Easy peasy:

	List<Employee> employees = EmployeeRepository.All();
	IEnumerable<int> ids = employees.Select(x => x.Id);

Filter

I think filter is, hands down, the easiest operation of the bunch. It has a very intuitive name, and the need for filtering stuff is so common in programming that I bet you correctly guessed what it is just by its name (if you didn’t know it already).

For the sake of completeness, though, let’s define it. The filter operation…wait for it…filters a sequence, returning a new sequence containing just the items approved by some criteria.

The Imperative Way

Since we’ve used employees in the previous section, let’s keep within the theme. Let’s say you need to come up with a list of the employees who have used at least three sick days.

In a more procedural style, you’d maybe write something along the following lines:

	public static List<Employee> GetEmployeesWithAtLeastNSickdays(List<Employee> employees, int number)
	{
	    List<Employee> result = new List<Employee>();
	    foreach (var e in employees)
	    {
	        if (e.SickDays >= number)
	            result.Add(e);
	    }
	    return result;
	}

I wouldn’t say there’s anything definitely wrong with this code. The method’s name is a bit too long, but it’s very descriptive. The code does what it promises. And it’s readable enough.

But similarly to the previous section, we can make the argument that the code is too noisy. We can say that, essentially, the only line that does something domain related is the if test. All the other lines are basically boilerplate-y infrastructure code. Can a functional approach help us here?

The Functional Way

Let’s rewrite the method above by using LINQ:

	public static List<Employee> GetEmployeesWithAtLeastNSickdays(List<Employee> employees, int number)
	{
	    return employees.Where(x => x.SickDays >= number).ToList();
	}

Here we use the “Where” extension method, passing the filtering criterion as a delegate. To be honest, the enclosing method isn’t very useful anymore since it just delegates the work. In real life, I’d get rid of it.

Reduce

Reduce is often the one many developers have some difficulty understanding. But it isn’t hard at all. Think of it like this: you have a sequence of something, and you also have a function that takes two of these “somethings” and returns one, after doing some processing.

Then you start applying the function. You apply it to the first two elements in the sequence and store the result. Then you apply it again to the result and the third element. Then you do it again to the result and the fourth item, and so forth.

The classical example of reduce is adding up a list of numbers, so that’s exactly what we’re going to do in our example.

The Imperative Way

So, suppose we’re to sum a bunch of integers. We could do it like this:

	public int Sum(IEnumerable<int> numbers)
	{
	    var result = 0;
	    foreach (var number in numbers)
	        result += number;
	    return result;
	}

At this point, you’re probably familiar with what I have to say about this code: it isn’t necessarily wrong, but it’s inflexible and noisy. Can functional programming save us?

The Functional Way

In .NET/C#, the “Reduce” operation assumes the form of the “Aggregate” extension method. This time, I’ll just get rid of the enclosing method and write the LINQ solution right away:

	var sum = numbers.Aggregate((x, y) => x + y);

Things look a little bit more complex here, but don’t get scared. In this case, we’re just passing a function that takes two parameters, instead of one, like in the previous examples. It has to be that way since the function must be applied to two elements of the sequence each time.

But as it turns out, there’s an even easier way of solving this particular problem (adding a bunch of numbers). Since summing a sequence of numbers is such a common use case, there’s a dedicated method to do just that. It’s called, not surprisingly, “Sum”:

	var sum = numbers.Sum();

What’s “Aggregate” good for, then? Well, adding a list of integers is just one of the applications for reduce, but you’re not in any way restricted to only that. You can use it with any binary operation, such as concatenating strings or summing custom types.
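
For instance, concatenating strings with Aggregate might look like this (the sample data is made up):

```csharp
using System.Linq;

var words = new[] { "functional", "programming", "rocks" };

// Fold the whole sequence into a single string,
// joining each accumulated result with the next item.
var sentence = words.Aggregate((acc, next) => acc + " " + next);
// sentence is now "functional programming rocks"
```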

The Verdict: Is the Functional Approach Better?

After these examples, you might be wondering whether the “functional” way is any better. It’d be extremely hard to define what “better” is, so I won’t even bother. Let’s consider another criterion: readability.

Though we know that code readability can also be highly subjective, I’d say that yes, the functional examples are more readable. Suppose we need to retrieve and sum all the salaries from employees with more than five years of company time. We could easily do that by writing a loop, in which we’d test the condition and accumulate the salary if the test turned out true.

Or we could just write this:

	var sum = employees.Where(x => x.CompanyTimeInYears > 5).Select(x => x.Salary).Sum();

I honestly believe this line to be more readable (and generally better) than the procedural approach. It’s more declarative; it shows the intention of what we’re trying to get done without being too concerned with the how.

It almost reads like natural language: “The list of employees where their time in the company is greater than five years, select their salary and sum them”.

Add Some Functional Spice to Make Your Code Tastier

Many people use LINQ for years without even realizing they’re using functional programming concepts. I take this as proof that functional programming isn’t beyond the capabilities of the enterprise developer who lacks a strong background in math.

Some of the concepts presented here are neither new nor restricted to functional programming. Distinguishing functions that produce side effects from those that don’t, for instance, is the basis of principles like command-query separation (CQS).

The goal of this post was not to teach you functional programming. This is honestly beyond my capabilities, as I’m still studying it myself. And besides, there are awesome resources for that purpose if you want to learn more.

Instead, what I wanted here is to give you a little taste of what a functional style can do for your code, which is to make it more expressive, concise, and declarative. Now it’s up to you to try to apply the functional mindset to the code you write.


Value Objects: A Tool for Self-Documented Code and Fewer Errors

Editorial note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, take a look at NDepend.

Have you ever heard of value objects? I bet you have. Even though they’re talked about a lot less than I’d like, they’re still talked about enough that many developers have at least some passing familiarity with the term.

“Passing familiarity” isn’t good enough, though. So that’s what we’re fixing with this post. Today you’re going to learn what value objects are and how you, as a C# developer, can harness their power to make your code clearer, self-documenting, and less error-prone.

What Are Value Objects?

Value objects are one of the building blocks of domain-driven design, as proposed by Eric Evans in his seminal book Domain-Driven Design: Tackling Complexity in the Heart of Software.

Simply put, a value object is an object that represents a value. And I’m aware that sounds excruciatingly obvious and even boring when said this way. So, what’s all the fuss about it?

Some Properties

I think it’s easier to understand value objects when we quit trying to explain what they are and talk about their characteristics instead.

Value Objects Don’t Have Identity

I think it’s fair to say that the main characteristic of a value object is that it lacks identity.  But what does that really mean in practice?

Let’s say you go to the nearest ATM and deposit a $50 bill into your checking account. Then you drive a couple of hours to another town, go to a bank there, and withdraw $50.

Now comes the question: does it matter to you that the bill you’ve got in your hands now isn’t the same one you deposited earlier? Of course not!  And why is that? Well, the thing we generally care about, as it concerns money, is its value, not the vessel that holds that value.

In other words, we couldn’t care less about the identity of that particular bill. The only thing that matters is its value.

It’s no coincidence that money is a classic example of a value object.

Value Objects Are Immutable

Can you change the number five? No, you can’t. There’s nothing you (or anyone else) can do to mutate the value of the number five. If you add one to it, it doesn’t change; instead, you get six, which is another number.

Could you alter a date? Nope, you also can’t do that. If you start with “2018-01-09” and add one day to it, you get “2018-01-10.” You don’t change the original date at all. In fact, the immutability aspect of a value object is a direct consequence of the previous point: since a value object doesn’t have identity, we can say the value object is its value. Therefore, it doesn’t even make sense to talk about changing it.

The implication of this for you as a developer is that value objects are inherently safer and easier to reason about. There’s no risk of changing them by accident since they can’t be changed at all.

Value Objects Have Structural Equality

Imagine you could magically teleport people anywhere you wish, and you’ve decided to swap two men called “John Smith” during the night. How do you think their respective partners would react to seeing a total stranger in their beds instead of their husbands?

People are obviously not interchangeable, despite sharing one or more characteristics. Even if our two Johns had not only the same name but also the same height, weight, skin color, and hair color, they would still be two completely different people. Even identical twins (or, on a slightly Black Mirror note, clones) continue to be different people, despite being as alike as two people can get.

On the other hand, people change continuously during their lives, but they are still the same people (as long as we don’t get philosophical here, as in “a man can’t step into the same river twice” type of thing).

You may be wondering if I’ve gotten off track here, but I haven’t. This only serves to illustrate the crucial differences between entities and value objects. With entities, we care about identity, not about the value of its attributes.  With value objects, we care only about the value itself.

The implication of this, in programming terms, is that value objects typically present structural equality. It makes sense to compare them by their values, not their references or identities. So, when implementing a value object, you’ll want to override “Equals” and “GetHashCode.”
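
Here’s a minimal sketch of what that override looks like for a hypothetical Temperature value object (the class name and property are mine, for illustration):

```csharp
public sealed class Temperature
{
    public double Celsius { get; }

    public Temperature(double celsius) => Celsius = celsius;

    // Structural equality: two instances are equal when
    // their values are equal, regardless of reference.
    public override bool Equals(object obj) =>
        obj is Temperature other && Celsius == other.Celsius;

    public override int GetHashCode() => Celsius.GetHashCode();
}
```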

What’s in It for Me?

By now you should have a pretty good idea of what value objects are. What’s not clear yet is why you should use them.  To answer this, let’s consider the following line of code:

	double distance = 4.5;

Is there something wrong with this? Well, I could Ben Kenobi you and say that it might be wrong “from a certain point of view.” But I won’t. Instead, I’ll say it’s definitely wrong. It doesn’t matter that it compiles. It also doesn’t matter that it actually works some or even most of the time.

The problem here is the code smell known as “primitive obsession,” i.e., modeling domain concepts using primitive types. The next few sections will dive into why this is such a problem and how the use of value objects can help.

Value Objects Provide Context

OK, so why is primitive obsession a bad thing? There are in fact several reasons, but one of the main problems with the code snippet presented in the previous section is that it lacks a critical piece of information. As you can see, the code assigns the value 4.5 to the variable. But 4.5 what? Meters? Kilometers? Miles? Parsecs? In other words, we don’t have the unit of measurement.

This can be a recipe for disaster. It just takes a developer fetching a value from a database or a file, thinking it’s supposed to represent meters when it’s in fact kilometers. When they then proceed to use the value in a calculation, say, adding kilometers to miles…silence. Instead of failing fast, you’d get a program that silently misbehaves while corrupting data and providing inconsistent results.

Well, at least you’re using unit tests…right?

Sure, nothing prevents you from encoding that information in the variable name itself:

	double distanceInKilometers = 4.5;

Yeah, this is slightly better than the previous version, but it’s still a very brittle solution. At any moment, the value can be assigned to another variable or even passed as an argument to some function, and then the information is lost.

By using value objects, you can eliminate this problem easily. You’d just have to choose a unit to be the internal representation of the type—for distance, it probably makes sense to use the meter, since it’s the SI unit. Then you can provide a static factory method for each necessary unit:

	var distance = Distance.FromMeters(4000);
	var distance2 = Distance.FromKilometers(4);
	Assert.AreEqual(distance, distance2);

If you go on to overload the “+” operator (or create a “Plus” method), you can safely add two distances that originate from different units of measurement since the internal representation is the same.
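
Putting those pieces together, a sketch of such a Distance type might look like this (my illustration, not a canonical implementation):

```csharp
public sealed class Distance
{
    // Internal representation: meters, the SI unit.
    public double Meters { get; }

    private Distance(double meters) => Meters = meters;

    public static Distance FromMeters(double meters) => new Distance(meters);
    public static Distance FromKilometers(double km) => new Distance(km * 1000);

    // Safe to add distances created from different units,
    // since the internal representation is always meters.
    public static Distance operator +(Distance a, Distance b) =>
        new Distance(a.Meters + b.Meters);

    public override bool Equals(object obj) =>
        obj is Distance other && Meters == other.Meters;

    public override int GetHashCode() => Meters.GetHashCode();
}
```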

Value Objects Are Type Safe

Let’s say you have a method with this signature:

	double PerformSomeImportantCalculation(double distance, double temperature);

What would happen if you made a mistake and inverted the values when calling the method? The program would silently misbehave, and you wouldn’t even be aware. Hopefully, you’d have a good QA process in place that would catch this bug before it hits production, but hope isn’t exactly a strategy, right?

Well, as it turns out, that’s the exact kind of problem value objects are great at preventing. You’d just have to use custom types for each concept instead of relying on primitives:

	double PerformSomeImportantCalculation(Distance distance, Temperature temperature);

That way, you can’t just pass the parameters in the wrong order: the compiler won’t let you!

Value Objects Prevent Duplication of Domain Logic

When you model domain concepts using primitive types, you tend to have a lot of code related to that concept spread throughout the whole application. Let’s say you’re building an application that has the concept of a license plate, and you’re using strings to represent those. Of course, not all strings are valid license plates. So your code ends up with format validations for license plates everywhere.

This could be prevented by creating a “LicensePlate” class and performing the necessary validations on its constructor. That way you’d consolidate the validation code in one place; should it ever change in the future, you’d only have to change it in this one place.
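
A sketch of that consolidation (the format rule below is purely illustrative; real plate formats vary by jurisdiction):

```csharp
using System;
using System.Text.RegularExpressions;

public sealed class LicensePlate
{
    public string Value { get; }

    public LicensePlate(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A license plate can't be empty.");

        // Illustrative format: three uppercase letters, a dash, four digits.
        if (!Regex.IsMatch(value, "^[A-Z]{3}-[0-9]{4}$"))
            throw new ArgumentException($"'{value}' is not a valid license plate.");

        Value = value;
    }
}
```

Now an instance of LicensePlate is valid by construction, and no other code in the application needs to re-validate the format.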

Value Objects and Value Types Aren’t Synonymous

This section is necessary in order to clarify a common misconception, which is to mix up value objects with the concept of value types in C#. See, in C#, we have two categories of types: reference types and value types.

While you certainly can use structs (value types) to implement value objects—examples in the BCL would be DateTime or the primitive numeric types—there’s nothing preventing you from using classes.

On the other hand, structs are not automatically value objects. For instance, while it’s considered good practice to keep structs immutable, they’re not immutable by default.

In short, value type is an implementation detail in C#/.NET while value object is a design pattern. Keep that in mind and consult the Microsoft design guidelines and you should be fine.

Value Objects Are Worth It!

Value objects are a relatively low-cost technique that can greatly enhance the manageability and clarity of your code. By employing value objects, you can make your code easier to reason about, crafting APIs that are self-documenting, easy to understand, hard to use incorrectly, and inherently type-safe.


Coding Best Practices When You’re Short On Time


Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, download and try their CodeIt.Right product.

One topic in software development that really fascinates me is coding best practices. I’m always searching for ways to improve my work and deliver value in a fast and consistent manner.

It can be tricky to define what a “coding best practice” is. Some people are even in favor of downright retiring the term! But one thing pretty much everyone agrees upon is this: coming up with and implementing strategies—by whatever name you call them—to improve the output of one’s work is something that any developer worth his or her salt should be continuously doing.

Of course, there’s no free lunch. The adoption of a best practice takes time…and sometimes you just don’t have much of that to begin with. And then there’s management, whose buy-in is not always guaranteed.

So, what to do if your development team is struggling with the poor quality of a codebase while lacking time to implement best practices that would help?

The answer I offer you today is what I’ll call the “coding best practices emergency pack”: a small list of coding best practices that you can adopt on relatively short notice to get your team and your codebase from utter chaos to a more manageable state.

Because there’s lots of advice on coding best practices out there, to the point where it’s hard not to feel overwhelmed, I narrowed down my list of emergency-pack best practices by requiring they meet three criteria:

  • They must be fundamental, in the sense that they’re the building blocks with which you can implement more sophisticated practices later.
  • You can adopt them on relatively short notice. (I’d say a week is feasible.)
  • Their cost is free or very low.

The practices that follow all fit these parameters. And without further ado, here it is: my coding best practices emergency pack, with items listed in the order they should be implemented and starting with the most critical one.

Version Control System

I once worked for a software development shop where no version control system was used. The source files were placed in a shared folder that every developer could access. What was the process we used when editing a file? Yeah, you guessed it:  we’d simply create a copy of the file and rename it to “filename_old.ext” or something like that.

This was about eight or nine years ago. So maybe things have improved, right? Well, they certainly have, to some extent, but not completely. There are still companies out there that don’t use a VCS.

How to Proceed?

From now on, I’ll just assume you agree that a VCS is a fundamental coding best practice. If that’s not the case, there are plenty of resources out there explaining what a VCS is and why you should use one.

With that out of the way, it’s time to get to specifics. Which tool should you use? How to go about its adoption?

Git is a solid choice. And despite having a steeper learning curve for those more used to centralized version control systems, such as Subversion or TFVC, it’s the de facto standard in our industry. So by all means, learn it, since not doing so can harm your career in the future.

But it’s possible that Git is not the best choice for your team right now. Remember, you’re short on time. So we need to get your team to adopt these coding best practices ASAP.

How do we do this? Let’s say you have experience with Subversion, having used it at your previous company, but you have no experience with Git at all. If that’s the case, I’d say Subversion is the better choice for you right now. The overhead of learning a new system and teaching it to your co-workers while putting it into production would be too great.

Code Review

I’m not going to lie: I love code reviews—and I’m not alone in that. I’ve witnessed firsthand how a good code review can reduce the number of bugs in a codebase, make the code look and feel more consistent, and perhaps best of all, spread knowledge throughout a development team.

And here’s a major selling point: a code review practice is relatively easy to implement. Start as simple as you can, and then tweak and experiment with your approach as the need arises.

What Do I Mean by Code Review?

Talking about “code review” can be tricky. People sometimes mean widely different things when they use the term, so I think it warrants further clarification.

I’m not in favor of a highly stressful and bureaucratic code review process, where your code is scrutinized and criticized in public for hours. I don’t believe in public shaming as a tool for achieving quality. On the contrary, the type of code review I advocate for is a lightweight and low-stress process, usually initiated by submitting a pull request or using your favorite IDE.

How to Proceed 

Since we’re now on the same page about what a code review should look like, how would one go about implementing the practice? My answer is, not surprisingly, “the simplest way that could possibly work.” 

For instance, if yours is a .NET shop using TFS/TFVC, you can start by installing a check-in policy that requires a code review for each check-in. If your team uses GitHub, you can use pull requests. Just start performing reviews so you and your team can get used to it. Then, with time, start tuning and perfecting your approach.

Here are some of the questions that can appear as you refine your process for this:

  • What’s the goal of a code review? Are we looking for bugs? Trying to improve readability? Checking adherence to the company’s coding standard?
  • Where do we draw the line between “suggestions” and “impediments”? Is it OK to give a thumbs-down to someone’s code for bad indentation or a slightly off variable name?
  • What do we do when reviewer and reviewee can’t come to a consensus? Bring in a mediator to give the final word? And who should that mediator be? The lead developer?

The answer to all of these questions can be found in automation. Much of the awkwardness of a code review can be removed when you employ a code analyzer to handle the automatable portions of the process.

For instance, SubMain’s CodeIt.Right will give you real-time feedback from inside Visual Studio, alerting you to possible coding issues and even automatically fixing code smells and violations for you.

By employing automation, you set your developers free to worry about higher level concerns when performing reviews, such as code clarity or architectural decisions.

Automated Builds

You may be thinking that I’ve got it wrong. After all, does it even make sense to talk about automated builds without mentioning automated tests?

Well, I’m going to argue that yes, it does make sense, and for one very simple reason: it eliminates the “it works on my machine” syndrome.

By having a central place where builds are performed, you shed light on all kinds of problems, from poor management of dependencies to bad test discipline.

How to Proceed

My advice here is the same as before: do the simplest thing that could work.

If your team already uses TFS, then learn how to create a build definition and you’re good to go. On the other hand, if you host your projects on GitHub, you might be interested in taking a look at Travis CI.

With time, you should improve your strategy. Remember the static code analyzers I mentioned earlier? You can integrate them into your build process. Unit testing and other kinds of automated tests are a very important addition as well.

Speaking of which…

Notable Absences

You might be surprised to see that I haven’t included unit testing in the list of coding best practices, despite being a firm believer myself in the importance of automated testing to the overall quality of a codebase. And why is that?

Adding unit tests to a legacy application, unfortunately, is hard, to the point that there’s even a famous book (Michael Feathers’ Working Effectively with Legacy Code) that focuses solely on it. It’s just not a feasible task for you to tackle quickly.

In a similar fashion, it’s possible that a portion of readers expected me to talk about clean code or the SOLID principles. I do encourage you to research and learn about these topics, but I don’t think they’re a good fit for the purpose of this post. They are, as the name already points out, principles. Think of them as philosophical guidelines—useful, but not as easy to break down into simple, actionable advice.

Deploy Your Package ASAP!

It’s possible that some of you found these practices to be extremely basic and not post-worthy. “Who doesn’t use version control in twenty-freaking-eighteen???” I hear you saying.

Well, it really doesn’t take long to find evidence (anecdotal, but still) that things are not all sunshine and rainbows. The idea that even basic coding best practices, such as version control or automated testing, are universally applied is probably more wishful thinking than reality.

For the rest of you, I hope this list proves useful.

You know what they say. “When in a hole, stop digging.” And that’s exactly the type of help I wanted to offer with this post: a quick and easy fix, meant to give you and your teammates just enough sanity that you can focus and regain control of your application, ensuring its long-term health.


4 Common Datetime Mistakes in C# — And How to Avoid Them

Editorial note: I originally wrote this post for the SubMain blog. You can check out the original here, at their site. While you’re there, have a look at CodeIt.Right, which can help you with time-related issues and much more.

Do you remember the “falsehoods programmers believe about X” meme that became popular among software blogs a few years ago? The first one was about names, but several others soon followed, covering topics such as addresses, geography, and online shopping.

My favorite was the one about time. I hadn’t thought deeply about time and its intricacies up until that point, and I was intrigued by how a fundamental domain could be such a fertile ground for misunderstandings.

Now even though I like the post, I have a problem with it: it lists wrong assumptions, and then it basically stops there. The reader is likely to leave the article wondering:

  • Why are these assumptions falsehoods?
  • How likely is it that I’ll get in trouble due to one of these assumptions?
  • What’s the proper way of dealing with these issues?

The article is interesting food for thought, but I think it’d make sense to provide more actionable information.

That’s what today’s post is about. I’m going to show you four common mistakes C#/.NET developers make when dealing with time. And that’s not all. I’ll also show what you should do to avoid them and make your code safer and easier to reason about.

1. Naively Calculating Durations

Consider the code below:
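
Something along these lines, where a match’s duration is computed from two local timestamps (the class name here is assumed for illustration; the StartMatch/EndMatch names come from the discussion below):

```csharp
using System;

public class MatchTimer
{
    private DateTime _start;
    private DateTime _end;

    // Both timestamps use the machine's local clock.
    public void StartMatch() => _start = DateTime.Now;
    public void EndMatch() => _end = DateTime.Now;

    public TimeSpan Duration => _end - _start;
}
```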

Will this code work? It depends on where and when it’s going to run.

When you use DateTime.Now, the DateTime you get represents the current date and time local to your machine (i.e., it has the Kind property set to Local).

If you live in an area that observes DST (Daylight Saving Time), you know there’s one day in the year when all clocks must be moved forward a certain amount of time (generally one hour, but there are places that adjust by other offsets). Of course, there’s also the day when the opposite happens.

Now picture this: today is March 12th, 2017, and you live in New York City. You start using the program above. The StartMatch() method runs at exactly 01:00 AM. One hour and 15 minutes later, the EndMatch() method runs. The calculation is performed, and the following text is shown:

Duration of the match: 02:15:00

I bet you’ve correctly guessed what just happened here: when clocks were about to hit 2 AM, DST just kicked in and moved them straight to 3 AM. Then EndMatch got back the current time, effectively adding a whole hour to the calculation. If the same had happened at the end of DST, the result would’ve been just 15 minutes!

Sure, the code above is just a toy example, but what if it were a payroll application? Would you like to pay an employee the wrong amount?

What to Do?

When calculating the duration of human activities, use UTC for the start and end dates. That way, you’ll be able to unambiguously point to an instant in time. Instead of using the Now property on DateTime, use UtcNow to retrieve the date and time already in UTC and perform the calculations:
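With UtcNow, the duration calculation might look like this (a sketch; the variable names are mine):

```csharp
using System;

DateTime start = DateTime.UtcNow;
// ... the match happens ...
DateTime end = DateTime.UtcNow;

// UTC never observes DST, so the subtraction is always correct
TimeSpan duration = end - start;
Console.WriteLine("Duration of the match: " + duration);
```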

What if the DateTime objects you already have are set to Local? In that case, you should use the ToUniversalTime() method to convert them to UTC:
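For example (again, the variable names are illustrative):

```csharp
using System;

DateTime startLocal = DateTime.Now; // Kind == DateTimeKind.Local
DateTime endLocal = DateTime.Now;

// Convert both ends to UTC before subtracting
TimeSpan duration = endLocal.ToUniversalTime() - startLocal.ToUniversalTime();
Console.WriteLine("Duration: " + duration);
```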

A Little Warning About ToUniversalTime()

The usage of ToUniversalTime() — and its sibling, ToLocalTime() — can be a little tricky. The problem is that these methods make assumptions about what you want based on the value of the Kind property of your date, and that can cause unexpected results.

When calling ToUniversalTime(), one of the following things will happen:

  • If Kind is set to UTC, then the same value is returned.
  • On the other hand, if it’s set to Local, the corresponding value in UTC is returned.
  • Finally, if Kind is set to Unspecified, then it’s assumed the datetime is meant to be local, and the corresponding UTC datetime is returned.

The problem we have here is that local times don’t roundtrip. They’re local as long as they don’t leave the context of your machine. If you save a local datetime to a database and then retrieve it back, the information that’s supposed to be local is lost: now it’s unspecified.

So, the following scenario can happen:

  • You retrieve the current date and time using DateTime.UtcNow.
  • You save it to the database.
  • Another part of the code retrieves this value and, unaware that it’s supposed to already be in UTC, calls ToUniversalTime() on it.
  • Since the datetime is unspecified, the method will treat it as Local and perform an unnecessary conversion, generating a wrong value.
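The scenario above can be reproduced in a few lines (the database roundtrip is simulated here by stripping the Kind information):

```csharp
using System;

DateTime stored = DateTime.UtcNow; // Kind == DateTimeKind.Utc

// Simulate a database roundtrip: the Kind information is lost
DateTime fromDb = new DateTime(stored.Ticks, DateTimeKind.Unspecified);

// Unspecified is assumed to be local, so the value is shifted again
DateTime wrong = fromDb.ToUniversalTime();
Console.WriteLine(wrong == stored); // false on any machine whose UTC offset isn't zero
```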

How do you prevent this? It’s a recommended practice to use UTC to record the time when an event happened. My suggestion here is to follow this advice and also to make it explicit that you’re doing so. Append the “UTC” suffix to every database column and class property that holds a UTC datetime. Instead of Created, change it to CreatedUTC and so on. It’s not as pretty, but it’s definitely clearer.

2. Not Using UTC When It Should Be Used (and Vice Versa)

We could define this as a universal rule: use UTC to record the time when events happened. When logging, auditing, and recording all types of timestamps in your application, UTC is the way to go.

So, use UTC everywhere! …Right? Nope, not so fast.

Let’s say you need to reconstruct the local datetime, from the user’s perspective, of when something happened, and the only information you have is a timestamp in UTC. That’s a piece of bad luck.

In cases like this, it’d make more sense to either (a) store the datetime in UTC along with the user’s time zone or (b) use the DateTimeOffset type, which will record the local date along with the UTC offset, enabling you to reconstruct the UTC date from it when you need it.
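Option (b) is straightforward in code, since DateTimeOffset keeps both pieces of information in a single value:

```csharp
using System;

// Captures the local wall-clock time and the UTC offset together
DateTimeOffset eventTime = DateTimeOffset.Now;

DateTime whatTheUserSaw = eventTime.DateTime;  // local wall-clock time
DateTime utcInstant = eventTime.UtcDateTime;   // unambiguous instant in time
Console.WriteLine(eventTime);
```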

Another common use case where UTC is not the right solution is scheduling future local events. You wouldn’t want to wake up one hour later or earlier in the days of DST transitions, right? That’s exactly what would happen if you’d set your alarm clock by UTC.

3. Not Validating User Input

Let’s say you’ve created a simple Windows desktop app that lets users set reminders for themselves. The user enters the date and time at which they want to receive the reminder, clicks a button, and that’s it.

Everything seems to be working fine until a user from Brazil emails you, complaining the reminder she set for October 15th at 12:15 AM didn’t work. What happened?

DST Strikes Back

The villain here is good old Daylight Saving Time again. In 2017, DST in Brazil started at midnight on October 15th. (Remember that Brazil is in the southern hemisphere.) So, the date-time combination the user supplied simply didn’t exist in her time zone!

Of course, the opposite problem is also possible. When DST ends and clocks turn backward by one hour, this generates ambiguous times.

What Is the Remedy?

How do you deal with those issues as a C# developer? The TimeZoneInfo class has got you covered. It not only represents a time zone but also provides methods to check a datetime’s validity:

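The original snippets aren’t reproduced here, but the checks look roughly like this (the time zone ID is an assumption; “E. South America Standard Time” is the Windows ID for Brasília time):

```csharp
using System;

var tz = TimeZoneInfo.FindSystemTimeZoneById("E. South America Standard Time");

var input = new DateTime(2017, 10, 15, 0, 15, 0); // October 15th, 12:15 AM

if (tz.IsInvalidTime(input))
{
    // do something: this time was skipped by the spring-forward transition
}

if (tz.IsAmbiguousTime(input))
{
    // do something: this time occurs twice when clocks fall back
}
```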
What should you do then? What should replace the “do something” comments in the snippets above?

You could show the user a message saying the input date is invalid. Or you could preemptively choose another date for the user.

Let’s talk about invalid times first. Your options: move forward or backward. It’s somewhat of an arbitrary decision, so which one should you pick? For instance, the Google Calendar app on Android chooses the former. And it makes sense when you think about it. That’s exactly what your clocks already did due to DST. Why shouldn’t you do the same?

And what about ambiguous times? You also have two options: choose between the first and second occurrences. Then again, it’s somewhat arbitrary, but my advice is to pick the first one. Since you have to choose one, why not make things simpler?

4. Mistaking an Offset for a Time Zone

Consider the following timestamp: 1995-07-14T13:05:00.0000000-03:00. When asked what the -03:00 at the end is called, many developers answer, “a time zone.”

Here’s the thing. They probably correctly assume that the number represents the offset from UTC. Also, they’d probably see that you can get the corresponding time in UTC from the offset. (Many developers fail to understand that in a string like this, the offset is already applied: to get the UTC time, you should invert the offset sign. Only then should you add it to the time.)
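You can see the sign inversion at work by parsing the timestamp with DateTimeOffset:

```csharp
using System;
using System.Globalization;

var dto = DateTimeOffset.Parse("1995-07-14T13:05:00.0000000-03:00",
                               CultureInfo.InvariantCulture);

Console.WriteLine(dto.Offset);      // an offset of minus three hours
Console.WriteLine(dto.UtcDateTime); // 16:05 UTC: the offset sign inverted and added to 13:05
```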

The mistake is in thinking that the offset is all there is to a time zone. It’s not. A time zone is a geographical area, and it consists of many pieces of information, such as:

  • One or more offsets. (DST is a thing, after all.)
  • The dates when DST transitions happen. (These can and do change whenever governments feel like it.)
  • The amount of time applied when transitions happen. (It’s not one hour everywhere.)
  • The historical records of changes to the above rules.

In short: don’t try to guess a time zone by the offset. You’ll be wrong most of the time.

It’s About Time…You Learn About Time!

This list is by no means exhaustive. I only wanted to give you a quick start in the fascinating and somewhat bizarre world of datetime issues. There are plenty of valuable resources out there for you to learn from, such as the time zone tag on Stack Overflow or blogs such as Jon Skeet’s and Matt Johnson’s, the authors of the popular NodaTime library.

And of course, always use the tools at your disposal. For instance, SubMain’s CodeIt.Right has a rule to force you to specify an IFormatProvider in situations where it’s optional, which can save you from nasty bugs when parsing dates.


Cargo Cult Programming Is The Art of Programming by Coincidence

Editorial note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site. While you’re there, download NDepend and give it a try.

I first learned about cargo cult programming a few years ago. I remember thinking back then, “What a strange name for a programming-related concept.”

If you share my past self’s astonishment, then today’s post is for you!

First, you’ll see what cargo cult programming is and why you should care. Then, we’re going to look at some practical examples, using the C# language. Finally, we’ll close with advice about what you can do, as a developer, to avoid falling into this trap.

Cargo Cult Programming: Doing Stuff Just Because

According to Wikipedia, “Cargo cult programming is a style of computer programming characterized by the ritual inclusion of code or program structures that serve no real purpose.”

In other words, it’s when a developer writes code without really understanding it. The developer may use a very trial-and-error approach—maybe copy and paste some code from somewhere else and then tweak it and test it until it works, or sort of works. Then the developer will stop tweaking the code, for fear it will stop working. In the process, maybe they leave some lines of code that don’t do anything.

Or maybe they tried to use an idiom they picked up from another developer while failing to understand that the contexts are different and it’s useless in the current situation.

Finally, it might just be lack of education: maybe the developer has a poor mental model of how the tools they’re using really work.

Why Is Cargo Cult Programming a Problem?

As Eric Lippert puts it, cargo cult programmers struggle to make meaningful changes to a program and end up using a trial-and-error approach since they don’t understand the inner workings of the code they’re about to change.

This is not so different from what the Pragmatic Bookshelf calls “programming by coincidence”:

Fred doesn’t know why the code is failing because he didn’t know why it worked in the first place. It seemed to work, given the limited “testing” that Fred did, but that was just a coincidence.

That single sentence pretty much sums it up for me: if you don’t know how or why your code works, neither will you understand what happened when it no longer works.

Origin of the Term

Although practices that are considered cargo cult today have been recorded as early as the late 19th century, the term itself dates from 1945, when it was first used to describe practices that emerged during and after World War II among Melanesian islanders.

These islanders would mimic the soldiers’ behavior, such as dressing up as flight controllers and waving sticks, hoping that airplanes would descend from the skies with a lot of cargo.

Since then, the term cargo cult has been used in a variety of contexts to mean to imitate form without content—to perfectly copy the superficial elements while failing to understand the deeper meanings and workings of whatever one’s trying to emulate.

Talk is Cheap; Show Me the Code!

Enough with the history lesson. Time to see some code! I’m going to show you five examples of cargo cult programming in the C# language.

Checking a Non-Nullable Value Type for Null

This one is a pet peeve of mine since I see it a lot in production code. It goes like this:

	public Product Find(int id)
	{
	    if (id != null) // this check is useless
	        Console.WriteLine("This line will always get reached.");
	    return new Product();
	}

Here we have a developer who probably doesn’t grok the difference between value and reference types. It would be completely forgivable, in the case of a junior developer, except for the fact that the compiler warns you about that.

You could argue that I’m nitpicking. After all, the code will run fine in spite of this. In fact, the check won’t even be included in the resulting IL, as you can see in this screenshot from a decompiling tool:

An image depicting a code excerpt that does not contain the null check.

You can see in this code snippet that the compiler has optimized the null check out.

There are plenty of worse problems, granted. Yes, the application won’t crash because of this. So what’s the big deal?

Well, for starters, I’d be worried about a development shop where the sole quality measure was “it runs without crashing.” But the real problem is that this type of code shows a lack of understanding of some fundamental characteristics of the language and platform that can bite you in the future.

Unnecessary Use of ToList() in LINQ to Object Queries

Like the previous one, this is something I routinely see in production code. Consider the code below:

	var result = users.ToList()
	    .Where(x => x.PremiumUser).ToList()
	    .Select(x => new { Name = x.Name, Birth = x.DateOfBirth }).ToList();

The problem we have here is that these calls to ToList() are completely unnecessary (except maybe the last one, if you really needed the result to be a List and not only an IEnumerable).

In my experience, this happens when the developer doesn’t understand the nature of LINQ; they erroneously think that the LINQ methods belong to the concrete type List<T> instead of being extension methods that can be used with any IEnumerable<T> implementation.

By calling ToList() several times like this, the developer creates several new lists, which can be detrimental to the performance of the application.

You could rewrite the code above like this:

	var result = users.Where(x => x.PremiumUser).Select(x => new { Name = x.Name, Birth = x.DateOfBirth });

Unnecessary Conversions

Consider the following line:

	DateTime creationDate = DateTime.Parse(row["creation_date"].ToString());

Here we have not only one but two unnecessary conversions. First, the developer creates a new string and then parses it to DateTime when a simple cast would have sufficed:

	DateTime creationDate = (DateTime)row["creation_date"];

This example assumes that the underlying database type is some specific type for dealing with dates (for instance, date or datetime in SQL Server). Of course, if you were using an inadequate type (such as varchar) then this would be a problem of its own.

Try-Catch Everywhere

Also known as Pokémon syndrome (“Gotta catch ’em all!”), the anti-pattern here is to add a try-catch block to every single line that could possibly throw an exception.

Bonus points if the code is attempting to catch System.Exception instead of a more specific exception, thus blurring the distinction between expected and unexpected errors.

More bonus points if the catch block doesn’t contain any code at all!
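A caricature of the anti-pattern, earning all the bonus points (the class and method names are made up for illustration):

```csharp
using System;

class OrderProcessor
{
    public void Process(string order)
    {
        try
        {
            if (order == null)
                throw new ArgumentNullException(nameof(order));
            Console.WriteLine("Processing " + order);
        }
        catch (Exception) // swallows everything, bugs included
        {
            // nothing here: the error silently disappears
        }
    }
}
```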

The general advice here is this: never catch unless you have a very specific reason for doing so. Otherwise, just let the exception bubble up until it’s dealt with by the top-level exception handler.

If this advice seems vague (“How would I know if I have the right reason for catching an exception?”), that’s because it is vague. It’s beyond the scope of this post to go deeper into this matter, but Eric Lippert’s excellent article called “Vexing Exceptions” will greatly improve your understanding of exception handling.

Using StringBuilder Everywhere

It’s the stuff of superhero movies: after reading somewhere that concatenating strings by using the ‘+’ operator is incredibly inefficient, the well-meaning developer takes upon themselves the Herculean task of updating every single concatenation in the codebase to StringBuilder.

The reasoning for this is, of course, that System.String is immutable. So every time you “modify” a string, you’re in fact creating a new instance in memory, which can hurt performance pretty badly.

Well, guess what? The compiler is pretty smart. Let’s say you have the following line:

	string a = "Hello " + "World";

This, in fact, gets translated to

	string a = "Hello World";

The quick rule of thumb is that it’s fine to use simple concatenation when you know the number of strings to append at compile time. Otherwise, a StringBuilder probably makes more sense.
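In practice, the rule of thumb translates to something like this (the data is made up for illustration):

```csharp
using System;
using System.Text;

// Fine: the number of parts is known at compile time
string fullName = "John" + " " + "Doe";

// Better with StringBuilder: the number of parts depends on runtime data
string[] lines = { "alpha", "beta", "gamma" };
var sb = new StringBuilder();
foreach (string line in lines)
{
    sb.Append(line).Append(Environment.NewLine);
}
string report = sb.ToString();
Console.WriteLine(report);
```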

Of course, some scenarios aren’t that clear-cut. The only advice worth giving here is to do your homework. When in doubt, research and benchmark to your heart’s content.

I’ll leave you with more sound advice from Eric Lippert:

Unnecessary code changes are expensive and dangerous; don’t make performance-based changes unless you’ve identified a performance problem.

Is There a Remedy?

I’d say it’s fair to assume that inexperienced developers are more prone to cargo cult programming mistakes. But no developer is really immune to it, regardless of their knowledge or experience.

We’re only human, after all. Tiredness, deadlines, cognitive biases, and (to be really honest) occasional laziness can turn even the best developer into a cargo cult programmer.

Unfortunately, there’s no 100% guaranteed way of preventing this from happening. Yet there are some measures you could take to, at least, decrease the odds.

Let’s take a look at some of them.

Use Code Review/Pair Programming

The first measure you could take to avoid cargo cult programming is to simply get a second pair of eyes on your code. The benefits of having a second person reviewing each line of code before it goes to production can’t be overstated. And while code review and pair programming aren’t perfect equivalents, both of these practices will bring you this benefit.

Always Test Your Hypothesis

Write unit tests (and other types of tests as well). Monitor your application in production. If something doesn’t perform well, benchmark the heck out of it. Don’t just assume things. Testing your hypothesis can bring valuable insights and save you when your intuition gets it wrong.

Read Other People’s Code

Reading other people’s code is a great way to learn. It’s a perfect tool to compare your own ideas and assumptions against what other developers are doing, exposing you to novel concepts that can force you to gain a deeper understanding of the issues at hand.

In the era of GitHub, there isn’t much of an excuse for not doing that.

Learn From Your Tools

There are currently a plethora of tools that can help your team improve the quality of their code. Here’s the thing, though: you shouldn’t just use these tools. You should also learn from them. If you use NDepend, read about its rules. Try and understand the rationale behind them. What are the principles and best practices that guided its authors when coming up with the rules?

The same goes for other types of tools—and even the warnings the compiler gives you.

Computer Science, Not Computer Superstition

Even though no one is immune to cargo cult programming, we should strive to overcome it. There’s hard-earned industry wisdom at our disposal, slowly generated over more than seven decades. Let’s use it. Let’s understand our tools and our craft and write better software.


C# 8.0 Features: A Glimpse of the Future

C# 8.0 is coming and will bring some great new features. Let’s check out what the future holds for us.


C# 7 Features Worth Knowing - Part 2

In this post we’ll see some more new features from C# 7.0.


C# 7 Features Worth Knowing - Part 1

C# 7 is finally among us. Time to check out some of its features.


It's about time you start using these C# 6 features

The 7th version of C# is coming, and it’s expected to bring some new and exciting features to our tool sets. Here’s the thing, though: Are you up to speed with its predecessor’s features?


Value and reference types in C#, Part 2 - Why can't a DateTime be null?

“Why is not allowed to assign null to a DateTime?” Again and again, this question keeps showing up on StackOverflow and similar sites. Different phrasing, maybe a different type (“Why type “int” is never equal to ‘null’?”), but the same question, in essence. Which is only natural, considering that probably thousands of developers join the field every year.


Value and reference types in C#

This is my first “real” post here on my blog, and I decided to talk about value types and reference types. This is somewhat of a basic subject, in the sense that it is something that you should already know if you write C# code for a living. But at the same time, it can be a little non-intuitive if you’re not an experienced developer.