.NET Core, Testing, Testing 1, 2, 3!

In the previous post, I created a simple file processor app with .NET Core.
In this post, I want to share how seamless unit testing is with .NET Core as well.

For unit testing, I have referenced the following packages from NuGet:
1. Unit Test Framework – MSTest
2. Mocking Framework – Moq
3. BDD Style Unit Test – TestStack.BDDfy

FileProcessorTest - NuGet

And by following the separation of concerns design principle, the application is split into the following components:
1. Processor = the main component that contains the logic to achieve the task by orchestrating the other components
2. IDataRepository = the component that retrieves the data/files
3. ILockManager = the component that performs the synchronization
4. ILogger = the component that performs logging

FileProcessor - ClassDiagram

PS: I created the above diagram online with Creately. Pretty nice, isn’t it!

Unit testing then became quite straightforward. For example, the unit test snippet below was taken from the Processor.Tests class, where the tests are structured in the Given/When/Then (BDD) style and the BDDfy framework executes them by convention.

[TestClass]
[Story(
    AsA = "As a File Processor Instance",
    IWant = "I want to retrieve files",
    SoThat = "So that there is no race condition where multiple file processors are picking up same files")]
public abstract class File_Processor_Processes_Items
{
    // Parts of the unit tests have been removed, pls refer to the full unit test at https://github.com/bembengarifin/FileProcessor/blob/master/FileProcessor.Tests/Processor.Tests.cs

    [TestMethod]
    public void ExecuteTestScenario()
    {
        // the generic type is passed to explicitly bind the story to the base type
        this.BDDfy<File_Processor_Processes_Items>(this.GetType().Name.Replace("_", " "));
    }

    [TestClass]
    public class Lock_Is_Available_Immediately : File_Processor_Processes_Items
    {
        public void Given_Lock_Is_Not_Being_Held_By_Any_Other_Process()
        {
            // setup the expectation for data repository
            _mockedDataRepo.Setup(x => x.GetNextItemsToProcess(_itemsToFetchAtATime));

            // setup the expectation for the lock manager, with the return true when being called
            IEnumerable<IDataObject> result;
            Func<IEnumerable<IDataObject>> get = () => _mockedDataRepo.Object.GetNextItemsToProcess(_itemsToFetchAtATime);
            _mockedLockManager.Setup(x => x.TryLockAndGet(_getFileLockKey, _lockMsTimeout, get, out result)).Returns(true);
        }

        public void When_The_Run_Process_Is_Executed()
        {
            _processor.RunProcess();
        }

        public void Then_Lock_Manager_Was_Called()
        {
            _mockedLockManager.Verify();
        }

        public void And_Data_Should_Be_Retrieved_For_Further_Processing()
        {
            _mockedDataRepo.Verify();
        }
    }
}

And here’s the nice HTML report output (note the file path where it’s generated):

FileProcessor - BDDReport

There’s no more excuse not to do unit testing with .NET Core now, is there?

Happy Unit Testing :)

Link to the unit test/project.

 


Fallen into the pit of overconfidence


If someone had asked me yesterday whether I’m a professional, mature, seasoned developer, I would certainly have replied, “Indeed, I am!”. Well, fortunately, no one did, or else I couldn’t imagine meeting them again today after the pathetic episode below.

You see, I was working on a defect/bug yesterday. It was a pretty simple bug that I was happy to fix, as it looked like less than an hour’s job. So I made the fix below, got the test evidence, and marked the defect as fixed, ready for promotion to the test environment.

private Boolean _someFlag;
private String _someOption;

private void Execute()
{
    //... some codes here

    // the following block of codes is the fix to resolve the defect/bug
    if (_someOption.Contains("something"))
    {
        _someFlag = true;
    }
    else
    {
        _someFlag = false;
    }

    //... some codes here
}

To my horror, this morning a new defect was raised, complaining that an error pop-up appeared whenever the user clicked the execute button.

I quickly realized that the error was coming from the modified code above. And to make things worse, that code sits in the base class of, let’s say, 10 UI forms, and 3 of them were now not working/usable at all due to this new bug.

Now, fixing this new bug is another simple task, but the thing that has stayed on my mind until now is this: when I wrote that fix yesterday, it actually crossed my mind that I should put some unit tests around it. This new bug would definitely have been caught if I had written the unit tests (basic positive, negative, null tests). But nah, my overconfidence took over: this is a simple fix, nothing is ever going to happen with it, everything will be running happily tomorrow, I should check this in ASAP and hurrah! One defect down.
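For illustration, here is a sketch of those basic positive/negative/null tests, assuming MSTest, and assuming the pop-up was a NullReferenceException thrown by _someOption.Contains when _someOption was never assigned. SetSomeOptionFlag is a hypothetical extraction of the fix into a testable method, with the null-safe behavior added:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public static class OptionLogic
{
    // Null-safe version of the fix: treat a missing option as "not something".
    public static bool SetSomeOptionFlag(string someOption)
    {
        return someOption != null && someOption.Contains("something");
    }
}

[TestClass]
public class SomeOptionFlagTests
{
    [TestMethod]
    public void Positive_Option_Containing_Something_Sets_Flag()
    {
        Assert.IsTrue(OptionLogic.SetSomeOptionFlag("do something else"));
    }

    [TestMethod]
    public void Negative_Option_Without_Something_Clears_Flag()
    {
        Assert.IsFalse(OptionLogic.SetSomeOptionFlag("do nothing"));
    }

    [TestMethod]
    public void Null_Option_Does_Not_Throw_And_Clears_Flag()
    {
        // This is the test that would have caught the new defect.
        Assert.IsFalse(OptionLogic.SetSomeOptionFlag(null));
    }
}
```

Three one-line tests, written in five minutes, against one day of blocked testing.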

So what are the consequences with that?

  1. Testing activities were blocked for 3 of the 10 UIs, where without the code fix everything had actually been working fine; it was just a minor, incorrect state that was dodgy
  2. One more day was wasted getting the new fix promoted so that testing could continue
  3. Having end users/testers actually see this error message is embarrassing for the development team
  4. And in the end, this kind of error is truly embarrassing for a .NET programmer with these years of experience :|

The following is an excerpt from Uncle Bob Martin’s book, The Clean Coder.

(hope you don’t mind, Uncle Bob, I’ll make sure that I write the book review when I’m done reading your book :))

As you mature in your profession, your error rate should rapidly decrease towards the asymptote of zero. It won’t ever get to zero, but it is your responsibility to get as close as possible to it.

That hits the nail on the head of what I was expected to do. So, moral of the story: be humble, don’t ignore your inner voice, and believe in unit testing/TDD.

PS: Tony, if you actually read this, just know that today, in my head, I kept imagining you telling me, “See, I told you, Bembeng” :p

Controlling the concurrency level using MaxDegreeOfParallelism in Parallel


Another nice out-of-the-box feature of the Parallel class is the ability to limit the concurrency level for the actions that we want to execute in parallel.

Why would we ever want to do this? Wouldn’t it be best to parallelize as many activities as possible?

For certain cases, we might want to be careful not to create too many simultaneous requests. A good example is triggering a long-running process on our application server: you wouldn’t want to spawn too many requests to the server at the same time, as it may not be able to handle them all and, not surprisingly, could go down due to overload. This can be categorized as a DoS attack; your back-end guys will hate you for this, trust me :p

Setting the concurrency level of the actions you invoke with the Parallel class is pretty simple:

Parallel.Invoke(new ParallelOptions() { MaxDegreeOfParallelism = concurrencyLevel }, actions);

Find more about the MaxDegreeOfParallelism property on MSDN.

There is no such property in TaskCreationOptions that we can simply pass to a Task, but there’s a LimitedConcurrencyLevelTaskScheduler nicely available with which we can achieve the same result. In the run below, I basically spawn 10 processes with a max concurrency level of 2.


15-Apr-2011 21:04:33.948 - Thread#13 - Creating 10 process definitions
15-Apr-2011 21:04:33.948 - Thread#13 - Start queueing and invoking all 10 processes
15-Apr-2011 21:04:33.948 - Thread#13 - Doing something here
15-Apr-2011 21:04:33.948 - Thread#14 - Doing something here
15-Apr-2011 21:04:34.964 - Thread#13 - Doing something here
15-Apr-2011 21:04:34.964 - Thread#14 - Doing something here
15-Apr-2011 21:04:35.964 - Thread#13 - Doing something here
15-Apr-2011 21:04:35.964 - Thread#14 - Doing something here
15-Apr-2011 21:04:36.964 - Thread#13 - Doing something here
15-Apr-2011 21:04:36.964 - Thread#14 - Doing something here
15-Apr-2011 21:04:37.964 - Thread#13 - Doing something here
15-Apr-2011 21:04:37.964 - Thread#14 - Doing something here
15-Apr-2011 21:04:38.964 - Thread#13 - All processes have been completed

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace ParallelTests
{
    [TestClass]
    public class TaskSchedulerConcurrencyTest
    {
        [TestMethod]
        public void TestMethod1()
        {
            var p = new ProcessorUsingParallel();
            p.DoProcess(processToCreate: 10, concurrencyLevel: 2);
        }
    }

    public class ProcessorUsingParallel
    {
        public void DoProcess(int processToCreate, int concurrencyLevel)
        {
            SetTurboMode();

            Debug("Creating {0} process definitions", processToCreate.ToString());

            var actions = new Action[processToCreate];
            for (int i = 0; i < processToCreate; i++)
            {
                actions[i] = () => DoSomething(1000);
            }

            Debug("Start queueing and invoking all {0} processes", processToCreate.ToString());
            var options = new ParallelOptions();
            options.MaxDegreeOfParallelism = concurrencyLevel;
            //options.TaskScheduler = new LimitedConcurrencyLevelTaskScheduler(concurrencyLevel); -- we can achieve the same result with this
            Parallel.Invoke(options, actions);

            Debug("All processes have been completed");
        }

        private void DoSomething(int Sleep)
        {
            Debug("Doing something here");
            Thread.Sleep(Sleep);
        }

        /// <summary>
        /// oh i just wish the framework would have this in place like Console.WriteLine
        /// </summary>
        /// <param name="format"></param>
        /// <param name="args"></param>
        private static void Debug(string format, params object[] args)
        {
            System.Diagnostics.Debug.WriteLine(
                string.Format("{0} - Thread#{1} - {2}",
                    DateTime.Now.ToString("dd-MMM-yyyy HH:mm:ss.fff"),
                    Thread.CurrentThread.ManagedThreadId.ToString(),
                    string.Format(format, args)));
        }
        /// <summary>
        /// This is not intended for production purpose
        /// </summary>
        private static void SetTurboMode()
        {
            int t, io;
            ThreadPool.GetMaxThreads(out t, out io);
            Debug("Default Max {0}, I/O: {1}", t, io);

            var success = ThreadPool.SetMinThreads(t, io);
            Debug("Successfully set Min {0}, I/O: {1}", t, io);
        }
    }
}
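The commented-out options.TaskScheduler line above hints at the Task-based route. Here is a sketch of it, assuming the LimitedConcurrencyLevelTaskScheduler class from Microsoft's ParallelExtensionsExtras samples has been copied into the project (it is not part of the BCL):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class LimitedSchedulerExample
{
    public static void Run()
    {
        // Scheduler that never runs more than 2 tasks at a time.
        var scheduler = new LimitedConcurrencyLevelTaskScheduler(2);
        var factory = new TaskFactory(scheduler);

        var tasks = new List<Task>();
        for (int i = 0; i < 10; i++)
        {
            // At most 2 of these will execute at any one time.
            tasks.Add(factory.StartNew(() => Thread.Sleep(1000)));
        }
        Task.WaitAll(tasks.ToArray()); // ~5 seconds: 10 tasks, 2 at a time
    }
}
```

Either way, the throttling lives in one place (the options or the scheduler) rather than being sprinkled through the worker code.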

Using Parallel in .Net 4.0 without worrying about the WaitHandle.WaitAll 64-handle limitation


As my previous post provided an example of using the Task class from the TPL to overcome the 64-waithandle limitation of WaitHandle.WaitAll, here’s an alternative piece of code that leverages the Parallel class from the same library. Here’s the download link for the whole test project.

However, notice below that I’m using the ConcurrentBag<T> type, which is “a thread-safe bag implementation, optimized for scenarios where the same thread will be both producing and consuming data stored in the bag” (taken from the MSDN page).

An interesting fact: it takes around 74 seconds to complete, whereas if we don’t care about return values (and remove the ConcurrentBag<T>), it only takes around 28 seconds; using Task (with or without return values) takes only 22 seconds. We could probably pass a “state object” into each invocation to hold the values for each process and avoid the locking costs, but since Task already does this well, we would need a justification for doing it manually ourselves in the Parallel.Invoke solution. Moral of the story: always get the facts for each alternative first; then the decision will be easy and justifiable.
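The "state object" idea could look like the sketch below: each action owns its own index, so results are written into pre-allocated array slots with no locking and summed once Parallel.Invoke returns. DoWork is a hypothetical stand-in for the post's DoHighLevelProcess:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class StateArrayExample
{
    public static int DoProcess(int parallelism)
    {
        var results = new int[parallelism]; // one pre-allocated slot per action
        var actions = new Action[parallelism];
        for (int i = 0; i < parallelism; i++)
        {
            int slot = i; // capture a copy, not the loop variable
            actions[slot] = () => results[slot] = DoWork(slot);
        }
        Parallel.Invoke(actions); // returns only after all actions complete

        // Safe to read without synchronization: each slot had exactly one writer.
        return results.Sum();
    }

    private static int DoWork(int n)
    {
        return 1; // pretend work that reports one completed unit
    }
}
```

For example, StateArrayExample.DoProcess(100) returns 100. Whether the saved locking cost justifies the extra bookkeeping is exactly the trade-off discussed above.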

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Threading.Tasks;
using System.Threading;
using System.Collections.Concurrent;

namespace ParallelismTests
{
    [TestClass]
    public class ParallelTest
    {
        [TestMethod]
        public void TestMethod1()
        {
            var p = new ProcessorUsingParallel();
            var counts = p.DoProcess(hiLevelParallelism: 1000, loLevelParallelism: 1000);

            Assert.AreEqual(1000 * 1000, counts);
        }
    }

    public class ProcessorUsingParallel
    {
        public int DoProcess(int hiLevelParallelism, int loLevelParallelism)
        {
            Utility.SetTurboMode();

            Utility.Debug("Start queueing {0} high level processes", hiLevelParallelism.ToString());

            var counts = new ConcurrentBag<int>();
            var r = new Random();

            var actions = new Action[hiLevelParallelism];
            for (int i = 0; i < hiLevelParallelism; i++)
            {
                actions[i] = () => counts.Add(DoHighLevelProcess(r, loLevelParallelism));
            }

            Utility.Debug("Invoking all {0} high level tasks", hiLevelParallelism.ToString());
            Parallel.Invoke(actions);

            Utility.Debug("All processes have been completed");
            return counts.Sum(t => t);
        }

        private int DoHighLevelProcess(Random r, int loLevelParallelism)
        {
            var counts = new ConcurrentBag<int>();
            var actions = new Action[loLevelParallelism];
            for (int i = 0; i < loLevelParallelism; i++)
            {
                actions[i] = () => counts.Add(DoLowLevelProcess(r.Next(1, 10)));
            }
            Parallel.Invoke(actions);

            Utility.Debug("DoHighLevelProcess - Completed with {0} LowLevelProcesses", loLevelParallelism);
            return counts.Sum(t => t);
        }
        private int DoLowLevelProcess(int Sleep)
        {
            Thread.Sleep(Sleep);
            return 1;
        }
    }
}

Using Task in .Net 4.0 without worrying about the WaitHandle.WaitAll 64-handle limitation


In my previous post, I was looking into having two levels of concurrent processes: creating concurrent processes that each spawn further concurrent processes.

Using WaitHandle.WaitAll, we saw there is a limit of 64 waithandles to wait on; with the new Task Parallel Library in .Net 4.0, we don’t have to worry about that limitation anymore.

Using the code from the previous post, I then used the Task class to perform the concurrent processing. Notice in the code below that I’m actually creating 1,000 high-level processes, each of which creates 1,000 low-level processes, so 1,000,000 processes in total. Another very nice thing here is that we can return a value from each task very easily, compared to the previous solution using the ThreadPool.

One more interesting thing: notice the two Task.WaitAll(tasks.ToArray()); lines. With them in place, it only takes 20+ seconds to complete, whereas without them (removed/commented out) it takes 11 minutes. That’s a pretty huge difference ;)

Reason: I think it makes sense. Without those lines, we end up waiting for task completion sequentially in the Sum operation, each time Task.Result is called, even though the tasks in the collection may complete in a different order. Task.WaitAll, on the other hand, collects the signal from each task as it completes, so by the time the Sum operation runs, all the results are already available with no further waiting :)

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Threading.Tasks;
using System.Threading;

namespace ParallelismTests
{
    [TestClass]
    public class TaskTest
    {
        [TestMethod]
        public void TestMethod1()
        {
            var p = new ProcessorUsingTask();
            var counts = p.DoProcess(hiLevelParallelism: 1000, loLevelParallelism: 1000);

            Assert.AreEqual(1000*1000, counts);
        }
    }

    public class ProcessorUsingTask
    {
        public int DoProcess(int hiLevelParallelism, int loLevelParallelism)
        {
            SetTurboMode();

            Debug("Start queueing {0} high level processes", hiLevelParallelism.ToString());

            var tasks = new List<Task<int>>();
            var r = new Random();
            for (int i = 0; i < hiLevelParallelism; i++)
            {
                var task = Task.Factory.StartNew<int>(() => DoHighLevelProcess(r, loLevelParallelism));
                tasks.Add(task);
            }

            Debug("Waiting for all {0} high level tasks to complete", hiLevelParallelism.ToString());
            Task.WaitAll(tasks.ToArray()); // try comment this line out and see the performance impact :)

            Debug("All processes have been completed");
            return tasks.Sum(t => t.Result);
        }

        private int DoHighLevelProcess(Random r, int loLevelParallelism)
        {
            var tasks = new List<Task<int>>();
            for (int i = 0; i < loLevelParallelism; i++)
            {
                var task = Task.Factory.StartNew<int>(() => DoLowLevelProcess(r.Next(1, 10)));
                tasks.Add(task);
            }
            Task.WaitAll(tasks.ToArray()); // try comment this line out and see the performance impact :)
            
            Debug("DoHighLevelProcess - Completed with {0} LowLevelProcesses", loLevelParallelism);
            return tasks.Sum(t => t.Result);
        }
        private int DoLowLevelProcess(int Sleep)
        {
            Thread.Sleep(Sleep);
            //Debug("DoLowLevelProcess - Completed after {0} ms", Sleep.ToString());
            return 1;
        }

        /// <summary>
        /// oh i just wish the framework would have this in place like Console.WriteLine
        /// </summary>
        /// <param name="format"></param>
        /// <param name="args"></param>
        private static void Debug(string format, params object[] args)
        {
            System.Diagnostics.Debug.WriteLine(
                string.Format("{0} - Thread#{1} - {2}",
                    DateTime.Now.ToString("dd-MMM-yyyy HH:mm:ss.fff"),
                    Thread.CurrentThread.ManagedThreadId.ToString(),
                    string.Format(format, args)));
        }
        /// <summary>
        /// This is not intended for production purpose
        /// </summary>
        private static void SetTurboMode()
        {
            int t, io;
            ThreadPool.GetMaxThreads(out t, out io);
            Debug("Default Max {0}, I/O: {1}", t, io);

            var success = ThreadPool.SetMinThreads(t, io);
            Debug("Successfully set Min {0}, I/O: {1}", t, io);
        }
    }
}

Demystifying the 64-handle limit in WaitHandle.WaitAll


I have had doubts about this 64-handle limit for a while now. We have been trying to increase the degree of parallelism in an existing application, both to optimize performance and to fully utilize the scaled-out servers that we have on the grid.

In the initial stage, when I started the changes, I hit the following exception: System.NotSupportedException: The number of WaitHandles must be less than or equal to 64. We then introduced a threshold limit on the degree of parallelism to avoid this exception, and everything worked fine afterwards.

Then, as one change led to another, we started to see opportunities to increase the degree of parallelism further. This raised a doubt: at what level is this 64-handle limitation applied (process / app domain), and would a recursive approach trigger the same issue if, in the end, we’re waiting on more than 64 handles across all layers?

Now, if we RTFM, it basically says: “The WaitAll method returns when all the handles are signaled. On some implementations, if more than 64 handles are passed, a NotSupportedException is thrown.”
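The quoted behavior is easy to demonstrate in isolation. A minimal repro sketch: passing more than 64 handles to WaitHandle.WaitAll throws NotSupportedException, even when every handle is already signaled:

```csharp
using System;
using System.Threading;

public static class WaitAllLimitDemo
{
    public static void Main()
    {
        var handles = new ManualResetEvent[65];
        for (int i = 0; i < handles.Length; i++)
        {
            handles[i] = new ManualResetEvent(true); // already signaled
        }

        try
        {
            WaitHandle.WaitAll(handles); // 65 handles > 64
        }
        catch (NotSupportedException ex)
        {
            // "The number of WaitHandles must be less than or equal to 64."
            Console.WriteLine(ex.Message);
        }
    }
}
```

With 64 handles or fewer, the same call returns immediately, which is what the nested test below relies on at each level.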

Just to confirm this, I created the test below, which executes x high-level parallel processes, each of which in turn executes x low-level parallel processes, based on the provided parameters. So the code below creates around 64 * 64 = 4,096 handles in total:

var p = new Processor();
p.DoProcess(hiLevelParallelism : 64, loLevelParallelism : 64);

Results

05-Apr-2011 00:42:06.781 - Thread#13 - Start queueing 64 high level processes
05-Apr-2011 00:42:06.815 - Thread#13 - Waiting for all 64 high level handles to complete
05-Apr-2011 00:42:07.049 - Thread#17 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
05-Apr-2011 00:42:07.141 - Thread#15 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
05-Apr-2011 00:42:07.287 - Thread#24 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
..
.. omitted 58 similar lines here
..
05-Apr-2011 00:42:08.087 - Thread#28 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
05-Apr-2011 00:42:08.179 - Thread#27 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
05-Apr-2011 00:42:08.297 - Thread#17 - DoHighLevelProcess - Completed with 64 LowLevelProcesses
05-Apr-2011 00:42:08.455 - Thread#13 - All processes have been completed

It always feels much better when we see it running in code, doesn’t it? :)

Source code

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Threading;
using System.Diagnostics;

namespace ParallelismTests
{
    [TestClass]
    public class WaitHandleTest
    {
        [TestMethod]
        public void TestMethod1()
        {
            var p = new Processor();
            p.DoProcess(hiLevelParallelism : 64, loLevelParallelism : 64);
        }
    }

    public class Processor
    {
        public void DoProcess(int hiLevelParallelism, int loLevelParallelism)
        {
            SetTurboMode();

            Debug("Start queueing {0} high level processes", hiLevelParallelism.ToString());

            var r = new Random();
            var hiLevelHandles = new AutoResetEvent[hiLevelParallelism];
            for (int i = 0; i < hiLevelParallelism; i++)
            {
                var hiLevelHandle = new AutoResetEvent(false);
                hiLevelHandles[i] = hiLevelHandle;
                ThreadPool.QueueUserWorkItem((s) =>
                {
                    DoHighLevelProcess(r, loLevelParallelism);
                    hiLevelHandle.Set();
                });
            }

            Debug("Waiting for all {0} high level handles to complete", hiLevelParallelism.ToString());
            WaitHandle.WaitAll(hiLevelHandles);

            Debug("All processes have been completed");
        }

        private void DoHighLevelProcess(Random r, int loLevelParallelism)
        {
            var loLevelHandles = new AutoResetEvent[loLevelParallelism];
            for (int i = 0; i < loLevelParallelism; i++)
            {
                var loLevelHandle = new AutoResetEvent(false);
                loLevelHandles[i] = loLevelHandle;
                ThreadPool.QueueUserWorkItem((s) =>
                {
                    DoLowLevelProcess(r.Next(1, 10));
                    loLevelHandle.Set();
                });
            }
            WaitHandle.WaitAll(loLevelHandles);

            Debug("DoHighLevelProcess - Completed with {0} LowLevelProcesses", loLevelParallelism);
        }
        private void DoLowLevelProcess(int Sleep)
        {
            Thread.Sleep(Sleep);
            //Debug("DoLowLevelProcess - Completed after {0} ms", Sleep.ToString());
        }

        /// <summary>
        /// oh i just wish the framework would have this in place like Console.WriteLine
        /// </summary>
        /// <param name="format"></param>
        /// <param name="args"></param>
        private static void Debug(string format, params object[] args)
        {
            System.Diagnostics.Debug.WriteLine(
                string.Format("{0} - Thread#{1} - {2}",
                    DateTime.Now.ToString("dd-MMM-yyyy HH:mm:ss.fff"),
                    Thread.CurrentThread.ManagedThreadId.ToString(),
                    string.Format(format, args)));
        }
        /// <summary>
        /// This is not intended for production purpose
        /// </summary>
        private static void SetTurboMode()
        {
            int t, io;
            ThreadPool.GetMaxThreads(out t, out io);
            Debug("Default Max {0}, I/O: {1}", t, io);

            var success = ThreadPool.SetMinThreads(t, io);
            Debug("Successfully set Min {0}, I/O: {1}", t, io);
        }
    }
}

Simple Benchmarking Unit Test Code


Recently I have been working on performance-related stuff, so I kept writing code with stopWatch.Start, Stop, blah blah blah.
Finally, yesterday, I refactored it all into the code below:

        static void Run_Benchmark(Action action, Int32 repeatTimes)
        {
            var stopWatchAll = Stopwatch.StartNew();
            var stopWatchItem = Stopwatch.StartNew();

            long maxElapsedTime = 0;
            long minElapsedTime = long.MaxValue;
            long[] elapsedTimes = new long[repeatTimes];

            var startTime = DateTime.Now;
            stopWatchAll.Start();

            for (int i = 0; i < repeatTimes; i++)
            {
                stopWatchItem.Reset();
                stopWatchItem.Start();

                // do the action here
                action();

                stopWatchItem.Stop();
                var elapsedTime = stopWatchItem.ElapsedMilliseconds;

                elapsedTimes[i] = elapsedTime;

                if (elapsedTime > maxElapsedTime)
                {
                    maxElapsedTime = elapsedTime;
                }

                if (elapsedTime < minElapsedTime)
                {
                    minElapsedTime = elapsedTime;
                }
            }

            stopWatchAll.Stop();
            var endTime = DateTime.Now;

            var elapsedTimeAll = stopWatchAll.ElapsedMilliseconds;
            var avgElapsedTime = elapsedTimes.Average();

            Debug.WriteLine(string.Format("Looping: {0} times", repeatTimes));
            Debug.WriteLine(string.Format("Min: {0} ms", minElapsedTime));
            Debug.WriteLine(string.Format("Max: {0} ms", maxElapsedTime));
            Debug.WriteLine(string.Format("Average: {0} ms", avgElapsedTime));
            Debug.WriteLine(string.Format("Standard Deviation: {0}", CalculateStandardDeviation(elapsedTimes)));
            Debug.WriteLine(string.Format("Details: {0} ms", string.Join(",", elapsedTimes.Select(t => t.ToString()).ToArray())));
            Debug.WriteLine(string.Format("Total Benchmark Elapsed Time in Ms: {0}, Start: {1}, End: {2}", elapsedTimeAll, startTime, endTime));
        }

        private static double CalculateStandardDeviation(IEnumerable<long> values)
        {
            if (values == null || values.Count() == 0)
                throw new ArgumentException();

            double ret = 0;

            //Compute the average
            double avg = values.Average();
            //Sum of (value - avg)^2
            double sum = values.Sum(d => Math.Pow(d - avg, 2));
            //Sample standard deviation: sqrt(sum / (n - 1))
            ret = Math.Sqrt(sum / (values.Count() - 1));

            return ret;
        }

Then we can use it like below

        [TestMethod]
        public void Do_Some_Benchmark_Test()
        {
            Run_Benchmark(new Action(() =>
                                         {
                                             Thread.Sleep(new Random().Next(1000, 2000));
                                             // do other stuffs like call method, etc
                                         })
                                         , 5 // repeatTimes
                                         );
        }

And you’ll get the results below ;)

Looping: 5 times
Min: 1371 ms
Max: 1768 ms
Average: 1545.6 ms
Standard Deviation: 181.006353
Details: 1488,1768,1371,1705,1396 ms
Total Benchmark Elapsed Time in Ms: 7729, Start: 04/06/2010 14:49:52, End: 04/06/2010 14:49:59

Nice huh ;)

PS: I got the calculation for the standard deviation from here