Wednesday, December 8, 2010

Speed of Contains() on string list and hash set in C#

Although I was 99.9% certain that a Contains() on a HashSet would outperform the same on a List, I needed to see it happen before I could sleep again.

Here is the code that I used to test it.

int iterateNum = 100000000;
string longString = "this is a very very very long string that is in the list";
string searchString = "this is the search string";

List<string> stringList = new List<string>();
for (int i = 0; i < iterateNum; i++)
{
    stringList.Add(longString);
}
stringList.Add(searchString);

Stopwatch stopwatch = Stopwatch.StartNew();
bool contains = stringList.Contains(searchString);
Console.WriteLine("contains on list took: " + stopwatch.Elapsed.ToString());

HashSet<string> hashSet = new HashSet<string>();
for (int i = 0; i < iterateNum; i++)
{
    hashSet.Add(longString);
}
hashSet.Add(searchString);
stopwatch.Reset(); // should be stopwatch.Restart();
contains = hashSet.Contains(searchString);
Console.WriteLine("contains on hashset took: " + stopwatch.Elapsed.ToString());

And here are the results:

contains on list took: 00:00:01.2830928
contains on hashset took: 00:00:00

i.e. it was not possible to measure how long the HashSet lookup took.

In the code I'm adding the string to find as the last string in the list to make sure that the entire list is searched before it finds the string.

Edit: As per Bill's suggestion in the first comment I changed the code to populate a smaller and more realistic list and then exercised the Contains() method multiple times to get an "average" of its performance. I also replaced the Reset() method on the Stopwatch object with Restart(). Here is the new code:

int iterateNum = 4000;
int testNum = 100000;
string longString = "this is a very very very long string that is in the list";
string searchString = "this is the search string";

List<string> stringList = new List<string>();
for (int i = 0; i < iterateNum; i++)
{
    stringList.Add(longString);
}
stringList.Add(searchString);

Stopwatch stopwatch = Stopwatch.StartNew();
for (int i = 0; i < testNum; i++)
{
    bool contains = stringList.Contains(searchString);
}
Console.WriteLine("contains on list took: " + stopwatch.Elapsed.ToString());

HashSet<string> hashSet = new HashSet<string>();
for (int i = 0; i < iterateNum; i++)
{
    hashSet.Add(longString);
}
hashSet.Add(searchString);

stopwatch.Restart(); // .NET 4 method on Stopwatch
for (int i = 0; i < testNum; i++)
{
    bool contains = hashSet.Contains(searchString);
}
Console.WriteLine("contains on hashset took: " + stopwatch.Elapsed.ToString());

And the results are:

contains on list took: 00:00:04.6986760
contains on hashset took: 00:00:00.0056837
 

 

Tuesday, October 26, 2010

Open Source Software Stupid Tax

I'm sure I read or heard this somewhere before but my searches have turned up nothing.

If you fix a bug or add a feature in an open source project and you don't contribute that fix back to the project, this is referred to as the stupid tax.

The reason that you fixed this bug in the first place is because it was preventing you from using the open source software the way that you wanted to use it. It might not even be a bug, it might be a feature that was missing that you needed.

Not contributing it back to the project is a stupid tax because it means that each time you want to upgrade to the latest version of the open source code you have to reapply your fix or feature. This is going to cost you in time and if you don't get it right first time it may introduce further bugs into your system. If you have someone else working on the team and they decide to upgrade the open source project to the latest version they might not know that you had "modded" the OS project. You might not even know that the developer that you replaced on the team applied a patch to it and then you're left scratching your head and trying to work out why the upgraded OS code is not working with your project.

Don't pay the stupid tax. Get your fixes and features back into the open source project as soon as possible and give yourself a pain free upgrade path to further releases of the OS project.

Saturday, October 16, 2010

RunAs Utility

I've recently discovered (been introduced to) the runas command line utility which I was previously unaware of. This command allows you to run in the context of another user which is particularly useful if you are testing access permissions under different user names. The syntax is:

runas /user:mydomain\myuser program.exe

The one annoyance is that when you run that command it requires that you type in the password, and if it's a particularly long and complex password I will often make a mistake. However, because this is in a command window you can copy the password to the clipboard, right click at the password prompt, and select Paste, and it works.

Try running the following with a different user who is registered on your local computer.

runas /user:mydomain\myuser cmd

This will bring up a new command prompt in the context of that user. Now type the command...

whoami

...in both command windows. You will see that each command window is running in the context of a different user.

Improving security and reducing HTTP header size in IIS

I've identified four unnecessary HTTP header elements that get transmitted with one of my ASP.NET MVC2 web sites. I wanted to remove these headers to improve security and reduce header size and thought that it would be easy to remove these HTTP headers from IIS7 in the same manner. It turned out that each of them had to be removed with a different technique.

X-Powered-By: ASP.NET

This header is removed through the IIS Manager. You can remove it on a site by site basis or remove it for the server. If removed from the server then it's removed from all sites on that server. This is the approach I prefer.

  1. Bring up IIS Manager.
  2. Click on the server name in the left panel
  3. Under the IIS section in the server area you will see HTTP Response Headers, double click on this.
  4. Click on the line that says "X-Powered-By ASP.NET Local"
  5. In the Actions pane on the right click Remove.
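
If you'd rather do this in configuration than in the IIS Manager UI, I believe the equivalent (a sketch, not part of the original steps) is a customHeaders remove element in web.config:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
</system.webServer>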

Server: Microsoft-IIS/7.0

This one is a bit more tricky to remove and needed some code to be added to the web site.

In the MasterPage.master code behind file in the Page_Load function I put the following:

HttpContext.Current.Response.Headers.Remove("Server");
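
If you don't want to touch every master page, a variant that I believe works under the IIS7 integrated pipeline (this is a sketch, not what I originally used) is to strip the header once per response in global.asax.cs:

protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    // Remove the Server header from every outgoing response in one place.
    HttpContext.Current.Response.Headers.Remove("Server");
}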

X-AspNet-Version: 4.0.30319

To get rid of this one I had to edit the web.config file and set the enableVersionHeader attribute to false:

<httpRuntime enableVersionHeader="false" />

X-AspNetMvc-Version: 2.0

In the constructor for the Global class in the global.asax.cs file I set the static DisableMvcResponseHeader property of the MvcHandler class to true.

public Global()
{
      MvcHandler.DisableMvcResponseHeader = true;
}

Sunday, October 3, 2010

The Male Brain by Louann Brizendine

Just finished The Male Brain by Louann Brizendine.

I was very surprised at how much I liked this book and how good it was. I can't remember who recommended it or why I picked it up because it's not the sort of thing I would have normally read.

I particularly enjoyed the section about the teenage male brain. Finally I don't feel so guilty about the way that I behaved when I was a teenager, I was wired to behave that way.

I believe that The Male Brain is a sequel to The Female Brain, which I have not read, and I'm not sure that you need to read it after reading The Male Brain. I say this because at every stage of life she compares and contrasts the male brain to the female brain, so you're really learning about both and their differences.

At the end of the day the reason why we act the way that we do is because we have these chemicals marinating our brains and pushing us back towards the stone age man.

Nested Parallel.ForEach()

I recently wrote some code that was using Parallel.ForEach() and in the function called during the Parallel.ForEach() I nested another call to Parallel.ForEach() to process a second array. This then got me thinking. If the first array that you are processing is larger than the number of cores on the machine that you're running it on then there's no advantage to nesting a second Parallel.ForEach(). You might as well just loop in that function because we'll already have all the processors saturated. So I put together a test to see the difference between Parallel.ForEach() with a regular loop and Parallel.ForEach() with a nested Parallel.ForEach().

static void ParallelMainNestedTest()
{
    int[] lowerRange = Enumerable.Range(0, 10).ToArray();
    int[] upperRange = Enumerable.Range(1000, 1010).ToArray();

    Stopwatch timer = Stopwatch.StartNew();
    Parallel.ForEach(lowerRange, lowerValue =>
        DoItemNotNested(lowerValue, upperRange));
    Console.WriteLine("Not Nested: {0}", timer.Elapsed);

    timer.Restart();
    Parallel.ForEach(lowerRange, lowerValue =>
        DoItemNested1(lowerValue, upperRange));
    Console.WriteLine("Nested: {0}", timer.Elapsed);

    Console.ReadKey();
}

static void DoItemNotNested(int lowerValue, int[] upperRange)
{
    foreach (int upperValue in upperRange)
    {
        Thread.Sleep(10);
    }
}

static void DoItemNested1(int lowerValue, int[] upperRange)
{
    Parallel.ForEach(upperRange, upperValue =>
        DoItemNested2(lowerValue, upperValue));
}
static void DoItemNested2(int lowerValue, int upperValue)
{
    Thread.Sleep(10);
}

The results of this test showed:

Not Nested: 00:00:14.6155628
Nested: 00:00:09.0907248

Now the reason that I put a sleep in the inner loop of the two tests is because this test code is designed to simulate network access (a REST call to a site) and I wanted to create a test where there would be idle wait time to determine which strategy is better. It appears, as you can see, that a pair of nested Parallel.ForEach() calls is more efficient in this scenario.

What if there was no latency in the inner loop and it was all down to the speed of the processor?

My suspicion is that the nested Parallel.ForEach() would be slower because of the extra overhead in managing the extra threads. To test this I replaced the Thread.Sleep(10) call with the following:

static Random r = new Random(Guid.NewGuid().GetHashCode());
private static void DoSomething(int lowerValue, int upperValue)
{
    for (int i = 0; i < 5000; i++)
    {
        int rand = r.Next(lowerValue, upperValue);
        rand = rand * rand;
    }
}

The result:

Not Nested: 00:00:05.6169188
Nested: 00:00:06.1500767

Confirmed: The nested Parallel.ForEach() shows a degradation in performance because of the extra overhead in managing all those extra threads.

Conclusion: Use nested Parallel.ForEach() calls when you know that the inner loop will be waiting on external resources (database, disk, network, etc.). Use regular looping in the inner loop if you are doing a math- or algorithm-intensive calculation that relies on the CPU alone.

Wednesday, September 15, 2010

Drive: The Truth About What Motivates Us

Another audio book finished: Drive: The Surprising Truth About What Motivates Us by Daniel Pink.

Excellent book! If you've read any of the books mentioned in previous blog posts you will notice that you've already heard many of the examples that he uses. He quotes experiments done by Dan Ariely in Predictably Irrational and also uses many of the same examples as him. He also quotes the oft quoted Wikipedia example which I'm getting tired of.

In Drive, Pink discusses what motivates us and demonstrates that extrinsic motivators (if-then rewards) only work on routine tasks and have a negative impact on creative work, which requires intrinsic motivators.

He talks about a new "Operating System" for business revolving around the three elements of Autonomy, Mastery, and Purpose.

Autonomy - people want to direct their own lives.

Mastery - we want to get better and become experts or go-to-guys for something that matters.

Purpose - that what we are doing is in the service of something larger than ourselves, something meaningful.

I think that as a software engineer it's pretty easy to be in a job that satisfies the first two elements. Most development work is based around results and we're given the autonomy to produce those results. We have to master the subject to be able to do the job. Those are a given. I think that the job of software engineer might fall short when it comes to purpose and it's sometimes difficult to see the big picture of what you're contributing to.

Concepts that I enjoyed from this book

FedEx Days - Australian software company Atlassian give their employees a FedEx day once a quarter. Engineers start work on a Thursday evening at 5pm and have to deliver something by 5pm on the Friday - i.e. have to deliver something overnight - hence FedEx - so long as it has nothing to do with their regular job.

Google 20% Time - 20% of your time is spent working on a project that might benefit the company but has nothing to do with your day-to-day work. Similar to FedEx Days but you're given 1 day a week to do this instead of a 24-hour period once a quarter.

ROWE - Results Only Work Environment. No schedule, don't need to be in the office, meetings are optional. Personally I think that you have to have the right type of employee to make this work.

Tuesday, September 14, 2010

Create a chart using .NET 4 and ASP.NET MVC





This is a "pattern" that I have come up with to create a chart on-the-fly using .NET 4 and MVC. I happen to be using MVC2 but I don't believe that there are any v2 features that I'm using so this will work equally well in MVC 1.0. As you'll notice it can also easily be adapted to be used in a web form web app as well.
At the end of 2008 Scott Guthrie announced ASP.NET Charting, which you could download and add to your project. Microsoft had bought Dundas Charting and was now giving it away for free. In .NET 4 they have bundled that charting library so you don't need a separate download; all you have to do is include the System.Web.UI.DataVisualization.Charting namespace. However, I'm getting ahead of myself.
Here are the components to the on-the-fly chart generation in an ASP.NET MVC web application.
The Controller
public ActionResult Chart(int id)
{
    ChartGen cg = new ChartGen();
    MemoryStream ms = cg.GenerateChart(id);

    return File(ms.ToArray(), "image/png", "mychart.png");
}

It really is that simple. All the heavy lifting takes place in my ChartGen class. What is happening here is that the GenerateChart() function is getting a memory stream and that is being passed back as a file action result of type image/png. The chart that is being generated is identified by the parameter (id) passed in to the Chart() action. In this example it's a dummy placeholder value but it would allow you to pull the data for the chart from a database based on that index.
The View
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <p>
        <img src="/Home/Chart/1" alt="This is a sample chart" />
    </p>
</asp:Content>

In its simplest form the view references the controller action in the src attribute of an image tag. That's really all there is to it.
The Chart Generator
public Chart chart { get; set; }
public MemoryStream GenerateChart(int symbolId)
{
    List<Price> priceList = GetPrices().ToList();

    chart = new Chart();
    chart.Customize += new EventHandler(chart_Customize);
    chart.BorderSkin.SkinStyle = BorderSkinStyle.Emboss;
    chart.BackColor = ColorTranslator.FromHtml("#D3DFF0");
    chart.BorderlineDashStyle = ChartDashStyle.Solid;
    chart.Palette = ChartColorPalette.BrightPastel;
    chart.BackSecondaryColor = Color.White;
    chart.BackGradientStyle = GradientStyle.TopBottom;
    chart.BorderlineWidth = 2;
    chart.BorderlineColor = Color.FromArgb(26, 59, 105);
    chart.Width = Unit.Pixel(500);
    chart.Height = Unit.Pixel(300);

    Series series1 = new Series("Series1");
    series1.ChartArea = "ca1";
    series1.ChartType = SeriesChartType.Candlestick;
    series1.Font = new Font("Verdana", 8.25f, FontStyle.Regular);
    series1.BorderColor = Color.FromArgb(180, 26, 59, 105);

    foreach (Price dayBar in priceList)
    {
        bool upDay = dayBar.Open < dayBar.Close;
        series1.Points.Add(new DataPoint
        {
            BackSecondaryColor = upDay ?
                    Color.LimeGreen : Color.Red,
            BorderColor = Color.Black,
            Color = upDay ? Color.LimeGreen : Color.Red,
            AxisLabel = dayBar.Date.ToString("dd-MMM-yy"),
            YValues = new double[] { (double)dayBar.High,
                (double)dayBar.Low, (double)dayBar.Open,
                (double)dayBar.Close }
        });
    }

    chart.Series.Add(series1);

    ChartArea ca1 = new ChartArea("ca1");
    ca1.BackColor = Color.FromArgb(64, 165, 191, 228);
    ca1.BorderColor = Color.FromArgb(64, 64, 64, 64);
    ca1.BorderDashStyle = ChartDashStyle.Solid;
    ca1.BackSecondaryColor = Color.White;
    ca1.ShadowColor = Color.Transparent;
    ca1.BackGradientStyle = GradientStyle.TopBottom;

    ca1.Area3DStyle.Rotation = 10;
    ca1.Area3DStyle.Perspective = 10;
    ca1.Area3DStyle.Inclination = 15;
    ca1.Area3DStyle.IsRightAngleAxes = false;
    ca1.Area3DStyle.WallWidth = 0;
    ca1.Area3DStyle.IsClustered = false;

    ca1.AxisY.LineColor = Color.FromArgb(64, 64, 64, 64);
    ca1.AxisX.MajorGrid.LineColor = Color.Transparent;
    ca1.AxisY.MajorGrid.LineColor = Color.FromArgb(64, 64, 64, 255);
    ca1.AxisY.MajorGrid.LineDashStyle = ChartDashStyle.Dash;

    double max = (double)priceList.Select(a => a.High).Max();
    double min = (double)priceList.Select(a => a.Low).Min();
    double rangeAdjust = (max - min) * 0.03;
    max += rangeAdjust;
    min -= rangeAdjust;
    ca1.AxisY.Minimum = min;
    ca1.AxisY.Maximum = max;

    chart.ChartAreas.Add(ca1);

    MemoryStream memoryStream = new MemoryStream();
    chart.SaveImage(memoryStream, ChartImageFormat.Png);
    memoryStream.Seek(0, SeekOrigin.Begin);

    return memoryStream;
}

void chart_Customize(object sender, EventArgs e)
{
    CustomLabelsCollection yAxisLabels = chart.ChartAreas["ca1"].AxisY.CustomLabels;

    for (int labelIndex = 0; labelIndex < yAxisLabels.Count; labelIndex++)
    {
        decimal price = Convert.ToDecimal(yAxisLabels[labelIndex].Text);
        // Do your formatting of price here
        yAxisLabels[labelIndex].Text = (price/100).ToString("0.00");
    }
}

Random r = new Random((int)DateTime.Now.Ticks);
IEnumerable<Price> GetPrices()
{
    int open, high, low, close = r.Next(4000, 6000);
    for (int i = -20; i < 1; i++)
    {
        open = r.Next(close - 30, close + 30);
        close = r.Next(open - 70, open + 70);
        high = Math.Max(open, close);
        high = r.Next(high, high + 100);
        low = Math.Min(open, close);
        low = r.Next(low - 100, low);

        yield return new Price
        {
            Date = DateTime.Now.AddDays(i).Date,
            Open = open,
            High = high,
            Low = low,
            Close = close
        };
    }
}

That's a chunk of code to read through but it's not that bad.
We start off by creating a Chart object and attaching an event handler to the Customize property. This event is raised when all the axes and data have been calculated for the chart and just before the chart is rendered. This allows you to change formatting and options on the chart. For example, the Chart object will generate values for you on the Y-Axis; when the Customize event is raised you can format these values.
There are a bunch of colors and formats you can set for the chart.
The Series object allows you to define a series of data that will be displayed on the chart. In this example we're displaying stock market data in a candlestick format so we set the appropriate data. Once we've defined the series we add it to the chart object.
The rest of the code addresses mostly formatting. There is some code that sets the maximum and minimum values for the Y-Axis so that they are 3% off the lows and highs.
Finally we save the chart image to a memory stream and return it.
I usually have caching in there as well and will cache the memory stream for a period of time using the id passed in to the function as the key to the cache. That way I can have many chart images in memory cache and pull them out without causing them to be generated each time. I have also excluded all the try catch blocks that I would usually have in there.
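
As an illustration of that caching idea, here is a minimal sketch that could sit alongside GenerateChart() in the ChartGen class (the cache key format and the 10-minute lifetime are my own assumptions; it uses HttpRuntime.Cache from System.Web and System.Web.Caching):

public MemoryStream GetCachedChart(int symbolId)
{
    string cacheKey = "chart_" + symbolId;
    byte[] cached = HttpRuntime.Cache[cacheKey] as byte[];
    if (cached == null)
    {
        // Generate once and keep the raw PNG bytes around for ten minutes.
        cached = GenerateChart(symbolId).ToArray();
        HttpRuntime.Cache.Insert(cacheKey, cached, null,
            DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
    }
    return new MemoryStream(cached);
}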
Here is the Price class that you'll need in the code above:
public class Price
{
    public DateTime Date { get; set; }
    public double Open { get; set; }
    public double High { get; set; }
    public double Low { get; set; }
    public double Close { get; set; }
}

This is the final result:


You, Inc.: The Art of Selling Yourself

Recently finished listening to the audio version of You, Inc.: The Art of Selling Yourself by Harry Beckwith.

There was nothing in this book that I could disagree with. It all made perfect sense. The main takeaway that the book kept on hinting at was to look around you and take note of who impresses you, how they do it and why. Same applies to companies. Compare people to each other. Why does Jill impress you and Nancy not as much? Nancy knows more, speaks better, and is friendlier but she wears jeans and T-shirts to work. Jill is always wearing a business suit, that's all it takes. This is of course a contrived and simple example but it highlights how we are always selling ourselves in everything we do, say, wear and eat.

Sex, politics and religion

It seems that I have known forever that you don't talk about sex, politics, and religion in formal environments. I once attended a lunch with 7 journalists and during the first 30 minutes of lunch all that was talked about was sex, politics and religion. When I jokingly pointed this out they said that as journalists they could talk about anything at any time. So maybe this rule doesn't apply to journalists. However, it definitely applies to all work environments that I have been in and I have seen some interesting mistakes made through slips of the tongue on these three forbidden topics.

Apart from listening to this book I also have the hard copy. One thing that I didn't realize about it while listening to it is that it's made up of lots of short one page chapters. Easy vignettes that can each be read in 30 seconds. This is a great book to keep in the lavatory - if that isn't awesome praise then nothing is.

Monday, September 13, 2010

Strengths Finder 2.0

Just finished doing the test and reading the book for Strengths Finder 2.0 by Tom Rath.

This is an interesting "book" to read from the point of view that the book is only "usable" by one person. You cannot gift the book after you've read it. The book starts off with 30 pages of introduction which are fairly quickly put behind you and effectively convince you of the merits of continuing. You then turn to the back of the book and cut open the packet which has your unique access code to the StrengthsFinder 2.0 assessment and website. Using this one-off code you do the 30 to 40 minute test online and it spits out your top 5 strengths. Mine are:

  • Learner
  • Activator
  • Analytical
  • Achiever
  • Focus

Once you have the list you then read the 5 Themes and Ideas for Action from the list of 34 themes and actions in the next part of the book. These themes explain how these strengths impact who you are, give some quotes from real world people who share those attributes, and finally some action points that will help you leverage those strengths. There are also a few action points for working with people who have those strengths.

Learner

You are energized by the steady and deliberate journey from ignorance to competence. Understand how you best learn (me: by doing). Track progress when you're learning and find opportunities to do courses and further yourself.

Activator

You learn more from real experience than from theoretical discussions. Most developers (software engineers) would fall into the activator category as I think that most of us "click" when we do rather than when taught or reading.

Analytical

Prove to me that this is true. "Show me the money." I'm not going to believe your theories until you've demonstrated to me that they have some sound validity. Objective and dispassionate.

Partner with someone with strong Activator talents. This person's impatience will move you more quickly through the analytical phase into the action phase. (This means, I think, that I should partner with myself.)

Achiever

You feel as if every day starts at zero. By the end of the day you must achieve something tangible in order to feel good about yourself. And by "every day" you mean every single day...

I completely agree with this assessment and how it applies to me. I also agree with the statement that the feeling of achievement is short lived and I need to start looking for the next item. In general I agree with the action items and considering the Learner I have to accept that the "attain certifications" item is going to have to now be on my list.

Focus

You stay on the main road and don't wander off down alleys that don't benefit the final goal. You are goal oriented and driven. You must have a purpose and goal to focus on. Include timelines and measurements in goals.

Friday, September 10, 2010

Linchpin by Seth Godin

I recently finished reading Linchpin by Seth Godin. This is a great read and packed with insight. The full title is Linchpin: Are You Indispensable?

I found the book from reading Seth's blog which I try and read daily but usually catch-up with it on weekends.

The crux of the book is about how some employees are able to do anything. They are the go-to guys who seem to know someone or something that can get the job done and have the talent for making it happen. They are the linchpins.

A lot of the book discusses art and the generosity of non-reciprocal giving. I've read a number of books like this lately and the one example that keeps on coming up is Wikipedia and I have to say that this example has been overdone many times. Perhaps it's still relevant and perhaps it's the largest and best known example of its time but I would like to see some other examples apart from Wikipedia.

Seth is a big fan of shipping and doing so on time. He's right and to get a product to market you have to ship. Don't look for perfection or you will never ship.

Thursday, September 9, 2010

X.CO Url Shortener

Go Daddy have just launched their x.co url shortener which is the shortest url shortener that I have been able to find. If you are doing shortening on your server (as opposed to doing it client side using JavaScript) then here is some C# code that will work on your web server to shorten a url using their API. You will need your API key which can be found here: Integrating x.co with applications you develop.

public class XcoApi
{
    private const string apiKey = "put your API key here";

    public static string ShortenUrl(string longUrl)
    {
        var shortUrl = string.Format(
            "http://x.co/Squeeze.svc/text/{0}?url={1}",
            apiKey, HttpUtility.UrlEncode(longUrl));

        using (WebClient wc = new WebClient())
        {
            return wc.DownloadString(shortUrl);
        }
    }
}
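
Usage is then a one-liner (the long URL below is just an example):

string shortUrl = XcoApi.ShortenUrl("http://www.example.com/some/very/long/page.aspx?with=parameters");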

The main advantage of shortening a URL on the server side instead of the client side is that you can keep your API key hidden and therefore usable only by you.

Friday, August 27, 2010

Erase an area in Paint.Net using the Paint Bucket

There might be an easier way to do this but I couldn't find it and it took me forever to work this out so I hope that it helps someone else as well.

Using the excellent and free Paint.Net image editing software I wanted to erase an area, but I wanted to do it with the paint bucket so that it would erase all the complicated edges that I couldn't get to.

Start off by selecting a small area that you're going to erase with the Rectangle Select tool and hitting the delete key. You now have a small checkered section.

Select the Color Picker tool and click in the just deleted area and this will set your color to transparent (deleted/erased).

Now click on the Paint Bucket tool and click in the area(s) that you want to erase with this tool and it will use the Paint Bucket to erase the area(s).

Please let me know if there's an easier way to do this.

 

Wednesday, August 25, 2010

Myers Briggs Type Indicator ENTJ Executive Fieldmarshal

I just took the Myers Briggs Type Indicator assessment and discovered that I'm an ENTJ. This is how my score pairs panned out:

E=18 & I=3

S=7 & N=15

T=28 & F=0

J=24 & P=5

Without reading any further about what an ENTJ personality type is I can only assume that we do not have a problem exposing our scores or writing about our type, otherwise I wouldn't be doing this.

In reading what Wikipedia has to say about this I discovered the Keirsey Temperaments, which classify me as a Fieldmarshal. The personality page calls me The Executive. I'm now starting to feel pretty full of myself and without reading what The Executive is all about I give it to my wife to read to me.

She slows down, reads extra loud, and repeats some of the passages:

"...they are not naturally tuned in to people's feelings, and more than likely don't believe that they should tailor their judgments in consideration for people's feelings. ENTJs, like many types, have difficulty seeing things from outside their own perspective. Unlike other types, ENTJs naturally have little patience with people who do not see things the same way as the ENTJ..."

"...sentiments are very powerful to the ENTJ, although they will likely hide it from general knowledge, believing the feelings to be a weakness. Because the world of feelings and values is not where the ENTJ naturally functions, they may sometimes make value judgments and hold onto submerged emotions which are ill-founded and inappropriate, and will cause them problems..."

I read that other ENTJs include Margaret Thatcher and Bill Gates.

Wednesday, August 11, 2010

C# Script to list local users and disabled status on Windows Servers

The following script will list all local users and their disabled status on each of the Windows servers that you put into the "servers" array variable. This code is written in C# and you will need to reference the System.DirectoryServices assembly.


using System;
using System.DirectoryServices;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] servers = { "ServerName1", "ServerName2", "ServerName3" };
            DirectoryEntry de = new DirectoryEntry();

            foreach (string server in servers)
            {
                de.Path = "WinNT://" + server + ",computer";

                Console.WriteLine("Server: " + server);
                foreach (DirectoryEntry d in de.Children)
                {
                    if (d.SchemaClassName == "User")
                    {
                        int userFlags = (int)d.Properties["UserFlags"].Value;
                        // Bit 0x2 of UserFlags (ADS_UF_ACCOUNTDISABLE) means the account is disabled.
                        bool disabled = (userFlags & 2) == 2;
                        string name = (string)d.Properties["Name"].Value;
                        Console.WriteLine(name + " (" + disabled + ")");
                    }
                }
                Console.WriteLine();
            }
           
            Console.ReadLine();
        }
    }
}
 

Thursday, July 22, 2010

Factorial Function in C#

The factorial function is a classic interview question. It opens the door to a discussion about stack overflow (if done recursively) and integer overflow (if the parameter is too large) as well as discussions around how to handle and catch errors.

Here is one way to do it using recursion:

static Int64 Factorial(int factor)
{
    if (factor > 1)
    {
        return factor * Factorial(factor - 1);
    }
    return 1;
}

Here is another way to do it using a loop: 

static Int64 Factorial(int factor)
{
    Int64 factorial = 1;
    for (int i = 1; i <= factor; i++)
    {
        factorial *= i;
    }
    return factorial;
}

Neither function does any sanity checks for input (e.g. >= 1) and assumes that we're looking for a factorial of 1 or greater.
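
If you did want those sanity checks, a minimal sketch might look like this (the bounds and the choice of exception are my own):

static Int64 CheckedFactorial(int factor)
{
    // 20! is the largest factorial that fits in an Int64.
    if (factor < 0 || factor > 20)
        throw new ArgumentOutOfRangeException("factor");

    Int64 factorial = 1;
    for (int i = 2; i <= factor; i++)
    {
        factorial *= i;
    }
    return factorial;
}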

There's a third and much better way to do factorials if you are going to use them in code that needs to be optimized. In fact I can't think of a reason why you wouldn't want to use the following method. If you are calculating an Int64 factorial then the maximum factor that you can calculate it for is 20. After that it will overflow the Int64 type. If you are calculating an Int32 factorial then the highest you can go is 12. 

// factors64[n] holds n! for n = 0 to 20 (note that 0! is 1).
static Int64[] factors64 = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320,
362880, 3628800, 39916800, 479001600, 6227020800, 87178291200,
1307674368000, 20922789888000, 355687428096000, 6402373705728000,
121645100408832000, 2432902008176640000 };
 
static Int64 Factorial(int factor)
{
    return factors64[factor];
}

The tests that I ran showed the recursive factorial function to run 14 times slower than the array lookup and the loop to run 5 times slower.

Friday, July 9, 2010

Google AdSense Best Day Of Week

I've just been doing some Google AdSense analysis for some web sites that have moderate traffic. The owner said that I can publish this bit of information from her site. She's interested to hear if other AdSense publishers have had similar results.

The objective was to find out which day of the week the Cost Per Click (CPC) was the highest. So we ran the averages across a number of pages and the average CPC by day of week ranked the days in order from worst to best as follows:

  1. Monday
  2. Sunday
  3. Thursday
  4. Friday
  5. Wednesday
  6. Tuesday
  7. Saturday

Before running the analysis she was pretty certain that Tuesday was the best day of the week and was surprised to see that Saturday was in fact the best.

Monday, July 5, 2010

Parallel For and ForEach in .Net 4

I've started looking at the Parallel.For and Parallel.ForEach methods from .NET 4's Task Parallel Library. Before I use something as complex and potentially dangerous as this I like to test it out to make sure I'm actually going to get a performance boost before going down that path.




Here's the console app that I wrote to do the test:

static void Main(string[] args)
{
    int[] someInts = Enumerable.Range(0, 100).ToArray();

    Stopwatch timer = Stopwatch.StartNew();
    foreach (int item in someInts)
    {
        DoItem(item);
    }
    Console.WriteLine("Sequential: {0}", timer.Elapsed);

    timer.Restart();
    Parallel.ForEach(someInts, item => DoItem(item));
    Console.WriteLine("Parallel: {0}", timer.Elapsed);

    Console.ReadKey();
}

static void DoItem(int item)
{
    Random r = new Random((int)DateTime.Now.Ticks);
    for (int i = 0; i < (item * 100000); i++)
    {
        int j = r.Next(0, 10000);
        j = j * j;
    }
}


I ran this a number of times on a quad core machine (Intel Core2 Quad Q9400 @ 2.66GHz) on Windows 7 x64. Average time for the sequential loop was 14.7 seconds and for the parallel loop was 5.6 seconds, for a ratio of 2.6 times faster using the parallel loop. This seems about right: you're going to get a loop like this to be about (cores - 1) times faster because the thread management will "cost" you one of your cores. So on an 8 core machine expect this loop to be 7 times faster and on a quad expect 3 times, which is close to what I got.


Saturday, July 3, 2010

CREATE FILE encountered operating system error 5 Access is denied

I have an MSSQL 2008 R2 installation and was trying to attach a DB from a non-R2 installation using SQL Server Management Studio (SSMS) and I was getting the following error:

CREATE FILE encountered operating system error 5(Access is denied.) while attempting to open or create the physical file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\MyDataBase.mdf'. (Microsoft SQL Server, Error: 5123)

Found plenty of Google results pointing me to suggestions about SQL Server 2000 and 2005 with instructions on how to add user permissions and which user account needed to be added.

Turned out that all I needed to do was to run SSMS as an administrator and the attach worked.

Tuesday, June 29, 2010

Rocket Surgery Made Easy by Steve Krug

I just finished reading Rocket Surgery Made Easy by Steve Krug and recommend you read it.

It's sub-titled The Do-It-Yourself Guide to Finding and Fixing Usability Problems.

It's a great read. He writes well, has appropriate sarcasm and wit at the right places, and delivers everything you need to know about usability testing. I bet that even usability professionals will find information in there that's of use even though it's not targeted at them. He also gets extra points for quoting Douglas Adams of Hitchhiker's Guide fame: "I love deadlines, I love the whooshing noise they make as they go by."

Right at the beginning of the book he gives you a link to an online video of someone taking a usability test and him guiding them through the test. This would really suck if you didn't have internet access close to you when you were reading the book so watch the usability video as soon as you get the book and don't even wait to get to that part.

One of the cool things that I learned (and I have no idea why I didn't think this would exist) is that you can outsource your usability testing and there are sites on the web that will do it for you. This is a no-brainer if you don't have the resources to set this up yourself and it sounds like a reasonable price to pay for the information that you'll get out of it. Just point the candidate at your site (or your beta site) and let them at it.

I thought he'd listed several online usability sites but while paging back through the book I can only find www.usertesting.com.

 

Monday, June 21, 2010

Building Quality Links to your Site

Great blog post by Google Webmaster Central Blog about building quality links to your site.

This paragraph sums up why you shouldn't engage in buying links or link exchange programs:

It's important to clarify that any legitimate link building strategy is a long-term effort. There are those who advocate for short-lived, often spammy methods, but these are not advisable if you care for your site's reputation. Buying PageRank-passing links or randomly exchanging links are the worst ways of attempting to gather links and they're likely to have no positive impact on your site's performance over time. If your site's visibility in the Google index is important to you it's best to avoid them.

From the Buying PageRank link:

Buying or selling links that pass PageRank is in violation of Google's webmaster guidelines and can negatively impact a site's ranking in search results.

There is no point or reason to be trying to game Google's Page Rank system. It's going to be a ton of effort and it's not going to work. Just don't waste your time with it.

Saturday, June 19, 2010

Request.Params == QueryString + Form + ServerVariables + Cookies

I'm doing some work with the ASP.NET Request object and have just discovered that I can get all of the "params" from one property on the Request object.

If I'm not already in a context where the Request object is a member of that context (such as an MVC Controller or a code-behind page) then I will usually access the Request object through the HttpContext.Current object. I always wrap this access in a double are-you-null check before trying to access it:

if (HttpContext.Current != null && HttpContext.Current.Request != null)
{
}

These four objects:

HttpContext.Current.Request.QueryString
HttpContext.Current.Request.Form
HttpContext.Current.Request.ServerVariables
HttpContext.Current.Request.Cookies

Can be accessed through a single collection if you reference the:

HttpContext.Current.Request.Params

which gets a combined collection of the other four.
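
As a quick illustration (the key names below are just examples):

// "id" could come from the query string, a posted form field, a cookie,
// or a server variable - Params searches all four collections.
string id = HttpContext.Current.Request.Params["id"];
string userAgent = HttpContext.Current.Request.Params["HTTP_USER_AGENT"];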

 

Thursday, June 10, 2010

Google home page has images




I noticed this morning that Google has taken a leaf from the Bing book and added images as a background to their home page. I think this is great as the images are always fascinating and high quality. However, the one thing that irks me with both the Google and Bing home page images is that there doesn't appear to be an easy way to find out where the picture was taken. I would love to know where this one was taken, but it may always be a mystery to me unless someone posts a comment here letting me know the source.

Wednesday, June 2, 2010

Upgrading Silverlight project from v2 to v3 (with .NET 4 and MVC2)

I have a site for Online Calculators that I set up about 2 years ago to get up to speed on Silverlight 2. I've been going through and converting all of my web sites to VS2010, MVC 2, .NET 4, and jQuery 1.4.1.

I anticipated that the calculators were going to be a bit difficult because I'd also included some CodePlex Silverlight Futures (or whatever it was called) charting libraries in the original project and because I was also upgrading the hosting site from .NET 2.0 to .NET 4.0 as well as MVC1 to MVC2.

I was very impressed with the ease of the upgrade. The VS2010 wizard did it all and it probably took me less than half an hour to complete everything from end to end, including setting up the web site on a new server.

The part that I was most concerned with was the charts in the Mortgage Calculator, because of the origin (CodePlex) of the libraries and the chance that they may have changed for Silverlight 3. However, those went swimmingly. The host site is very basic and so the conversion to .NET 4 and MVC 2 was not a great challenge for the wizard.

Monday, May 24, 2010

Google TV and Advertising Commercials

I was pretty excited when I watched the Google TV announcement last week.

Today I read a couple of commentaries about it and they went on about the features etc. but nobody mentioned the advertising. This surprised me as my initial guess is the reason that Google have entered this market is to gain more surface area for their advertising machine. In my opinion this is great news for all of us.

Everybody complains about the commercial breaks and if you ask someone with a TiVo why they like it they almost always tell you it's because they can fast forward through the commercials. If we don't have commercials then there's not going to be anyone to pay for the movies that we watch. That means smaller budgets for the studios and lower quality for us. We have to have adverts or we will end up moving to a subscription-based viewing platform such as Netflix and that might kill off competition for great series and movies and lower the quality as well.

The perfect compromise is to show commercials that you want to see. This, I believe, can be achieved with Google TV. By registering yourself and your interests on your device you make it easy for Google (or whatever content provider) to target you with relevant adverts. If I'm in the market to buy a car then I can change my profile settings to reflect that. I can give them my location, price range and model types, and dealers in my area can then show ads to me during the breaks. The local dealers aren't going to pay much for that ad space because it will only be delivered to a few people like myself who are in the market for their product. National campaigns probably fall on deaf ears 95% of the time. Targeted campaigns should be hitting willing-to-buy consumers 100% of the time. In reality this will of course be lower because there will also be car enthusiasts etc. who will be tuned into that type of ad because of their fanatical nature.

Once you have bought your car and are no longer in the market for one you can change your profile and pick something else that interests you. The question "how do we get consumers to change and/or set their profiles?" then arises. Again, this is easy. Constantly remind them that if they want to see adverts about products that interest them then they should update their profiles.

 

Friday, May 21, 2010

Dropbox File Synchronization

I've been using DropBox for a couple of months now and I'm very impressed. I keep my crucial source code files on there and sleep well at night knowing that they are backed up "off-site." That was my primary goal in using DropBox. However, now, I'm using more of the features and I'm using it to keep my DEV server synchronized with my PROD server with respect to files that have been uploaded by users. This has always been a pain for me to do but now it happens automatically. I've also started investigating the possibility of using it for Continuous Integration in a Cruise Control build environment. I haven't set that up yet but I don't see any reason why that wouldn't work.

Thursday, May 13, 2010

Identify duplicate IP address use to the public

The Domain

You have a website that allows the public to add comments.

For example

  1. You have a forum that users can post to.
  2. You have a product listing that visitors can post reviews against.
  3. Any other listing that the public can comment on that would allow shills to manipulate public opinion for or against a topic/product.

The Problem

Public comments can appear to be from multiple independent sources when in fact they are the same person posting multiple times under different user names or email addresses.

A Solution

Changing your email address is easy, but posting from multiple different IP addresses is more difficult. One solution to the problem is displaying the user's IP address next to each posting. However, this is often considered unacceptable as IP addresses are sometimes considered private.

You could generate an image for each unique IP address and display that. This would allow people reading comments to link together identical IP images and know that the comments came from the same source. This is fairly easily done by using Gravatar. You can pick any domain to base your naming convention on; Gravatar doesn't care. Let's use example.com. If two visitors used the same IP address of, say, 96.125.6.87, then you would generate a fake email of 96.125.6.87@example.com and use the Gravatar algorithm to generate an image URL and display this as an image next to the post. This avoids exposing the IP address to the public on your web page.
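
Here's a rough sketch of that idea in C# (the example.com domain comes from above; the d=identicon fallback, which asks Gravatar to generate a unique image for an unknown email, is my own choice):

static string IpImageUrl(string ipAddress)
{
    // Gravatar hashes the trimmed, lowercased email address with MD5.
    string fakeEmail = (ipAddress + "@example.com").Trim().ToLowerInvariant();

    using (var md5 = System.Security.Cryptography.MD5.Create())
    {
        byte[] hash = md5.ComputeHash(System.Text.Encoding.UTF8.GetBytes(fakeEmail));

        var sb = new System.Text.StringBuilder();
        foreach (byte b in hash)
        {
            sb.Append(b.ToString("x2"));
        }

        return "http://www.gravatar.com/avatar/" + sb.ToString() + "?d=identicon";
    }
}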

This would have the added benefit of also allowing users (or Admins) to quickly visually spot identical IP addresses being used. If you were serious about this though you would probably just write an admin report that picks out duplicate IP addresses on pages.

Friday, April 30, 2010

VS2010 "cannot create the window"

I installed the release-to-market (RTM) version of Visual Studio 2010 and it worked well for a few days and then suddenly it started giving me a "cannot create the window" error and refused to start.

I did a whole bunch of searching and discovered that there were some incompatibilities with Office 2010 beta (an incompatible DLL) that was causing most of the problems. Not in my case though because I'd never installed Office 2010. If, however, you are reading this and you have installed a version of Office 2010 then you might want to search further as that might be your problem.

I contacted the developers at Microsoft and they were outstanding in the help that they provided me. I downloaded a debugger and created some memory dumps of what was happening in VS2010 when it started and sent them off to Microsoft. They isolated that the problem was happening when the machine.config file was being loaded. I know that I had edited my machine.config and so I became suspicious of my edit.

A quick aside about machine.config's. On a 32-bit operating system with .NET 2 installed there's only one machine.config and that usually lives here:

C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG

If you have installed VS2010 then that means that you have also installed .NET 4 so there's another machine.config for .NET 4 and you can find that here:

C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config
(On my machine this is the one that VS2010 is using even though I'm running Windows 7 x64.)

If you have a 64-bit OS then you have 64-bit versions of these machine.config's as well taking the total to four. These can be found here:

C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config

(I don't know why one is in capitals and the other in proper case.)

An analysis of my machine.config's (all 4 of them) showed that I had incorrectly edited 2 of them. I had added a new section to the machine.config but had failed to add that section name to the sectionGroup. So I was missing this part of the machine.config:

<sectionGroup name="myNewSectionGroupName" ... >
    <section ... />
</sectionGroup>

Which goes inside the <configSections> ... </configSections> part of the file.

In the end it was a PIBKAC* and not a VS2010 bug.

*PIBKAC = Problem Is Between Keyboard And Chair

 

Tuesday, April 20, 2010

.Net 4 VS2010 Conversion Notes

I'm in the process of converting a bunch of projects from .NET 3.5 to .NET 4.0 and know that I will be converting plenty more in the future so I thought I'd jot down some notes as I went along. This list of notes should increase over time.

 

Machine.Config

.NET 3.0 and 3.5 were really an extension of .NET 2.0, so the machine.config remained constant here: C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config, or in the 64-bit equivalent here: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG\machine.config

With .NET 4 we have new 32-bit and 64-bit locations here C:\Windows\Microsoft.NET\Framework\v4.0.30319\Config and here C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config

If you've made modifications to your machine.config file for .NET 2.0, 3.0, or 3.5 then you'll have to propagate those changes to the .NET 4.0 version.

 

Links to conversion resources

Official What's new in ASP.NET MVC 2

ASP.NET 4 Breaking Changes

 

Breaking Changes that I've bumped into

<httpRuntime requestValidationMode="2.0" /> needs to be added to web.config to prevent the "A potentially dangerous Request.Form value was detected from the client" error.

 

Tuesday, April 13, 2010

Merging in Mercurial

I'm using TortoiseHg and trying to get the hang of merging changes from multiple contributors, so I set up a small test, did some changes and commits, and pulled each one into a main repository without doing any merges (except one at the beginning). This is what the tree looked like.




Then I tried to do a number of different merges and found that nothing really worked (or in the end made sense) except for merging from the bottom of the graph to the tip. When I started doing this things started flowing smoothly. I had lots of conflicts because I made sure that each time I changed a file I changed the same line in different locations to force conflicts. The conflicts were easily resolved in the default KDiff3 text merge tool. This is what the graph looked like in the end.



Saturday, April 3, 2010

Windows Home Server - the good and bad

About ten months ago I bought and setup a Windows Home Server on my home network. Since that date I've used it twice for recoveries.

I follow the three rules of "backup reality", i.e. if these 3 criteria aren't met then I don't consider it a real backup:

  1. It must be automated.
  2. It must be redundant.
  3. It must be duplicated off-site.

This is a must for any business (even though many don't do this) but it's a pretty tall order for a home network. I find that Windows Home Server with Amazon S3 syncing gives me all this.

They say that an untested restore of a backup is no backup at all.

The first time I got to try out the restore feature of the server was when I upgraded the hard disk on a Server 2003 server from 250GB to 2TB. This was the perfect opportunity to try out the WHS restore feature. I let the nightly backup complete and in the morning I swapped out the drives, put in the restore disk and rebooted the machine. WHS recognized the machine and picked the correct backup and target and a few hours later the server was back up and running on a new hard disk. I was a bit surprised at how long the restore took, I remember it being several hours. I know that it also had to format the hard disk but still thought that it was longer than I would have expected.

The second time I got to try the restore was when I was unable to sign on to a Windows 7 Ultimate machine. Win7 just told me that my password was invalid. I could boot in safe mode and sign in but not in regular mode. I signed in using safe mode, moved the most recent data set to a USB drive, and tried a number of troubleshooting recovery steps. Searches showed that this type of problem was being experienced by many Windows 7 users but no fix had come from Microsoft yet and none of the suggested workarounds had worked for me. So I attempted to do a restore from WHS. On booting with the recovery CD it wasn't able to correctly match up the disks. That machine has a 128GB Solid State Drive (SSD) as the boot drive and a 2TB drive as the secondary drive. The restore automatically picked the 2TB drive as the target for the boot drive, which was the wrong one. That was easily changed using the restore UI, but after the restore had been running for a couple of hours and completed about 80% of the restore it failed with a networking error. That networking error was also found all over the net with plenty of people experiencing the same type of error almost at the end of their restore. None of their workarounds worked for me. So my next and final recovery step was to re-install the OS. I did this and it still had problems booting.

Now the one thing that I didn't tell you about this machine was that when I put it together I had inadvertently wired the SSD boot drive as drive 1 and the 2TB data drive as drive 0. After putting the case back together and starting the machine I saw the problem and went into the BIOS and changed the boot drive to be the non-default 1 (from the default 0). This I believe was the source of most of my problems. My theory is that deep in the bowels of Windows 7 and many other pieces of device software written by many companies the boot drive is frequently assumed to be 0 and this is often hard coded. Because the boot drive is rarely anything else this type of bug is hardly ever seen.

I opened up the case and switched the wires around and made the boot drive 0 and the data drive 1 and that was the first time in days that the machine had operated in a normal manner and this time it finally booted with my re-installed OS. Since then it has been running way smoother and I attribute this to the non-standard drive wiring that I had done. Even though it's supposed to be able to work the other way I just don't think that it's worth the risk to do that again. Drive 0 will always be my boot drive in the future if I have any choice.

 

Tuesday, March 30, 2010

Number of TFS checkins

I was trying to work out how many times I have checked code into TFS for each project that I've worked on in a given date range. I still haven't managed to work out how to break it down by project but did get the total count.

  • Ran Team Foundation Sidekicks.
  • On History tab select my User name in the drop down.
  • Clicked the "Save list to file" button.

The problem with this CSV list is that there are line breaks in many of the comments, so it's not a true CSV file.

  • Using Notepad++
  • Ctrl+F for search and change the Search Mode to Regular Expression.
  • Enter "^\d+," without quotes in the Find what field.
  • Click the "Find all in Current Document" or the "Count" button to get the number of lines with that pattern.

That will be the number of items that you checked in.

The regex ^\d+, can be roughly interpreted as find all lines starting with (^) a string of one or more numbers (\d+) and ending with a comma (,).

This is probably one of the most unwieldy and ugliest solutions to a problem I have recently come up with. Also, I haven't managed to work out how to breakdown check-ins by project. Any ideas?

Wednesday, March 24, 2010

Mercurial Commit Push Pull Update

I'm just starting to wrap my head around Mercurial and feel that this is the best diagram that I've seen that explains the push/pull commit/update model.
Moving changes between your working directory and the "server" repository is always a two-step process, in either direction.
To "check-in" you will commit your changes to your local repository and then push them to the server.
To "get latest" you will pull from the server and then update to your working directory.

 

Source: http://www.ivy.fr/mercurial/ref/v1.0/Mercurial-QuickStart-v1.0-300dpi.png

Wednesday, March 17, 2010

Make Better Software: The Training Series

The company that I work for recently bought my team the Make Better Software: The Training Series done by Joel Spolsky at Fog Creek Software. This is my report on this course.

I've listened to all the Stack Overflow podcasts with Jeff Atwood and Joel Spolsky and so I now "get" that Joel is only pretending to be arrogant and his manner is intended to provoke debate. He does it well.

The six one-hour videos in the course are very well done. They were easy to watch, packed with good content, and extremely well edited. Every now and then it felt like it was teetering on the edge of a sales pitch, either for the product they sell or for candidates to apply for a job at their company, but considering that it was all about working at Fog Creek and creating FogBugz it would have been very difficult to avoid that entirely.

The manual (Student Guide) that comes with the videos was not as well prepared as the videos. It is essentially a collection of Joel's blog posts over the last 10 years divided into six sections. I found at least two blog posts that were repeated verbatim in two different sections, there were paragraphs that were repeated, and some of the posts seemed to have text missing off the end. This was slightly unprofessional and a bit irksome. I'm not sure if anybody proofread or edited the compilation.

In the table of contents he has a "chapter" titled "Incentive Pay Considered Harful"; the misspelling is in the original, the emphasis is mine. I've heard Joel criticize candidates for submitting resumes with spelling mistakes in them with comments like "there's no excuse for that, don't they know how to use a spell checker?"

Despite my criticism of the spelling and lack of proofreading, I consider the content to be good. It duplicates what is said in the videos, in some parts goes a bit deeper, and has the names and references that you might miss when watching the video of that section.

The course starts off with The Joel Test. We score fairly high on the Joel Test with a 10. We don't do point 3 (daily builds); instead we build on every check-in, which is more rigorous than what he proposes.

After the Joel Test the course is divided into 6 modules:

  1. Recruiting
  2. Team Members
  3. Environment
  4. Schedules
  5. Lifecycle
  6. Design of Software

From a management point of view all modules are relevant, but especially modules 1 to 4. I felt that program managers were the main focus of the Team Members section, but all roles were covered and some myths were dispelled.

Fog Creek appears to have the best environment for almost anybody to work in, let alone developers. It's interesting that the offices given to developers are not only there to increase productivity but are also a huge sales tool in recruiting talent.

My point of view is that of a developer, and as such I found the second half of the course more interesting. Joel did an excellent job of convincing me of the value of specifications; I no longer feel that they are a waste of time but rather that they are something that needs to be continuously worked on while a project is alive.

Would I recommend this course? Absolutely.

 

Tuesday, March 2, 2010

LINQ secondary sort using extension method syntax

You have a list of objects that you want to sort using LINQ. You want to sort by a primary and secondary key from the fields (properties or members) of the object. How do you do this?

The answer is to do two sorts: the first by the secondary key and the second by the primary key. Here's a little test console app that you can run to verify this.

class Widget
{
    public int First { get; set; }
    public int Second { get; set; }
}
class Program
{
    static void Main(string[] args)
    {
        List<Widget> widgets = new List<Widget>();
        widgets.Add(new Widget { First = 1, Second = 6 });
        widgets.Add(new Widget { First = 2, Second = 6 });
        widgets.Add(new Widget { First = 3, Second = 5 });
        widgets.Add(new Widget { First = 4, Second = 5 });
        widgets.Add(new Widget { First = 5, Second = 4 });
        widgets.Add(new Widget { First = 6, Second = 4 });

        var firstSort = widgets.
          OrderByDescending(a => a.First).
            OrderBy(a => a.Second);
        foreach (Widget widget in firstSort)
        {
            Console.WriteLine("First {0} Second {1}",
                widget.First, widget.Second);
        }

        Console.WriteLine("Hit return to continue...");
        Console.ReadLine();
    }
}

The results of this run are:

First 6 Second 4
First 5 Second 4
First 4 Second 5
First 3 Second 5
First 2 Second 6
First 1 Second 6
Hit return to continue...

So we initially reverse-sort by the First property (this will be the secondary sort) and then sort by the Second property (this will be the primary sort). This works because LINQ to Objects uses a stable sort: when the second OrderBy encounters equal Second values it preserves the relative order produced by the first sort.
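
For completeness, the same ordering can also be written as a single chain with ThenByDescending, which makes the primary/secondary relationship explicit. This is an equivalent of the query above, not what the original test app used:

var sorted = widgets.
    OrderBy(a => a.Second).             // primary sort
    ThenByDescending(a => a.First);     // secondary sort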

Wednesday, February 10, 2010

HTML Submit button still submits even when JavaScript returns false

Here's something that's caught me out a couple of times in the past so I'm documenting it to remind myself.

My web page has a submit <input> tag that looks like this:

<input type="submit" value="Create" />

I want to do some client side validation before I POST the page back to the server so I modify it to call my client side JavaScript function by adding the onclick attribute:

<input type="submit" value="Create" onclick="SubmitCreate();" />

The JavaScript does a bit of validation:

function SubmitCreate() {
    var selectedNodes = tree.getSelectedNodes();
    if (selectedNodes.length < 1) {
        alert('Please select at least one node.');
        return false;
    }
}

The problem is that even though the SubmitCreate() function is returning false this is not preventing the page from being submitted.

The reason, once you see it, is obvious: the onclick attribute needs to return the result of the JavaScript call, not just make the call.

<input type="submit" value="Create" onclick="return SubmitCreate();" />


 

Friday, February 5, 2010

Viewing connections to a server

I keep on forgetting how to do this so here it is documented for my convenience.

  • Start Performance Monitor (PerfMon).
  • Click + to add a new counter.
  • Enter the computer's name.

To view connections for a web service:

  • From the Performance object drop down select "Web Service".
  • From the "Select counters from list" box select "Current Connections".
  • From the "Select instances from list" box select "_Total".

To view connections for a web app:

  • From the Performance object drop down select "ASP.NET Apps v2.0.50727".

Tuesday, February 2, 2010

Combine, compress, and update your CSS file in ASP.NET MVC

After doing some analysis using Google's web master tools I discovered that I needed to make some improvements to how the CSS file(s) on a busy site were being delivered. This post follows on from and improves on Automatically keeping CSS file current on any web page.

  • The CSS files need to be combined into a single file.
  • It needs to be compressed.
  • It needs to be cached, but only until it's changed.
  • Image references need to be dynamic.

I'm using ASP.NET MVC so the first thing that I did was to add an Action to a Controller that was to be the new CSS file. Instead of referencing a physical file on disk the CSS file has now become a resource that follows this pattern:

~/Site/Css/20100201105959.css

Site is the controller, Css is the action, and 20100201105959.css is the single parameter that this action accepts.

This is what the SiteController class looks like:

[CompressFilter]
public class SiteController : Controller
{
    static ContentResult cr = null;

    [CacheFilter(Duration=9999999)]
    public ActionResult CSS(string fileName)
    {
        try
        {
            if (cr == null)
            {
                StringBuilder sb = new StringBuilder();
                foreach (string cssFile in Constants.cssFiles)
                {
                    string file = Request.PhysicalApplicationPath + cssFile;
                    sb.Append(System.IO.File.ReadAllText(file));
                    sb.Append("!!!!!");
                }

                sb.Replace("[IMAGE_URL]", StaticData.Instance.ImageUrl);

                cr = new ContentResult();
                cr.Content = sb.ToString();
                cr.ContentType = "text/css";
            }

            return cr;
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.TraceError(ex.Message);
            return View();
        }
    }
}

Notice that we have a couple of filters, one on the class and one on the method. The [CompressFilter] on the class checks the browser's support for gzip or deflate and activates compression for every action in the class. The [CacheFilter] adds a roughly three-month "cache this file" directive to the HTTP headers of the response.
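
Neither filter is shown in this post. Here is a minimal sketch of what they could look like; only the attribute names and the Duration parameter come from the post, the bodies below are my own assumption built on ASP.NET MVC's ActionFilterAttribute and the standard System.Web compression and caching APIs:

using System;
using System.IO.Compression;
using System.Web;
using System.Web.Mvc;

// Sketch only: turns on gzip/deflate compression when the browser advertises support for it.
public class CompressFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpRequestBase request = filterContext.HttpContext.Request;
        HttpResponseBase response = filterContext.HttpContext.Response;
        string acceptEncoding = request.Headers["Accept-Encoding"] ?? String.Empty;

        if (acceptEncoding.Contains("gzip"))
        {
            response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "gzip");
        }
        else if (acceptEncoding.Contains("deflate"))
        {
            response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
            response.AppendHeader("Content-Encoding", "deflate");
        }
    }
}

// Sketch only: adds client-side cache headers for Duration seconds.
public class CacheFilterAttribute : ActionFilterAttribute
{
    public int Duration { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        HttpCachePolicyBase cache = filterContext.HttpContext.Response.Cache;
        cache.SetCacheability(HttpCacheability.Public);
        cache.SetExpires(DateTime.Now.AddSeconds(Duration));
        cache.SetMaxAge(TimeSpan.FromSeconds(Duration));
    }
}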

The Constants.cssFiles returns a list of root-relative CSS files that need to be combined into the single response.

The Replace() call on the StringBuilder allows us to put a tag in the CSS files so that any references to images can be resolved at runtime. This lets us serve images from a URL such as http://localhost:1234/images/ during development and http://images.mysite.com/ in production. Serving images from a subdomain instead of the main domain improves performance because the browser can download more images in parallel.

The single fileName parameter is a dummy parameter that exists only to make the URL change when one of the underlying CSS files is modified, while still allowing optimum caching in the browser. In other words, by changing this value in the HTML we return, we force the browser to re-fetch the CSS, but only when one of the underlying CSS files changes.

The ContentResult is held in a static variable so that building the combined CSS is a one-time hit after each App Pool recycle.

The following snippet of code is placed in the code-behind for Site.Master. Normally you would not have a code-behind page in the MVC model, but I have not been able to work out how to dynamically inject the CSS file name into the master page any other way.

protected void Page_Load(object sender, EventArgs e)
{
    HtmlLink css = new HtmlLink();
    css.Href = String.Format("/Site/CSS/{0}", StaticData.Instance.CssFileName);
    css.Attributes["rel"] = "stylesheet";
    css.Attributes["type"] = "text/css";
    css.Attributes["media"] = "all";
    Page.Header.Controls.Add(css);
}
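
With a CssFileName of, say, 20100201105959.css, the tag injected into the page's head should end up looking something like this:

<link href="/Site/CSS/20100201105959.css" rel="stylesheet" type="text/css" media="all" />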

The CSS file name is generated with the following snippet of code:

string _CssFileName = null;

public string CssFileName
{
    get
    {
        if (String.IsNullOrEmpty(_CssFileName))
        {
            DateTime dt = new DateTime(2000,1,1);
            foreach (string cssFile in Constants.cssFiles)
            {
                string file = System.Web.HttpContext.Current.Server.MapPath(cssFile);
                FileInfo fi = new FileInfo(file);
                DateTime lastWriteTime = fi.LastWriteTime;
                if (lastWriteTime > dt)
                {
                    dt = lastWriteTime;
                }
            }
            _CssFileName = dt.ToString("yyyyMMddHHmmss") + ".css";
        }
        return _CssFileName;
    }
}

We iterate through each of the CSS files and extract the most recent modified date. Using this date, we generate the CSS file name. That way a different file name is injected into the HTML whenever one of the CSS files is modified, which forces the browser to reload the new CSS and keeps it always up to date.

It may seem like a lot of work for a CSS file at first glance but the benefits are enormous and the added flexibility will allow you to change it on a whim.

I haven't shown you the Constants.cssFiles but that's just an array of file names. I was also thinking of implementing this as a loop that found all the *.css files in a folder. That way if you added a new CSS file it would automatically be included without a code change. However, the disadvantage of this is that you cannot predetermine the order in which the CSS files are combined into the single file and the order is usually important.

If, however, you did want to use that all-css-in-one-folder approach you could adopt a naming convention such as 01_myfile.css, 02_myfile.css and then sort the file names before combining them.
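
As a rough sketch of that convention-based approach (the method name and folder parameter are just placeholders):

// Requires System.IO and System.Linq.
// Find every *.css file in a folder and sort by file name so that a
// 01_, 02_, ... prefix controls the order in which the files are combined.
static string[] GetOrderedCssFiles(string cssFolderPath)
{
    return Directory.GetFiles(cssFolderPath, "*.css")
                    .OrderBy(path => Path.GetFileName(path))
                    .ToArray();
}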

Google to help kill IE6

I got an email from Google this morning which in part said:

"...over the course of 2010, we will be phasing out support for Microsoft Internet Explorer 6.0..."

This is great news. I have a couple of sites that I work on that just don't work in IE6 and it's the bane of my life. I really don't want to be investing time in getting IE6 to work when I could be adding new features and improving performance.

With Google behind this I am hoping that this will accelerate users upgrading from IE6.

Also related are the IE6 Update and the IE6 No More sites.

Saturday, January 30, 2010

Ultra-Fast ASP.NET

Just started reading Ultra-Fast ASP.NET by Richard Kiessig (Build Ultra-Fast and Ultra-Scalable web sites using ASP.NET and SQL Server).

So far very good. Will post more here.

Richard Kiessig on twitter: http://twitter.com/UltraFastASPNET

 

Saturday, January 23, 2010

Automatically keeping CSS file current on any web page

The problem: You update your site's CSS file but your users aren't seeing your latest crazy colors and styles that you've selected for your web site.

The reason: Your users' browsers have cached the CSS files and it could take days before those caches expire and your new CSS file is requested from your server.

The solution: Change the name of your CSS file that's linked in the header of your page each time you change the contents of the CSS file.

The new problem: You don't want to change the name manually each time because (1) it may need changing in more than one place, (2) you might forget to change it, (3) you're lazy, (4) you might miss somewhere it needs to be changed, (5) each change increases the risk you might do something wrong.

The new solution: Have it done automatically for you.

This is how I implemented it using ASP.NET and C#. I did this on a hybrid ASP.NET MVC and WebForms web site that has two base master pages: one for MVC and one for WebForms. All other master pages inherit from these two, so there were just two locations that needed to be changed.

In the <head> tag I originally had something like this:

<link href="/site.css" rel="stylesheet" type="text/css" media="all" />

and I wanted something like this:

<link href="/site.css?v=1" rel="stylesheet" type="text/css" media="all" />

with the 1 changing each time the site.css file changed.

In any major project that I'm working on I usually have a class called something like StaticData. This class holds arbitrary bits of data that are loaded or calculated once and then never or rarely change. It's like a hybrid of a constants file and a cache.

In this class I added the following property and private variable:

string _CssVersion = null;
public string CssVersion
{
    get
    {
        if (String.IsNullOrEmpty(_CssVersion))
        {
            try
            {
                string cssFile = System.Web.HttpContext.Current.Server.MapPath("~/site.css");
                FileInfo fi = new FileInfo(cssFile);
                DateTime lastWriteTime = fi.LastWriteTime;
                _CssVersion = lastWriteTime.ToString("yyyyMMddHHmmss");
            }
            catch
            {
                return "1";
            }
        }
        return _CssVersion;
    }
}

So what I'm doing is generating a version number based on the time stamp of the CSS file. If we update the CSS file then that time stamp automatically changes, and we only have to read it once because after that it's in the "constants cache."

To load the CSS name dynamically in ASP.NET we add the following code snippet to the Page_Load() function of the code behind file of the master page. This applies to both WebForms and MVC applications.

HtmlLink css = new HtmlLink();
css.Href = String.Format("/site.css?v={0}", StaticData.Instance.CssVersion);
css.Attributes["rel"] = "stylesheet";
css.Attributes["type"] = "text/css";
css.Attributes["media"] = "all";
Page.Header.Controls.Add(css);

The Instance property of the StaticData class is a public static property of type StaticData making this class a singleton.
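
The StaticData class itself isn't shown in this post. A minimal singleton sketch that's consistent with how it's used above could look like the following; only the Instance and CssVersion names come from the post, the rest is my assumption:

public class StaticData
{
    // Single shared instance, created once per App Domain.
    static readonly StaticData _instance = new StaticData();

    // Accessed elsewhere as StaticData.Instance.
    public static StaticData Instance
    {
        get { return _instance; }
    }

    // Private constructor so instances can only be created here.
    StaticData() { }

    // The CssVersion property shown above would live in this class.
}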

Tuesday, January 19, 2010

Graffiti CMS now open source

This blog runs on Graffiti CMS. I have whined in the past about it not being open source and have on occasion thought about moving to an open source blog engine. Today I read that Graffiti CMS is now open source. I have never done a blow-by-blow comparison of .NET blog engines so I don't know if Graffiti is the best, but I can say that it has worked very smoothly for me and has some clever built-in features which I like.

This is a great contribution to the open source community by Telligent - thank you.

Source code is available here: http://graffiticms.codeplex.com/

Monday, January 18, 2010

jQuery drag and drop tree plugin

I've been researching a jQuery drag and drop tree plugin for a project that I'm working on and so far I've found the following:

1. jsTree

So far this is my favorite. It does almost everything that I want, including in-place editing of the tree items. One drawback is that it requires a ton of plugins of its own and is complex to set up, which makes it brittle in my opinion. The creator, however, is frantically working on a new release which I think will simplify things and dramatically improve this already excellent plugin. I've decided that this is the one that I'll probably use, but I will wait until the next major release comes out.

Fantastic set of demo pages - this is what "sells" plugins - if you don't have a great demo page then you will dramatically reduce your chances of getting people to use your plugin.

2. SimpleTree

Good, but it does not support in-place editing. It's a big plus that it has a demo page, but the demo is fairly simple and I suspect the plugin has more features than have been demoed.

3. Drag Drop Tree

The demo looks reasonable, but no work seems to have been done on this since 2007 and there's not much other documentation about the plugin.