What is TypeScript and why should I care?

This Tuesday, I watched Anders Hejlsberg present Microsoft's new language bet, TypeScript. It is basically JavaScript with types. It compiles to JavaScript, so at runtime it is JavaScript, nothing else. Therefore it will run wherever JavaScript can run: in any browser, on node.js and so on.

TypeScript adds types as a first-class citizen to JavaScript. This means you can use classes and interfaces in your code and have the compiler do type checking. TypeScript compiles to the idiomatic JavaScript we would otherwise have to write by hand to do object-oriented programming in JavaScript. It also adds modules and a nicer anonymous function syntax. Some really nice tooling for Visual Studio 2012 is already available.

This is an example of how a snippet of TypeScript looks:

class Greeter {
    greeting: string;
    constructor (message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}

var greeter = new Greeter("world");
var greeting = greeter.greet();

Why should I care?

We are building larger and larger applications in JavaScript. This is true on the web, where we use it to build great user experiences, but JavaScript is also used more and more for other purposes: you can create standalone applications or servers in node.js, and you can use HTML+JavaScript for Windows 8 programs and other platforms. The lack of types in JavaScript means that development and debugging take longer, and bugs can be hard to find.

You might not agree with this, but consider:

  • Have you ever misspelled a method name in JavaScript, and not found out until that exact method was called at runtime? This does not happen with TypeScript.
  • Ever had to look up a method signature in the jQuery documentation, then spend time on Google to find the right one? With the TypeScript tools, you get auto-completion right in the editor (Visual Studio, for now).

Basically, you get all the safety nets of a statically typed language, while still having all the benefits of JavaScript. The static typing also allows for features like auto-completion and refactoring support. One of the core benefits of using TypeScript is the added tool support, and in turn, developer productivity.

Is this not just a rip-off of CoffeeScript?

No, I don't think so. CoffeeScript is about fixing the syntax of JavaScript, but it does not touch the type system. This is the core difference between the two. You might like or dislike the CoffeeScript syntax, but the CoffeeScript compiler is not aware of types.

I think TypeScript does a good job of keeping a familiar syntax, while reducing the amount of typing needed to create, for instance, a class. TypeScript also does automatic capture of the this variable for anonymous functions, which is very neat.

Go try it out

You can download the bits for Visual Studio or node.js here, and there is also an online playground where you can run TypeScript directly in your browser. It's a preview, but it seems pretty stable to me.

Maybe it is not for you, maybe you are the dynamic-language type of guy - but I am excited about this, and for fans of static typing, this is great news.

I might even get excited about the prospect of building a node.js app now.


A new home for the Chrome Password Recovery Tool

In 2008, I created the Chrome Password Recovery Tool. However, the download links on that page have been lost in a server migration.

Since I have been getting some email about the missing download, I decided to release the tool as open source on CodePlex. So go get it from here, if you need it.

The latest version supports the current Chrome version (10), and can read the passwords while Chrome is running.


This blog now running MVC3 and RavenDB!

From today, this blog will be running a home-grown blogging system built on MVC3, Razor and RavenDB.

It had been running Sitecore Express until now, but I decided to ditch it. A Sitecore installation is simply too much of a hassle for a simple site like this. Also, the rich text editor in Sitecore was not really fit for posting code snippets. I will be refining this new blogging solution over the next few weeks, and hopefully it will give me a renewed interest in actually posting content on this blog :-)

I built this blogging system myself (no CMS or other framework base) to learn more about MVC3 and RavenDB. Conclusion: MVC3 is nice, and the Razor view syntax is extremely cool.

RavenDB is also easy to get started with and to learn. It is what I would call a no-frills NoSQL document database. So if you have data storage needs that fit the NoSQL model and are building on .NET, I think the choice is a no-brainer. You do have to be aware that it is still a young product, and it is changing rapidly.
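
To illustrate how little ceremony is involved, here is a minimal sketch of storing a post with the RavenDB client API (the BlogPost class and the server URL are my assumptions for the example, not the actual code behind this blog):

using System;
using Raven.Client;
using Raven.Client.Document;

// Hypothetical document type for illustration.
public class BlogPost
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

class Program
{
    static void Main()
    {
        // One DocumentStore per application; the URL assumes a RavenDB server running locally.
        using (IDocumentStore store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
        using (IDocumentSession session = store.OpenSession())
        {
            session.Store(new BlogPost { Title = "Hello", Body = "First post" });
            session.SaveChanges(); // one roundtrip persists the new document
        }
    }
}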


A Relative Path Facility For Castle Windsor

At work, we use Castle Windsor for dependency injection. In Castle Windsor, as with any dependency injection framework, you can configure components, identified by an interface, which can be resolved at runtime by the container. Components can have dependencies, which can be yet other components, and so on. In this way, you can have the dependency injection framework create a whole graph of objects for you.

One limitation we run into now and then is with components that depend on a file path to work. Typically, we need to know the full path of a file to load it. But hardcoding the full path in the configuration file is generally a bad idea; it will create problems when you move your web application between environments. We also cannot just pass the path as a virtual path to the component and have the component call Server.MapPath to map it, since that would mean changing the interface of the component just to accommodate the injection framework, which is not a good idea. And, what is worse, it would create a dependency on System.Web in a place where it probably isn't needed.

Now, one way around this would be to create a wrapper interface, IFilePath, which would exist only to be passed into the component and to convert the path. This also involves changing the component, and it generally feels like a bad idea.

Luckily, the Windsor IoC container offers a large variety of extension points, one being facilities. So I wrote a facility that allows paths configured in Castle Windsor to be relative. It works by registering an ISubDependencyResolver in the IKernel instance. When resolving a dependency, Windsor will ask the ISubDependencyResolver whether it can resolve the dependency, using the CanResolve method. By examining the passed ComponentModel, and in particular its configuration node, I look for a custom attribute on the dependency, pathType. If it is found (and the dependency is of type string), we can easily resolve the dependency by taking the relative path in the configuration tag and making it absolute.

This will allow you to have your Windsor configuration look like this (notice the one-line facility registration; this is what registers the custom facility in Windsor and enables the path dependency to be declared as a virtual path):

<castle>
  <facilities>
    <facility id="pathResolver" type="dr.Castle.WebPathFacility.RelativePathSupportFacility, dr.Castle.WebPathFacility" />
  </facilities>
  <components>
    <component id="dummy"
               service="dr.Castle.WebPathFacility.Test.IDummy, dr.Castle.WebPathFacility.Test"
               type="dr.Castle.WebPathFacility.Test.Dummy, dr.Castle.WebPathFacility.Test" >
      <parameters>
        <path pathType="Relative">App_Data/test.xml</path>
      </parameters>
    </component>
  </components>
</castle>

The valid values for pathType are:

private enum PathType
{
    /// <summary>
    /// The path is absolute (we will do nothing to it).
    /// </summary>
    Absolute = 0,
    /// <summary>
    /// The path is a virtual path to a web application resource.
    /// </summary>
    Virtual,
    /// <summary>
    /// The path is relative to the current directory.
    /// </summary>
    Relative
}

The code for the facility itself is really simple, since it just registers our dependency resolver with the Kernel. The advantage of using a facility is that it can be declared in the config, and Windsor will automatically initialize it for all containers you create:

using Castle.MicroKernel.Facilities;

namespace dr.Castle.WebPathFacility
{
    public class RelativePathSupportFacility : AbstractFacility
    {
        protected override void Init()
        {
            Kernel.Resolver.AddSubResolver(new PathParameterDependencyResolver());
        }
    }
}

Finally, the implementation of ISubDependencyResolver that makes this possible:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Web;
using Castle.Core;
using Castle.MicroKernel;

namespace dr.Castle.WebPathFacility
{
    /// <summary>
    /// Custom dependency resolver that will inspect the parameters collection for the pathType attribute and,
    /// if found, convert the dependency to an absolute path based on the path type.
    /// </summary>
    class PathParameterDependencyResolver : ISubDependencyResolver
    {
        /// <summary>
        /// Holds the supported conversion operations.
        /// </summary>
        private static readonly Dictionary<PathType, Func<string, string>> conversions =
            new Dictionary<PathType, Func<string, string>>
            {
                { PathType.Absolute, path => path },
                { PathType.Relative, path => Path.Combine(Environment.CurrentDirectory, path) },
                { PathType.Virtual,  path => HttpContext.Current.Server.MapPath(path) }
            };

        /// <summary>
        /// Cache of the type path parameters.
        /// </summary>
        private readonly Dictionary<string, PathParameter> typePathParameters = new Dictionary<string, PathParameter>();

        /// <summary>
        /// Resolves the specified dependency.
        /// </summary>
        /// <param name="context">Creation context</param>
        /// <param name="contextHandlerResolver">Parent resolver</param>
        /// <param name="model">Model of the component that is requesting the dependency</param>
        /// <param name="dependency">The dependency to satisfy</param>
        /// <returns>The resolved dependency.</returns>
        public object Resolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
        {
            PathParameter parameter = GetPathParameter(model, dependency);
            if (parameter == null)
                throw new ApplicationException(String.Format("Cannot resolve dependency {0}", dependency));
            if (!conversions.ContainsKey(parameter.Type))
                return parameter.Value;     // Unknown conversion

            return conversions[parameter.Type](parameter.Value);
        }

        /// <summary>
        /// Determines whether this sub dependency resolver can resolve the specified dependency.
        /// </summary>
        /// <param name="context">Creation context</param>
        /// <param name="contextHandlerResolver">Parent resolver</param>
        /// <param name="model">Model of the component that is requesting the dependency</param>
        /// <param name="dependency">The dependency to satisfy</param>
        /// <returns><c>true</c> if the dependency can be satisfied by this resolver, else <c>false</c>.</returns>
        public bool CanResolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
        {
            if (dependency.DependencyType == DependencyType.Parameter && dependency.TargetType.Equals(typeof(string)))
            {
                PathParameter parameter = GetPathParameter(model, dependency);
                return parameter != null;
            }
            return false;
        }

        /// <summary>
        /// Finds the parameter by looking at the cache, then in the model configuration.
        /// </summary>
        private PathParameter GetPathParameter(ComponentModel model, DependencyModel dependency)
        {
            if (!typePathParameters.ContainsKey(model.Name))
                typePathParameters.Add(model.Name, GetPathParameterInternal(model, dependency));

            return typePathParameters[model.Name];
        }

        /// <summary>
        /// Finds the parameter by looking at the model configuration.
        /// </summary>
        private PathParameter GetPathParameterInternal(ComponentModel model, DependencyModel dependency)
        {
            var parametersContainer = model.Configuration.Children.SingleOrDefault(n => n.Name == "parameters");
            if (parametersContainer != null)
            {
                var parameterNode = parametersContainer.Children.SingleOrDefault(n => n.Name == dependency.DependencyKey);
                if (parameterNode == null)
                    return null;    // No configuration entry for this dependency.
                string pathType = parameterNode.Attributes["pathType"];
                if (pathType != null)
                {
                    PathType type;
                    if (!Enum.TryParse(pathType, true, out type))
                        throw new ApplicationException(
                            String.Format("Configuration error: Invalid pathType value '{0}'", pathType));

                    return new PathParameter { Type = type, Value = parameterNode.Value };
                }
            }
            return null;
        }

        /// <summary>
        /// Holds a path parameter.
        /// </summary>
        private class PathParameter
        {
            /// <summary>
            /// Value as entered in config.
            /// </summary>
            public string Value { get; set; }
            /// <summary>
            /// Type of path.
            /// </summary>
            public PathType Type { get; set; }
        }

        /// <summary>
        /// Defines the types of paths supported by <see cref="PathParameterDependencyResolver" />.
        /// </summary>
        private enum PathType
        {
            /// <summary>
            /// The path is absolute (we will do nothing to it).
            /// </summary>
            Absolute = 0,
            /// <summary>
            /// The path is a virtual path to a web application resource.
            /// </summary>
            Virtual,
            /// <summary>
            /// The path is relative to the current directory.
            /// </summary>
            Relative
        }
    }
}

Now I am finally able to use virtual paths in my configuration files, with a minimum of noise. Great. Please note that the "Relative" path type might not make sense for a real application (since it uses Environment.CurrentDirectory as its base), but it can be really helpful in test configurations. The primary reason for creating this is pathType="virtual", which maps to Server.MapPath.
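
For completeness, here is a sketch of what the consuming side might look like, using the IDummy/Dummy names from the test configuration above (the config file name is an assumption):

using System;
using Castle.Windsor;

namespace dr.Castle.WebPathFacility.Test
{
    public interface IDummy
    {
        string Path { get; }
    }

    public class Dummy : IDummy
    {
        private readonly string path;

        // Windsor injects the "path" parameter; the facility has already
        // converted the configured relative/virtual value to an absolute path.
        public Dummy(string path)
        {
            this.path = path;
        }

        public string Path { get { return path; } }
    }

    class Program
    {
        static void Main()
        {
            // Loads the configuration shown earlier; the facility line registers the resolver.
            var container = new WindsorContainer("castle.config");
            var dummy = container.Resolve<IDummy>();
            Console.WriteLine(dummy.Path); // prints the absolute path to App_Data/test.xml
        }
    }
}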


Using Expression Trees To Break The Law Of Demeter

I am sure most programmers have heard about the Law of Demeter, which is the principle that a class should only have limited knowledge about other classes, and only talk to objects closely related to the current object. This is sometimes presented as "you should not have more than one dot in each expression". In other words, this would be breaking the law:


string name = order.Customer.Name;

While I do appreciate the idea behind the Law of Demeter, specifically that individual classes should not know too much about each other, I think the above code would often be perfectly acceptable. Phil Haack has a blog post going into further detail about this: The Law of Demeter Is Not A Dot Counting Exercise, and others agree. I think Martin Fowler explains it best: "I'd prefer to call it the Occasional Useful Suggestion of Demeter".

So most of us will probably (hopefully) agree that it is OK to use more than one dot in a statement, when appropriate. One such place might be when doing UI in an ASP .NET application, where one needs to display information about an order and its details. But here a problem arises: we need to check each part of the expression for null to ensure that we do not accidentally cause a NullReferenceException. This leads to ugly code, especially in a data-binding scenario, such as:

<%# order == null ? null : order.Customer == null ? null : order.Customer.Name %>

This question on StackOverflow asks about exactly that: how do we get rid of such explicit and repeated null checking? It got me thinking that it must be possible to solve this using expression trees. It turns out it is in fact possible, as I state in my answer on StackOverflow. We can build an extension method which looks at an expression tree, evaluates each part of it separately, checks for null each time, and ultimately returns the correct value, or null if one of the expression parts was null. This is my implementation of such a method:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;

namespace dr.IfNotNullOperator.PoC
{
    public static class ObjectExtensions
    {
        public static TResult IfNotNull<TArg,TResult>(this TArg arg, Expression<Func<TArg,TResult>> expression)
        {
            if (expression == null)
                throw new ArgumentNullException("expression");

            if (ReferenceEquals(arg, null))
                return default(TResult);

            // Unwind the expression into a stack of member accesses,
            // with the innermost access (closest to the parameter) on top.
            var stack = new Stack<MemberExpression>();
            var expr = expression.Body as MemberExpression;
            while (expr != null)
            {
                stack.Push(expr);
                expr = expr.Expression as MemberExpression;
            }

            if (stack.Count == 0 || !(stack.Peek().Expression is ParameterExpression))
                throw new ApplicationException(String.Format("The expression '{0}' contains unsupported constructs.",
                                                             expression));

            // Evaluate one member access at a time, checking for null after each step.
            object a = arg;
            while (stack.Count > 0)
            {
                expr = stack.Pop();
                var p = expr.Expression as ParameterExpression;
                if (p == null)
                {
                    p = Expression.Parameter(a.GetType(), "x");
                    expr = expr.Update(p);
                }
                var lambda = Expression.Lambda(expr, p);
                Delegate t = lambda.Compile();
                a = t.DynamicInvoke(a);
                if (ReferenceEquals(a, null))
                    return default(TResult);
            }

            return (TResult)a;
        }
    }
}
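
With this in place, the data-binding example above collapses to a single expression. A usage sketch (the order variable and its Customer property are hypothetical types for illustration):

// name is null if order or order.Customer is null; otherwise the customer's name.
string name = order.IfNotNull(o => o.Customer.Name);

// In a data-binding scenario:
// <%# order.IfNotNull(o => o.Customer.Name) %>

Note that only plain member access is supported; an expression containing a method call, such as o => o.GetCustomer().Name, will hit the ApplicationException above.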

There are some caveats, though: the current version only works with simple member access, and it requires .NET Framework 4, because it uses the MemberExpression.Update method, which is new in v4.

It works by examining the expression tree representing your expression and evaluating the parts one after the other, each time checking that the result is not null.

I am sure this could be extended to support other expression types than MemberExpression, and I might update it at a later point to support more complicated expressions. Consider this proof-of-concept code, and please keep in mind that there is a performance penalty to using it (which will probably not matter in many cases, but don't use it in a tight loop :-) ). I have not done any measurements on the performance yet, and I am sure one could make some optimizations to it.

Here is a zip containing the code as well as a few unit tests: IfNotNullExtension.zip.

What do you think about this approach to null checking? Would you consider this extension method useful (provided that it performs adequately for the scenario)?


Last day at TechEd

It's Friday, and TechEd is over for this time.

My first session on Friday was about little-known secrets in Microsoft Silverlight 3. This was a really good session for advanced Silverlight development, and I took away many tricks, including the ability to download and use assemblies dynamically and asynchronously, and to use the OS client stack instead of the browser stack for network access.

The second session was about extending Visual Studio 2010's architecture modelling tools. This was a code-rich session, where we were walked through creating three extensions for the modelling tools. With VSIX packages, deployment of Visual Studio extensions is now much easier. The coding experience when creating extensions has also been made much nicer in the new version of Visual Studio. It is a no-frills experience, where you only need to work in the problem domain, and not jump through hoops to make Visual Studio do what you want.

The last session of this year's TechEd is about Pex and Code Contracts. I am writing this while waiting for the session to begin - it's a very interesting topic, and I might do a full-length blog post about Pex and Code Contracts at a later time.

This has been a very educational and interesting week. I have learned about architecture and design, new tools and techniques. In general, the quality of the talks has been very high (there were a few misses, but it's been an overall good experience). The only problem has been selecting the right session when there were multiple interesting options in the same time slot, which happened to me a lot. For instance, I never got to see a talk about the Concurrency and Coordination Runtime (CCR), because there was always something more interesting on the menu. Now I need to get home and into the gym - it's been a week of good food, eggs and bacon each morning at the hotel, so I need it :-) I might be coming back next year!


Day Four at TechEd over

TechEd is coming to an end; day four is now over. There are three sessions on Friday, and then it's over.

I started the day with a session on C# 4.0 dynamic: the whys and hows. It was presented by Alex Turner, who is Program Manager for the C# compiler. This was a very interesting walk through why C# should have dynamic features, and why they have been designed as they are. A lot of thought has gone into the design of dynamic, and I certainly think that the final design they've chosen is the right one. He demoed creating your own dynamic types from C# which can respond to any method call - very cool. I can certainly see some good use cases for the C# dynamic keyword.

Next, I went to see a talk about Windows Communication Foundation: Developer's Guide to Windows Communication Foundation, SOA and Success. It was interesting, with some very good thoughts on interoperability. My most important take-away from that session is that if you need to be interoperable, try to do REST.

In the afternoon, I went to see Tess Ferrandez present on ASP .NET post-mortem debugging (well, the techniques apply to any .NET process, I think, but the presentation was geared towards ASP .NET). This is the kind of debugging you get to do when your process consumes too much memory, hangs, or explodes in the production environment, without you being able to reproduce the issue locally. When this kind of debugging is needed, something is on fire, and you will be under stress while fixing it. But apart from that, I do find this kind of debugging challenging and kind-of-fun ;-) Tess demonstrated using WinDbg, SOS.dll (Son of Strike - someone please explain the name to me), Debug Diag and other tools, detecting a memory issue, a poor-performance issue, and a crash issue.

She also demonstrated doing the same using Visual Studio 2010, with its new ability to open memory dumps and debug on them. With this cool new feature, you can do almost everything you can do in a normal debug session, but on a memory dump that you might have obtained from some production server. You can see the stack trace and the locals, and examine the values of objects. The only thing you cannot do is run or step back and forward - of course, since the dump is an image of the process at a specific point in time. Very neat is the Parallel Stacks feature, where Visual Studio visualizes the stack of each thread for you, making it easy to identify contention in your locking, as well as other thread synchronization issues.

Last session of the day was by Magnus Mårtensson. This was an architecture talk about design with dependency injection and ensuring extensibility. Very interesting.


Third day at TechEd

Once again, I attended some very interesting talks at TechEd. This morning's session was entitled "The Daily Scrum", about doing Scrum and agile development. It was mostly a Q&A session, with answers to many of the practical problems one might encounter when trying to be agile.

Next, I had a real hard time deciding between staying on the agile track for the "Tools and Agile Teams" talk and hearing Don Syme speak about F#. I chose the F# session, which I think was a good choice. Don is one of the primary architects behind F#, so it would have been a shame not to hear him speak about it. This talk really drove home some points about F# and why it helps you do parallel programming, with immutability, the async language construct and agents. Another good point is that F# should not be used for everything: in a large application, Don suggested that only a small DLL might be written in F#. It should be used as a tool, where you need it. Don also showed some really impressive demos, using Direct3D from F#.

After lunch, I attended Roy Osherove's talk about unit testing. His main points were to write maintainable, consistent and readable unit tests, and he proceeded to show how this can be done. He suggested using test reviews in order to get started writing good unit tests, which I think is a very good idea. A very insightful talk.

The last session of the day was about cloud computing: "Deep Dive into Developing Line-of-Business Applications Running in the Cloud". I don't think this was a good session. There was too much demoing of an app in the cloud, and too little talk about the actual architecture behind it. Also, the presenters neglected to give any introduction to the Azure tools; I guess they expected everyone attending to know about those in advance.


Day Two At TechEd Europe

Today started off fresh with two SharePoint sessions. The first one was an introduction to SharePoint 2010 for developers, and while I haven't done any development on SharePoint before, based on the feedback, it will be tons easier to do SharePoint development with 2010. The second session on SharePoint was somewhat relevant and somewhat a miss. While it did provide some good information, there was not really anything new if you had attended the first session.

During lunch, I had been invited to a lunch session by Microsoft Denmark on IIS 7.5. The speaker, Bernhard Frank, was a real expert on the subject. Very interesting, and good food, but I had to cut the session short in order to make it to the next one.

Next was a presentation by Brian Harry about TFS 2010 and its new version control features. There are some real goodies coming in 2010, and Brian demonstrated better branching and branch visualization, support for rollback, and improved labeling. Very nice, and something I can really see the need for in my own organization.

I also attended a session on software architecture by Ralf Westphal. He discussed architecture at a high level, arguing that you should not view the architecture as a UML class diagram, a layered architecture diagram or something like that. Instead he advocated functional building blocks, or functional units as he called them, each of which recursively consists of yet another set of functional units. This way, you get a hierarchy of functional units, from the whole application, through components, down to methods in a class. While surely one of the most abstract talks today, I took some very good points with me from it.

Lastly, I attended the ASP .NET MVC2 "What's New" session. It competed with the "Pumping Iron" session (about IronRuby/IronPython), but as it turns out, that session was overbooked, so I made the right choice. There are some really great improvements in MVC2, which boil down to improving productivity on the framework. This means support for partial renderings based on invoking controllers, and templated views. A cool demo showed off the validation features, where you can define your validation rules in the model (as annotations out of the box, but it's extensible, so you can store your rules wherever you like). I think MVC2 might just be the release that is mature enough to be tried out on a real project - I am sure our frontend developers will love it.
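
For reference, here is a minimal sketch of the annotation-based validation mentioned above (the CustomerModel class and its rules are made up for the example):

using System.ComponentModel.DataAnnotations;

// Hypothetical view model; MVC2 validates these rules on model binding,
// out of the box, via the DataAnnotations attributes.
public class CustomerModel
{
    [Required(ErrorMessage = "Name is required")]
    [StringLength(50)]
    public string Name { get; set; }

    [Range(18, 120)]
    public int Age { get; set; }
}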


Got a Twitter account

Oh, BTW, I got a Twitter account. Follow me on http://twitter.com/dennisriis, if interested.