Tuesday, March 18, 2014

When WP_DEBUG is set to true, output buffer always includes an ob_end_flush() notice.

When WP_DEBUG is set to true, the output buffer always includes an error. Specifically, the notice I see is:

Notice: ob_end_flush() [ref.outcontrol]: failed to delete buffer zlib output compression.

I'm making sure my plugins are PHP 5.2+ compatible, so I'm testing on PHP 5.2, and I'm not sure whether that's what's causing the notice in my output buffer.

This additional output is included along with my response and is a deal breaker for me. I traced the problem to an action that is triggered on shutdown. Specifically, shutdown calls the following code, which in turn emits the notice.

function wp_ob_end_flush_all() {
	$levels = ob_get_level();
	for ( $i = 0; $i < $levels; $i++ )
		ob_end_flush();
}

So, I just remove the action in all my ajax callback functions before I echo the output back to the client, e.g.:

add_action( 'wp_ajax_prefix_readABC', array( $this, 'prefix_readABC' ) );

public function prefix_readABC() {
	remove_action( 'shutdown', 'wp_ob_end_flush_all', 1 );
	echo '{"result":["a", "b", "c"]}';
	die(); // end the request so WordPress doesn't append extra output
}

The above appears to work without problems. Note the inclusion of remove_action( 'shutdown', 'wp_ob_end_flush_all', 1);

It does feel a little clumsy, but I'm only doing it in the ajax request, and we're only outputting a piece of JSON back to the client on shutdown, so I don't see how this will have any drastic effects anywhere else; the issue is resolved for now. Lastly, this is only done when WP_DEBUG is true, i.e. you're in development. I don't have any problems in production when WP_DEBUG is turned off (false).

If you are using classes then it's just a matter of having a base class with the remove_action in the constructor.
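As a minimal sketch of that base-class approach (the class name is hypothetical, and the WP_DEBUG check assumes you only want this behavior in development):

```php
class Prefix_Ajax_Base {
    public function __construct() {
        // The notice only appears with WP_DEBUG on, so only bother then.
        if ( defined( 'WP_DEBUG' ) && WP_DEBUG ) {
            remove_action( 'shutdown', 'wp_ob_end_flush_all', 1 );
        }
    }
}
```

Every ajax handler class that extends this base gets the workaround for free.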

Saturday, September 28, 2013

WordPress: Removing the 28px margin forced into the page by default

By default, WordPress adds a 28px margin to your pages by outputting the following block of css in the head element for a logged-in user:

<style type="text/css" media="screen">
 html { margin-top: 28px !important; }
 * html body { margin-top: 28px !important; }
</style>

To remove it, disable the admin bar callback:

add_theme_support( 'admin-bar', array( 'callback' => '__return_false') ); 

Just make sure you add the above call as is. Don't include it in an action hook such as init or any other action that fires at or after the init phase. The documentation makes this observation too:

If attached to an action hook, it should be after_setup_theme. The init action hook may be too late for some features.
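So if you do prefer a hook, something along these lines should be safe (a sketch for a theme's functions.php; the function name is made up):

```php
// after_setup_theme fires early enough for theme support features;
// init would be too late.
add_action( 'after_setup_theme', 'prefix_disable_admin_bar_bump' );

function prefix_disable_admin_bar_bump() {
    add_theme_support( 'admin-bar', array( 'callback' => '__return_false' ) );
}
```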

Reference: http://codex.wordpress.org/Function_Reference/add_theme_support

Saturday, September 29, 2012

Backbone.js View Events not Firing?

In Backbone, setting the el element in the view's initialize method doesn't ensure that the events are bound correctly. Apparently the events are wired up only when the el element is set via the view's constructor as an option, which is not always what you'd like to do.

To work around this, if you're setting the el element in the initialize method of the view, make sure you call this._ensureElement() right after.

The _ensureElement() method's description says this:

Ensure that the View has a DOM element to render into. If `this.el` is a string, pass it through `$()`, take the first  matching element, and re-assign it to `el`. Otherwise, create an element from the `id`, `className` and `tagName` properties.

var MyView = Backbone.View.extend({
    'events': {
        'click .next': 'next',
        'click .cancel': 'cancel'
    },
    'initialize': function() {
        this.el = $('#step1');
        this._ensureElement(); // rebind el so the events are wired correctly
        _.bindAll(this, 'render');
        this.template = _.template($('#step1-template').html());
    },
    'render': function() {
        return this;
    },
    'next': function() {
    },
    'cancel': function() {
    }
});
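As mentioned above, the events are wired correctly when el arrives as a constructor option, so an alternative to the _ensureElement() workaround is to drop the assignment from initialize and pass the element in when instantiating the view (the selector here is just the one from the example):

```javascript
// No this.el assignment needed in initialize; Backbone binds the
// events against the element supplied here.
var myView = new MyView({ el: $('#step1') });
```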

Sunday, October 16, 2011

An Ant process driven build for Javascript --Make your own Javascript IDE

What we are going to set up is an IDE that can assemble Javascript code spread out over multiple script files into a single file (mimicking the physically structured class files found in class-based languages), compile the code getting useful warnings and errors in the process, minify the code, create documentation via jsdocs, and zip+deploy. All through a build script that can be called with the push of a button, F6 (just like in Visual Studio).

First a quick preview of what we are building :

As you can note we used jsdocs annotations to tell the compiler what type the parameter is since javascript does not have support for declaring types like in strongly typed languages. These annotations are stripped out during minification so we don't incur the cost of verbosity in our code. We simply compile the code and dutifully get the above warning.

Another quick example :

As you can note above, we told the compiler that the shape object is an interface, we then specified that the square object implements the shape interface. We did all this using jsdocs annotations. However in our Square object we didn't provide the implementation for the draw method. This violates the contract and we are dutifully warned. Note how we've used annotations again and our code does not depend on any third party libraries. It's at a minimum.
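To make the description above concrete, here is a minimal sketch of that interface example using jsdocs annotations (the names mirror the ones in the text; the draw body is just filler):

```javascript
/** @interface */
function Shape() {}

/** Draws the shape. */
Shape.prototype.draw = function() {};

/**
 * @constructor
 * @implements {Shape}
 */
function Square() {}

// Omitting this method would violate the Shape contract, and the
// closure compiler would dutifully warn us.
/** @override */
Square.prototype.draw = function() {
    return 'square';
};
```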

These are simple examples. For a whole slew of compiler warnings provided, you can read the documentation provided by google since our build uses google closure compiler.

 Getting started

Firstly to get such a setup in place we use Ant as our build tool. If you don't already have Ant installed, you may download it from the following location and set it up on your system : http://ant.apache.org/bindownload.cgi 

You also need the java jdk, which you can download from : http://www.oracle.com/technetwork/java/javase/downloads/index.html 

You may continue now after installation, but do note the location where you unzipped ant and the location of the jdk bin folder. You'll need it when setting up NppExec ( you'll find out later what this is, keep reading).

Now that you have both the Java jdk and Ant, you have the necessary tools to build. You simply call the build file from console. However, working in this manner is quite difficult and you'll find yourself switching too much between a console window, your favorite code editor and perhaps the windows explorer.

What we need is a way to integrate our build into an existing code editor. For this we will use Notepad++: not only is it open source with no costs involved, but it's also in continuous development and has a good community backing it. Moreover, there are many plugins already written that will keep our integration work to a minimum. 

You may download notepad++ from the following url :

Now that we have notepad++, let's integrate it with our Ant build script. Our expectations are simple: we want to sit in notepad++ as we edit our code and then, without leaving the editor, simply press F6 when we are ready to compile, and get a status report in a console within notepad++. In this manner we never leave our editor. In addition, it would be nice to browse our files within notepad++ itself. There are quite a few nice add-ins that supply a file explorer; I use Light Explorer. You'll find it in the plugin manager explained below.

Setting up Notepad++ 

First, you'll need to add a plugin capable of processing our build file in a console that is displayed inside notepad++ itself. The plugin is called NppExec and it can be added through the plugin manager :

When your plugin manager loads, it will list all available plugins. Select NppExec from this list; in addition you may also want Light Explorer and JSLint. JSLint is a good tool that will aid you in writing and debugging your js code; you'll be happy to run it every once in a while on the *.debug.js file generated by the build.

As you can note from the plugin manager in the screenshot above, I have already installed all 3 plugins we'll need. You'll have to select them from the “Available” tab. Once you have NppExec installed, you need to tell it about your build.xml file.

Before we do that, try to find a way to organize your projects, because for each project you create (which may not be very often), you'll end up doing the following steps manually each time, basically once per project. So organization is quite important.

The following screenshot demonstrates how I organize projects, if you come up with something more clever, feel free to share it :

As you see, we have Project1 and Project2; each has its own copy of the ant build script. The version file is where you can store the current version number. All your builds will deploy to a dst folder (or a path of your choosing) using the version number.

The tools folder you see in the root contains the tools the build uses, such as YUICompressor, Google closure compiler, jsdocs-toolkit and ant-contrib (we use some extensions to ant, so we need this too). It's all provided along with the build script on codeplex.

You may later download newer versions of these tools from their respective authors and replace them as you see fit keeping everything up to date.

Now, let's proceed to tell NppExec about Project1.

Setting up NppExec plugin in Notepad++ 

Before we start, we need to configure the build file manually and tell it the name under which it needs to build the folder structure. You start with a vanilla version of build.xml; in our example from the screenshots above, for Project1's build file we'll choose “MySite”.

Open the build file under Project1 and run a search and replace on all instances of “projectname”, changing them to the actual name you want to use, then save it. OK, we are ready! After installing the plugin, we need to set it up to process our build file, which we have organized in the Project1 and Project2 folders as in the previous screenshots. In notepad++, go to the menu bar, under Plugins – NppExec – Execute.

The execute dialog will show up :

Now we are ready to include a piece of code that will prepare the console environment for us.
The code to include is :
 ENV_SET PATH = $(NPP_DIRECTORY);C:\apache-ant-1.8.1\bin;D:\Program Files (x86)\Java\jdk1.6.0_22\bin
cd c:\HTML5\Project1

As you can see, we have told it where to find Ant executables that will process our build file and of course, Ant requires Java so also the Java jdk executables path is included. There are many different ways to set this up, I found doing the above the most straight forward and explicit.

You may experiment differently, because for each project you will be copying and pasting this same piece of code; the parts that change are the project name and perhaps the path. Right now we have set up the build for Project1.

Next, we don't want to be doing this repeatedly every time we open notepad++, so we're going to save it under a name, in this case “Project1”.

And that's it. Save it and from now on, every time you open notepad++, you will hit your F6 key to execute your project and then you will find Project1 in the dropdownlist to select from. The script you wrote in the execute box will run and prepare your console for you as follows:

Now you are ready to simply build hitting your F6 key every time you want to build. You'll be doing this after you've written a piece of code, so quite often. It's important to be able to do this in a single click, F6 is awesome that way.

 The execute dialog will look like this now :

In the execute dialog you write the commands you want to run on your build file. Above we execute the build using Ant; this runs all tasks.

All tasks means it will concatenate your script files and compile your js through google closure; your css files undergo the same procedure and are processed by YUICompressor; and if you have documentation, jsdocs will also be generated. If the build succeeds without errors, it is packaged into the dst folder.

That's quite a lot of tasks the build is doing and as your script files grow you'll waste quite a bit of time just waiting for it to finish. So instead of calling your build using the ant command, you can call individual tasks. For instance minifyjs or minifycss etc. It's detailed out in one of the sections below, for now, let's see what happens the first time we run the ant command.

As you can note from the above screenshot, calling ant alone without any arguments runs all tasks, which is fine the first time because we have nothing. What's going to happen is that it will create the folder structure for us, and we end up with the following.

Note how everything is under the com.MySite folder. This is because, prior to starting, if you still remember, we ran a search and replace in the build.xml file and changed all instances of “projectname” to “MySite”. We're using a domain name convention for this, but if you're a little Ant savvy, you can change it to whatever you prefer by editing the build manually. It's all declarative code and very friendly. I love it, so you get my very biased opinion.

However, if you are new, it won't hurt if you can get a book on Ant, there's quite a lot of good material as it enjoys a lot of popularity. Before we proceed, let's look at what happens after we close notepad++ and try to open it again. After re-opening notepad++, hit the F6 key and you get the execute dialog again. But this time the project is saved, so you don't have to enter the commands to warm up the console.

As you add projects and save them, they will appear under the dropdownlist you see in the screenie below. You simply select the project from this list and click OK, then F6 again.

The first time you load a project's build, you may have noticed it's a two-step process, because the first time you have to select the project and it will prepare the console environment by executing the script you see in the dialog above.

From the second time onwards, when you hit F6 you get the Exec dialog that takes your ant command. This same dialog will appear every time you hit F6 from now on, unless you decide to change projects. You basically have to try it 2 or 3 times to get accustomed to it. It's quite difficult to put in writing and may seem complex, but really, it's not.

OK, we are ready. Now let's write a bit of code and then build.

Making your javascript or css code part of this build process is quite simple. Your code has to go in either the Scripts or Css directory the build created for you; these are the directories the build will process. We'll discuss javascript now, but keep in mind the same procedure applies to css as well, where you get concatenation and minification too. Let's pretend we want to create a script file named “MyScript”.

We start by creating a “MyScript” folder under the Scripts folder. Within MyScript, we can create an unlimited number of subfolders, and within each we can add script files. How you arrange your script files is up to you. The important thing to consider is that the build file processes the script files by folder, then filename, in ascending order. So if order is important to you (it certainly is to me), you will find a way to arrange your files alphabetically or numerically.

The way I do it is alphabetical: I name the first folder I want processed “a”, the second “b”, etc. But let's do a code example. I don't like to keep my code in the global namespace, so I start with a closure; this is simply an anonymous outer function that will embody all the code I write, e.g.:

(function(window, document) {
    //everything else will go in here
}(window, document));

The anonymous function above is called by passing the window and document as parameters. In order to ensure that all the code is enclosed within a closure as the code above, we can arrange the scripts as in the following screenshot:

Note the 3 folders under MyScript. Folder “a” has a script file that contains the opening parts of the anonymous function, and folder “z” contains the closing, and since our code will be processed and assembled into a final single file in ascending order, anything in between will get embodied within. Make sense? I hope so because if you understood that then you are ready to write your code and build it!

The final script file is created when you build and is deposited in the MyScript folder under the same name, so you'll end up with MyScript.debug.js and MyScript.min.js respectively. The “debug” file contains your code without minification, while the “min” file contains the optimizations and minification produced by google closure compiler. So we end up with this after a successful build (F6):

Note above that, apart from the stuff that went into the dst folder (your deployment folder), the build also generated 4 new files: MyScript.debug.js, MyScript.min.js, MyScript.extern.local.in and MyScript.extern.thirdparty.in. We'll get to the last 2 files, which were never mentioned before, soon enough; for now focus on the *.debug.js and *.min.js files produced.

The first thing to note is that the files take their name from the root folder under Scripts, and their contents are the combination of all script files within the MyScript folder.

If we inspect the contents of MyScript.debug.js, we see :
(function(window, document) {
    //everything else will go in here
}(window, document));

If you remember from our earlier setup of script files, each line of the code above was in a separate file.

C:\HTML5\Project1\src\com\MySite\scripts\MyScript\a\a.start.js contained :
(function(window, document) {

C:\HTML5\Project1\src\com\MySite\scripts\MyScript\b\a.myObject.js contained :
//everything else will go in here

C:\HTML5\Project1\src\com\MySite\scripts\MyScript\z\a.end.js contained:
}(window, document));

Now it's all combined into a single file. Perfect! We now have everything working. Let's introduce a small error in our js code: for instance, let's add a function in a.myObject.js and deliberately omit a closing brace, just something simple to see if everything is working. Now let's build and see what happens:

If you open debug.js and go to line number 5, you will see that you're missing a closing brace. For a complete list of compiler warnings you'll find useful, visit the following online document : http://code.google.com/closure/compiler/docs/error-ref.html 

About *.extern.local.in and *.extern.thirdparty.in 

Remember the *.extern.local.in and *.extern.thirdparty.in files that were generated when you first built the project? They are useful when compiling in advanced mode. These are simply files where you list methods that should be excluded from renaming when using google closure in advanced mode. This is necessary because the closure compiler renames everything in the project, and you won't be able to access any methods in your objects via the original names you provided.

The minification process wants to give you a very tiny, hyper-crunched file, so it's your job to tell it which method names to leave intact.
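An extern file is just plain javascript declarations, with no implementations. As a sketch, the contents of MyScript.extern.local.in might look something like this (the names are made up for illustration):

```javascript
// Declaring these tells the closure compiler to leave the names
// untouched during advanced optimizations.
var myApp = {};
myApp.init = function() {};
myApp.render = function(elementId) {};
```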

The other file, *.extern.thirdparty.in, can contain a third party library's extern file. Most third parties will provide you with one; for example, jquery has an extern at the following url :

You just need to copy that into your third party extern file. The third party extern is useful when you're calling methods from external libraries like jquery; it contains all the method names, which results in the compiler not renaming any of those methods in your code.
I don't want to repeat what the google documentation says about this, so go ahead and have a good read at the following url :

Externs are only a small part of it. There are many other things to keep in mind when using google closure in advanced mode. Why is advanced mode the default here? Good question; I guess because I work in advanced mode, and I encourage you to do the same.
Just make sure you read the following document as well :

Advanced mode can be turned off in the build. There are instructions on how to do this in one of the sections below.

Don't forget about strong typing in javascript. You can achieve similar functionality by using jsdocs annotations to decorate your code and tell the compiler extra information about your types, arguments, constructors and so forth.
Google closure will warn you when you do something silly, and in this way you can make use of strong typing in javascript. Read all about it at the following url:

In this way you can tell the closure compiler which arguments of a function are optional, what the return type of a function is, the type of a variable and so forth. The above document is complete and a good read. By using these annotations, not only do you get a form of strong typing, but the build can also use them to generate your documentation. Two birds with one stone. Perfect!
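As a small example of what such annotations look like (the function itself is made up for illustration):

```javascript
/**
 * The annotations tell the compiler the parameter and return types;
 * passing, say, a string here would earn you a compiler warning.
 * @param {number} radius The circle's radius.
 * @return {number} The area of the circle.
 */
function circleArea(radius) {
    return Math.PI * radius * radius;
}
```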

Selectively excluding script files from the build

You can put your third party code in the scripts folder too, but by doing so the build will process it and try to minify that code as well. Third party code is already minified by its publisher, so it makes sense to exclude it from the final build. But why put third party code in the scripts folder at all? The reasons vary; I put third party code there because I unit test within the src folder, and if my code relies on third party libraries, I'd like them there.

Another reason you may want script files excluded is if you have two different versions of the same js object (in separate class files, *.js) that do a particular task but target different builds. Before a build you can exclude one, build, then exclude the other, producing two versions of your code that behave a little differently: perhaps one version with particular functionality against jquery and the other against the dojo lib, or what have you. To exclude a script file, open your build.xml and look for the following property :

<property name="scripts.exclude" value="" />

In the value attribute, provide the names of the script files you want excluded. For instance, to exclude foobar1.js and foobar2.js from being assembled and processed, you can specify this as follows :
<property name="scripts.exclude" value="foobar1.js, foobar2.js"/>

The same applies for css files, you simply exclude them by providing the filename in the css.exclude property. Look for it in the build file.

<property name="css.exclude" value="foobar1.css, foobar2.css"/>

Excluding scripts from documentation

After the script files in the scripts folder are assembled, the *.debug.js is processed for documentation. If you wish not to document a certain *.debug.js file, you may specify this in build.xml manually as follows :

<property name="docs.exclude" value="foobar.debug.js"/>

Jsdoc-toolkit templates

Note that jsdocs is very flexible and you can create your own custom templates in the templates folder. For the meat and potatoes on how it works, you may reference the jsdoc-toolkit project's templates section :

As far as our build is concerned, if you want to use a specific jsdocs template, you can specify it in the corresponding template property in build.xml. By default it uses the jsdocs template; replace that value with the name of your custom template.

Controlling compilation level

By default the build will compile in advanced mode. If you want simple mode, you can change this in the following property :

<property name="closureoptimizations" value="ADVANCED_OPTIMIZATIONS" />

Just change the contents of the value attribute to “SIMPLE_OPTIMIZATIONS”.

Pretty print the minification

Sometimes when I'm debugging the minified code, I'd like it nicely formatted; it's so much easier to inspect the output that way. At some point you will appreciate this feature of the google closure compiler, so if you want the minified code to be nicely formatted, add the following line :

<arg line="--formatting=PRETTY_PRINT" />

You can add the above line right below this :

<arg line="--compilation_level=${closureoptimizations}" /> 

If you have other google closure compiler arguments to set, this is how you do it. Other advanced options can be set in their respective tasks; you'll need to look at the build yourself and make sense of it. I have tried to make it as clear and obvious as possible, but when in doubt, just follow how I coded each task and replicate it in case you want to make your own custom task. 

In addition some tasks like minifyjs and minifycss can take additional arguments, for example in case you want to specify the compilation level for google closure compiler or formatted pretty print after minification, you can do so by adding a custom argument or modifying an existing one. 
You'll need to look at their relevant documentation too : http://code.google.com/closure/compiler/docs/compilation_levels.html 

 Build Tasks 

We can supply build tasks individually in the execute dialog :

Calling ant alone, will run all tasks.

 Executing tasks individually 

 To build, compile and minify, supply : 
 ant minifyjs 

To concatenate our script files only, supply : 
 ant concatjs 

To concat and minify our css, supply :
ant minifycss

To concatenate css files only, supply : 
 ant concatcss  

To generate documentation only, supply : 
 ant jsdocs 

Final Notes
HTML5 is all the rage now. We've started moving most of our code to the client side, and Javascript is more important now than ever. The productivity of an automated build is hard to ignore. As you have seen, our solution uses Ant to drive the build process. While there is quite a bit of competition in this space, Ant enjoys a lot of popularity, and there is plenty of adequate documentation written over the years.

We also made use of notepad++ due to its popularity as a lightweight, free code editor. One hiccup is that notepad++ is windows only, but the build itself can be called from a console, so if notepad++ is not available for your platform, you can try to integrate the build into your favorite code editor as we did in this article, or simply call the build script from a console.

Lastly, while I come from a .NET background, the urge to use MSBuild and integrate it into Visual Studio was great. One reason I never pursued this route is that I've actually used Visual Studio .NET over the years to edit and debug my Javascript, and I'm not impressed; it's too heavy.

I wanted to enjoy more freedom on how I write Javascript without the limitations of a single IDE, platform and extra costs. Writing an Ant build opened up a lot of possibilities.

How good is this build?
I wrote this build somewhere around November 2010 and I've been using it since. I must say I've never had any real problems; it's worked flawlessly for me. I release it in the hope that you have the same experience, or are able to adapt it to your own setups and environments.

The project can be downloaded from http://jsbuild.codeplex.com/
under an MIT license; feel free to use it in any way you see fit.

Sunday, October 2, 2011

Abmho --A syntax highlighter that runs online and offline

Introducing Abmho –the syntax highlighter wizard 

The first stable release of Abmho is out. It is by no means the first or only syntax highlighter on the web. There are quite a few out there but the vast majority aren't usable. Abmho can run on the web or as a standalone silverlight application on your desktop.

Since I like my code colored and styled when pasting code snippets in blog posts, I decided to write an app that did only that. I hardly blog by the way, hopefully this will motivate me more.

Now, let's look at how Abmho works and what it can do. There are four steps in the process.

You start by pasting your code, as you see in the screenie below, this is done in the first step.

Step 1 - Paste your code

Next, you need to state what language your code is in. This step also lets you set a language at the line level. What this means is that you can mix languages in a single snippet. No other syntax highlighter can do this by the way.

Step 2 - Compose the language

Mixing languages has always been problematic for me because in asp.net you can mix code and markup in the same template file. This meant that I had to copy the code separately and the markup separately. Quite the work it used to be but not anymore.

By default, the selected language will apply to all lines. If you want to mix languages, you must first “Uncheck” the checkbox and then select line/s and apply at the line level.

Step 3 - Settings and live preview

One thing to note in the above step is that when setting the background color, the code highlight will try to contrast the background automatically. This is not a predefined scheme and there are some edge cases where the contrasting might not be perfect. Luckily you have many colors to choose from.
Lastly, you collect the output. You simply copy the code either by right-clicking (select all, copy) or using the dedicated copy button. The code is then embedded in your page manually. The generated output, as you can see, is plain css/html, where you can optionally specify that classes be used instead of inline css for styling the layout.
Step 4 - Copy the output and embed directly on a web page

The application is written in silverlight and hosted online, but you can opt to run it in OOB mode (out of browser mode). In this mode, the application runs as a desktop application, meaning it is available to you offline and you don't have to navigate to our website; instead the application is installed locally on your desktop, and you can launch it from either your start menu or a desktop shortcut.

Currently I don't run ads and there is no business model to support this project. Creating the application in silverlight allowed me to keep server costs at a minimum as I have no code running serverside (The only cost to maintain is the domain Abmho.com). The silverlight application itself is hosted on google app engine.

Why did I make it?
While I find it helpful as an application for my own purposes, whether to provide highlighted syntax in my applications or in my blog posts, writing this application has also let me experiment with silverlight, the mvvm pattern and prism. Simple applications like this help me improve my coding on larger projects and make better decisions overall. So why not?

What's next?
Not much really. Next I will be porting this application to metro style on windows 8, hopefully with minimum effort (or so I like to think). If I feel like it, of course!

10/10/2011 --A new version of abmho is out. Those of you who installed the oob mode (desktop version) must first uninstall the previous version (control panel, find abmho and uninstall), then proceed to our install page and reinstall it. Subsequent releases will update automatically.

10/11/2011 --A new version is out, Abmho v1.3. Thanks to everybody reporting problems through the app itself. For OOB mode, we are unable to provide automatic updates because that requires a trusted code-signing certificate, which costs an arm and a leg.

This is a free app, so follow the instructions on the install page: simply right-click on the install page, choose "Remove the application", then proceed to reinstall using the same page (the install button will re-enable itself).

10/11/2011 --A new version is up, Abmho v1.4. Why so many versions in one day? Because I'm making fixes and pushing them out as they become available. This will probably be the last one for a while. We no longer require elevated permissions for OOB. What a relief, since we don't want to invest in a code-signing certificate. I'm happy!

10/14/2011 --A new version is up, Abmho v1.6. This fixes an unexpected background worker error when using the wizard, navigating within the site, and returning to the wizard again. Many of you reported this problem and you were all very kind. Thanks! We were not disposing of the wizard correctly when navigating away from the home page. This is fixed now and I am happy!

Thursday, May 5, 2011

Binding data from the ViewModel to the Wpf Menu control

Databinding the Menu control in WPF to a viewmodel can require a little bit of effort. Since a Menu control can contain hierarchical data, you must think of a data model to represent your menu items upfront, so I've decided to write a little about it. But before we dive into the meat and potatoes, let's start with the basics.

Let's see what a simple statically defined menu control looks like, and then we'll transform it into a data model we can bind to from our viewmodel.

The statically defined menu :
    <Border VerticalAlignment="Top" Height="25" BorderThickness="0,0,0,1" 
            Background="#FFE4E2E2" BorderBrush="#FFA0A0A0">
        <Menu Height="25" VerticalAlignment="Top">
            <MenuItem Header="_File">
                <MenuItem Header="New/Open project"/>
                <MenuItem Header="Save"/>
                <MenuItem Header="Exit"/>
            </MenuItem>
            <MenuItem Header="_Help">
                <MenuItem Header="About us"/>
            </MenuItem>
        </Menu>
    </Border>

When you run the above piece of code, this is what you will see:

Notice that each MenuItem is a HeaderedItemsControl. This is simply an ItemsControl with a Header.

Because each MenuItem is a HeaderedItemsControl, our Menu can nest to an unlimited depth: each item has a header (a label, the title you see displayed in the menu) and an items collection (its sub-menu items). A quick example can be represented declaratively as follows:

      <MenuItem Header="_File">
           <MenuItem Header="New/Open project"/>
      </MenuItem>
See how each MenuItem can in turn contain nested MenuItems, with the Header property set to the title we want to display.

To bind the Menu control to our viewmodel, we need to set bindings for each menu item's header, and additionally we'll need to set a command parameter so that we can identify the item clicked.

In order to achieve this, we'll set the bindings through the ItemContainerStyle. This lets us target the element generated for each MenuItem so that we can pass the Header value and, additionally, a command parameter value to identify the item. By using a Style element in ItemContainerStyle, we can set the TargetType to MenuItem as in the example code below, and the property setters will apply the value bindings to each MenuItem. Perfect.

<Menu ItemsSource="{Binding MenuItems}">
    <Menu.ItemContainerStyle>
        <Style TargetType="MenuItem">
            <Setter Property="Command" Value="{Binding Command}" />
            <Setter Property="CommandParameter" 
                       Value="{Binding CommandParameter}" />
            <Setter Property="Header" Value="{Binding Header}" />
            <Setter Property="ItemsSource" Value="{Binding Items}"/>
        </Style>
    </Menu.ItemContainerStyle>
</Menu>

Let's quickly walk through the piece of xaml above. As you can see, the Menu control itself is bound to a MenuItems collection exposed by our viewmodel.

Xaml :
<Menu ItemsSource="{Binding MenuItems}">

private ObservableCollection<FileMenu> _menuItems;

public ObservableCollection<FileMenu> MenuItems
{
    get
    {
        return (_menuItems = _menuItems ??
            new ObservableCollection<FileMenu>());
    }
}

Xaml :
<Style TargetType="MenuItem">
    <Setter Property="Command" Value="{Binding Command}" />
    <Setter Property="CommandParameter" 
               Value="{Binding CommandParameter}" />
    <Setter Property="Header" Value="{Binding Header}" />
    <Setter Property="ItemsSource" Value="{Binding Items}"/>
</Style>

The above xaml binds to each item in the MenuItems ObservableCollection our viewmodel exposes.

public class FileMenu : ModelBase
{
    private ICommand _command;
    private ObservableCollection<FileMenu> _items;

    public string Header { get; set; }

    public string CommandParameter { get; set; }

    public ICommand Command
    {
        get { return (_command = _command ??
            new DelegateCommand<string>(
                OnMenuItemClick, (x) => IsEnabled)); }
    }

    public ObservableCollection<FileMenu> Items
    {
        get { return (_items = _items ??
            new ObservableCollection<FileMenu>()); }
    }
}

The FileMenu class above represents each item in the menu's collection. As you can see, it's a very simple object with just the needed properties. Note also that each item can contain child items, which allows us to build a hierarchical menu dynamically in code. Looking back at the previous xaml listing, this is exactly what the following line does:

<Setter Property="ItemsSource" Value="{Binding Items}"/>

The ItemsSource property is exposed by the MenuItem control. In this manner, if each FileMenu item in turn contains submenu items, they will be bound too. Nice!
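To make the "build it dynamically in code" part concrete, here is a minimal, self-contained sketch of composing the nested menu in a viewmodel. MenuNode is a hypothetical stand-in for the FileMenu class above with the ICommand plumbing left out, since ModelBase and DelegateCommand come from the sample's Prism setup:

```csharp
using System;
using System.Collections.ObjectModel;

// Simplified stand-in for the FileMenu class above; the ICommand
// plumbing is omitted so the sketch stays self-contained.
public class MenuNode
{
    public string Header { get; set; }
    public string CommandParameter { get; set; }

    private ObservableCollection<MenuNode> _items;
    public ObservableCollection<MenuNode> Items
    {
        get { return (_items = _items ?? new ObservableCollection<MenuNode>()); }
    }
}

public static class MenuDemo
{
    public static void Main()
    {
        // Build the same _File menu shown earlier, dynamically in code.
        var file = new MenuNode { Header = "_File" };
        file.Items.Add(new MenuNode { Header = "New/Open project", CommandParameter = "Open" });
        file.Items.Add(new MenuNode { Header = "Save", CommandParameter = "Save" });
        file.Items.Add(new MenuNode { Header = "Exit", CommandParameter = "Exit" });

        var menuItems = new ObservableCollection<MenuNode> { file };
        Console.WriteLine(menuItems[0].Items.Count); // 3
    }
}
```

Expose such a collection as the MenuItems property (as the FileMenu version above does, adding the command plumbing) and the Menu control will render the whole hierarchy.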

We now have a perfectly bindable hierarchical menu and, what's more, it follows the MVVM pattern with clear separation.

Be sure to check out the sample application since code speaks for itself!

Download the sample code

Friday, April 29, 2011

Getting the Most Out of the Winforms PropertyGrid Control in WPF

The PropertyGrid control has been missing from WPF since its release and sadly it's still missing! There have been various attempts by third parties to provide one; I can't speak for the commercial offerings, but I find the ones in the open source space quite lacking and incomplete. Most projects I found on CodePlex are still in beta!

I really like the API exposed by the original PropertyGrid, which I have been using for years now, but it's a Winforms control, and this presents some problems. The main issue for me is that the Winforms version isn't styleable. That means some inconsistency in my UI, where it will stick out as shabby and odd.

Below is a screenshot of the PropertyGrid bound to a simple “Person” object, that exposes a complex type Address and a vehicle collection property.

As old as it may look, it works well and does its job nicely! I love this control. In the end, I've decided to use it in my Wpf application regardless of the oddity it brings. Clearly, functionality is a lot more important, and since beauty lies in the eyes of the beholder, looks are a subjective matter. And writing a brand new native Wpf PropertyGrid control is out of the question.

Looks apart, think of the great things this control can do. You practically bind your objects to it and it lists them in a neat, categorized grid, with many built-in editors for editing colors, browsing for images, and browsing and editing collections using the built-in CollectionEditor. And if that's not enough, you can write simple extensions with your own custom editors. Indeed, this control is a gem!
Update 4/30/2011
It seems that since we are referencing a Winforms library, in particular System.Windows.Forms, our client app is forced to target the fatter full .NET 4.0 framework instead of the slimmer .NET 4.0 Client Profile, which is the default.
It's not such a big con, but most certainly something to consider. I've left some more instructions at the bottom of this post.
One thing in particular that I've done is wrap it up in a custom Wpf control, because in order to use the Winforms version in Wpf we'll need to:
  1. Interop via WindowsFormsHost (as easy as eating cake).
  2. Bind our ViewModel to the PropertyGrid directly from the View declaratively, which allows us to avoid tight coupling with our ViewModel.
Here's what the custom controls template looks like :

<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
    <Style TargetType="{x:Type local:WpfPropertyGrid}">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="{x:Type local:WpfPropertyGrid}">
                    <WindowsFormsHost x:Name="host">
                        <o:PropertyGrid x:Name="propertyGrid1"/>
                    </WindowsFormsHost>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>
</ResourceDictionary>

Minus the namespaces and the common style declarations you'll likely do for any templating requirement, it's fairly simple. All we need is to nest the Winforms PropertyGrid inside a WindowsFormsHost.

    <WindowsFormsHost x:Name="host">
        <o:PropertyGrid x:Name="propertyGrid1"/>
    </WindowsFormsHost>

That's pretty much all of the code. Simple indeed. The Wpf control itself consists of a single dependency property plus a small amount of plumbing. Attached to this article is a sample application containing the control, so you can get the hang of how it works. The View that consumes the PropertyGrid comes down to a single line of markup binding the custom control to a property on the ViewModel declaratively, e.g.:

 <local:WpfPropertyGrid SelectedObject="{Binding PersonItem, Mode=TwoWay}" />

See! Now powered with such a fantastic control you can provide easy editing of objects in your application and maintain your MVVM pattern. I love it.
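For the curious, the dependency property plumbing behind the wrapper might look roughly like this. This is a hypothetical sketch, not the exact code from the sample download; it assumes the "host" and "propertyGrid1" names from the control template above:

```csharp
// Sketch of a WpfPropertyGrid wrapper: a single SelectedObject
// dependency property forwarded to the hosted Winforms grid.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Forms.Integration;

public class WpfPropertyGrid : Control
{
    public static readonly DependencyProperty SelectedObjectProperty =
        DependencyProperty.Register("SelectedObject", typeof(object),
            typeof(WpfPropertyGrid),
            new PropertyMetadata(null, OnSelectedObjectChanged));

    private System.Windows.Forms.PropertyGrid _grid;

    static WpfPropertyGrid()
    {
        // Pick up the default style (the template shown earlier).
        DefaultStyleKeyProperty.OverrideMetadata(typeof(WpfPropertyGrid),
            new FrameworkPropertyMetadata(typeof(WpfPropertyGrid)));
    }

    public object SelectedObject
    {
        get { return GetValue(SelectedObjectProperty); }
        set { SetValue(SelectedObjectProperty, value); }
    }

    public override void OnApplyTemplate()
    {
        base.OnApplyTemplate();
        // Grab the WindowsFormsHost from the template and push the
        // current value into the hosted Winforms PropertyGrid.
        var host = GetTemplateChild("host") as WindowsFormsHost;
        if (host != null)
        {
            _grid = host.Child as System.Windows.Forms.PropertyGrid;
            if (_grid != null)
                _grid.SelectedObject = SelectedObject;
        }
    }

    private static void OnSelectedObjectChanged(DependencyObject d,
        DependencyPropertyChangedEventArgs e)
    {
        var control = (WpfPropertyGrid)d;
        if (control._grid != null)
            control._grid.SelectedObject = e.NewValue;
    }
}
```

The change callback keeps the Winforms grid in sync whenever the binding pushes a new object, which is what makes the declarative `SelectedObject="{Binding PersonItem}"` binding work.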

Make sure you also read the following resources on MSDN if you're new to the PropertyGrid. They provide all the basics you'll need to get up and running quickly.

http://msdn.microsoft.com/en-us/library/aa302326.aspx (Basics about the PropertyGrid)
http://msdn.microsoft.com/en-us/library/ms742875.aspx (About hosting winforms in wpf)

The sample application included is a basic example using a single person object that in turn has a complex property and a collection property. Once you edit the properties and hit the Ok button, it will display the changes in the object via a messagebox.

Nothing fancy, but you can see some simple MVVM, the creation of a simple custom value converter for the complex type “Address” exposed as a property, and the creation of a simple collection editor to allow editing the “Vehicles” collection property. Just enough to get you started.

Update: 4/30/2011
A gotcha I forgot to mention: you'll have to reference System.Windows.Forms, the assembly in which the PropertyGrid control resides. When trying to reference this library from the project references dialog, you won't find it in the list of available dlls. That's because by default your project uses the .NET 4.0 Client Profile, so go into your project properties window and change the target from .NET 4.0 Client Profile to .NET 4.0. After this step, try referencing the dll again and it will be in the list.

The Microsoft .NET Framework 4 Client Profile provides a subset of features from the .NET Framework 4. The Client Profile is designed to run client applications and to enable the fastest possible deployment for Windows Presentation Foundation (WPF) and Windows Forms technology. Application developers who require features that are not included in the Client Profile should target the full .NET Framework 4 instead of the Client Profile.
Update: 5/25/2011
I was able to change the target framework back to the default Client Profile after referencing System.Windows.Forms; it seems this assembly is already part of the Client Profile. What will throw you off is referencing System.Design.dll, which does require the full version of .NET. One typical need arises when developing custom type editors, because most of the existing type editors live in the System.Design dll. Still, thankfully, my needs for custom type editors were pretty basic and I got away with developing one from scratch (inheriting UITypeEditor).
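A from-scratch editor of that sort might look roughly like this. This is a hypothetical example, not the editor from the sample; UITypeEditor lives in System.Drawing.dll, so no System.Design reference is needed:

```csharp
// Hypothetical minimal custom editor for the Winforms PropertyGrid:
// derive from UITypeEditor and override GetEditStyle/EditValue.
using System;
using System.ComponentModel;
using System.Drawing.Design;
using System.Windows.Forms.Design;

public class UpperCaseStringEditor : UITypeEditor
{
    public override UITypeEditorEditStyle GetEditStyle(
        ITypeDescriptorContext context)
    {
        // Modal shows a "..." button next to the value in the grid.
        return UITypeEditorEditStyle.Modal;
    }

    public override object EditValue(ITypeDescriptorContext context,
        IServiceProvider provider, object value)
    {
        var svc = provider == null ? null :
            (IWindowsFormsEditorService)provider.GetService(
                typeof(IWindowsFormsEditorService));
        if (svc != null)
        {
            // A real editor would show a dialog via svc.ShowDialog(...);
            // here we just transform the value to illustrate the hook.
            return (value as string ?? string.Empty).ToUpperInvariant();
        }
        return value;
    }
}
```

You attach it to a property with `[Editor(typeof(UpperCaseStringEditor), typeof(UITypeEditor))]`, and the grid will invoke EditValue when the user clicks the editor button.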

I found that not setting the height and width explicitly on the control makes its load time slow and awkward at best. The fix is to be explicit about the height and width. This must be due to layout differences between WPF and Windows Forms. The following article on MSDN has the meat and potatoes.


So, in case it's not clear, when using the control, this is what you want to do:

<my:WpfPropertyGrid SelectedObject="{Binding PersonItem}" Width="290" Height="350" />

Notice the explicit Width and Height above. Now it should load super fast! Keep reading the MSDN article I link to above; it has some pretty good information.

And a small correction to the article: it's not entirely true that you cannot style the PropertyGrid. You can do some basic styling of the Winforms PropertyGrid; what you cannot do is enjoy the complete designer experience you currently get with WPF controls. So it's not as tragic as I made it sound.

Download the sample application