Windows 8 Live Tiles and NotificationsExtensions

May 16, 2013

One of the ways to make your Windows 8 Store App stand out from the crowd is to include a Live Tile. There are different ways to get a tile to update but the easiest is to create a local update of your tile. A local update happens when your app changes the tile while it is running.

There are plenty of blog posts that show the basics of creating and using a live tile, like Ged Mead’s post on adding a live tile.

But I found that working with the XML can get tedious at times. Turns out Microsoft thought the same thing and created a wrapper for doing live tile work without having to play with XML. The library is called NotificationsExtensions.WinRT and is published through NuGet. After installing the package, add an Imports NotificationsExtensions.TileContent statement to your code.

The nicest thing about it in my mind is that you can treat the tiles like normal objects with properties and methods. So instead of creating a tile using TileUpdateManager.GetTemplateContent, you use the TileContentFactory. This doesn’t seem like such a big deal, but the resulting object is no longer an XmlDocument but an object with properties like Image or TextBody1 that can be set.

So as an example, let’s take the TileSquareBlock tile template and see how it changes the code. (A complete example is in Ged’s blog post referenced above.)

Dim tileXml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquareBlock)

becomes this:

Dim tileXml1 = TileContentFactory.CreateTileSquareBlock()

Not that big of a change. Now, if we want to set the text for the first text tag, we have to find the tag and then add text to it, like this:

Dim textElements = tileXml.GetElementsByTagName("text")
Dim tileText1 = textElements(0)
tileText1.AppendChild(tileXml.CreateTextNode("10"))

But using NotificationsExtensions, the same code looks like this:

tileXml1.TextBlock.Text = "10"

Clearly this is easier to read and understand. It really shines when it comes to adding images. For example, if we were working with the TileSquarePeekImageAndText01 template and wanted to add an image to the tile, all we would have to do is this:

tileXml2.Image.Src = "ms-appx:///images/image1.jpg"

The only other code you have to change is when you are creating the notifications. Instead of this.

Dim notification = New TileNotification(tileXml)

It would look like this.

Dim notification1 = tileXml1.CreateNotification()

I’m not sure why this isn’t built into the SDK for Store apps, but with NuGet it is easy to add and certainly worth the few minutes it takes.

The extensions also include classes for badge and toast content.


XAML, Blend, and Snapped View

April 29, 2013

I was working on a single-page Windows 8 Store app using XAML and I knew I needed to create a snapped view. The problem was that I didn’t know how to get this to work. When you use the built-in templates, Microsoft does a lot of the work for you, but if you are doing it all by hand it can be difficult to figure out exactly what you need to do.

I decided I’d put together a little walkthrough of setting up an app that handles the snapped view to help others in my situation. This example is in VB.NET, but I’ve also done it in C#; both final examples are available for download from my SkyDrive.

The walkthrough will create a simple app with 3 horizontal buttons that will change orientation when in snapped mode.

First create an app

Open Visual Studio and select New Project. Under Visual Basic, select the Windows Store option, then the Blank App (XAML) template. I’ve named my app TrafficLight, but you can name it whatever you want.


The App.xaml.vb code window should open. We won’t need this so you can close it.

Making the App do something

We will be working with the MainPage.xaml file from your solution. Double click it to open it in the VS designer. You should see a blank page. If you haven’t done so, make sure the split view is turned on (I prefer Horizontal).

In the XAML code for the page you should see a <Grid… definition. Change the Grid definition by adding a name (mainGrid), so it looks like this:

    <Grid x:Name="mainGrid" Background="{StaticResource ApplicationPageBackgroundThemeBrush}">

Now lets add a StackPanel to hold our buttons:
    <Grid x:Name="mainGrid" Background="{StaticResource ApplicationPageBackgroundThemeBrush}">

        <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" VerticalAlignment="Center">

and now let’s add a button inside the StackPanel:

            <Button x:Name="redLight" Content="Red"
Height="200" Width="200" Click="light_Click"/>

You will want to add two more buttons just like this one, but with the x:Name set to yellowLight and greenLight and the Content set to Yellow and Green.

After doing this your page should have 3 boxes on it and look something like this:



Just to make the app actually do something, we can add some code to MainPage.xaml.vb that changes the background of the main grid to the color of the button we click. Find the Click attribute in any of the buttons, right click on it, and select Navigate to Event Handler. This opens MainPage.xaml.vb and positions your cursor inside the click event handler. Add this code to it:

        Dim bColor As SolidColorBrush = New SolidColorBrush()
        Select Case sender.Content
            Case "Red"
                bColor.Color = Colors.Red
            Case "Yellow"
                bColor.Color = Colors.Yellow
            Case "Green"
                bColor.Color = Colors.Green
        End Select
        mainGrid.Background = bColor

You will have to fix the errors under Colors by importing the Windows.UI namespace.

At this point you should be able to run the app and change the background color of the page by clicking on the buttons.

You should also notice that if you snap the app to one of the sides you will only see a portion of the Red and Green buttons.

Handling Snapped

There are two reasons the app looks this way: we didn’t define how the app should behave in snapped mode, and we didn’t tell the app to detect that the size of the window had changed.

First, let’s decide how we want the app to look in snapped mode.

Stop the app if it is still running. Then open the MainPage.xaml file and add the following code after the StackPanel:

        <VisualStateManager.VisualStateGroups>
            <!-- Visual states reflect the application's view state -->
            <VisualStateGroup x:Name="ApplicationViewStates">
                <VisualState x:Name="FullScreenLandscape"/>
                <VisualState x:Name="Filled"/>
                <VisualState x:Name="FullScreenPortrait"/>
                <VisualState x:Name="Snapped"/>
            </VisualStateGroup>
        </VisualStateManager.VisualStateGroups>

This code defines the different visual states of the app. The x:Name values can be anything, as long as they match the names you pass to VisualStateManager.GoToState later.

Now that we have setup our different states we need to define how they will look. We will actually be setting up a storyboard for the snapped view and to do this we use Blend.

Right click on the MainPage.xaml file in your solution explorer and select Open in Blend… You can do all this without using Blend, but Blend makes it much easier to get started.

Once Blend starts and opens your file we can start changing the views. At the top left you should see four tabs that look like this:


We are going to be working with the States and Device tabs. Click on the States tab and you should see the different states we defined at the bottom of our page:


We will only be modifying the Snapped view in this walkthrough but you can certainly experiment with the other views.

I like to set the device first, so go to the Device tab, select the snapped view, then come back to the States tab.

Once back in the States tab, click on the Snapped item. You should see a red dot next to it, the rendered view should have a red border around it, and the words "Snapped state recording is on" should be showing.

Now comes the fun part. In the Objects and Timeline tab, open mainGrid by clicking on the triangle to its left and you should see the StackPanel. Click on the StackPanel; your display should look like this:


Now on the right side of the screen you should see the properties of the StackPanel. Go to the Layout section and change the Orientation to Vertical.


This will change the display so that you can now see all the buttons. You will also see a red dot next to the StackPanel.

Save the MainPage.xaml file and close Blend. When you go back to Visual Studio you will see a dialog saying the file was changed outside of Visual Studio. Select Yes to All.

Now if you look at the XAML code for MainPage.xaml you will see that inside the VisualState for Snapped there is a storyboard with instructions on how the page gets modified.

We still need to add one more thing to make it all work: we have to let the page know that the window’s size has changed, which means adding an event handler to the VB code. Go to the top of the XAML code and put the cursor above the Grid definition. Now go to the properties panel on the right, click on the lightning bolt (event handlers), and find SizeChanged. Enter Current_SizeChanged and hit Enter. You will be taken to the VB code, where you will need to add the following:

        Select Case Windows.UI.ViewManagement.ApplicationView.Value
            Case ApplicationViewState.Filled
                VisualStateManager.GoToState(Me, "Filled", False)
            Case ApplicationViewState.FullScreenLandscape
                VisualStateManager.GoToState(Me, "FullScreenLandscape", False)
            Case ApplicationViewState.Snapped
                VisualStateManager.GoToState(Me, "Snapped", False)
            Case ApplicationViewState.FullScreenPortrait
                VisualStateManager.GoToState(Me, "FullScreenPortrait", False)
        End Select

Now you should be able to run the app and the StackPanel will change orientation when you put the app in snapped mode.

The one thing I had trouble with was making sure I was recording while in Blend; if you aren’t careful, you will be modifying the base layout of your app. Other properties could be modified as well. I chose to simply change the StackPanel orientation, and I’ll leave further changes up to you.

Saving Data in a Windows 8 JavaScript App

February 23, 2013

Recently I started to experiment with ESPN’s APIs; you can sign up for an API key for free on ESPN’s developer site. I wanted to keep the results coming back from ESPN by saving the data to disk, so I wouldn’t have to continuously hit the ESPN site while I was playing. I figured I would need a settings flag to tell me the last time I hit the ESPN site for data, and file storage to save the results.
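The staleness check I had in mind can be sketched like this. This is just a sketch: the settings object here stands in for the WinRT localSettings.values collection (which persists between runs), and the flag name and one-day threshold are my own choices:

```javascript
// Decide whether the cached ESPN data is stale.
// "settings" stands in for localSettings.values; in the real app it would be
// Windows.Storage.ApplicationData.current.localSettings.values.
var ONE_DAY_MS = 24 * 60 * 60 * 1000;

function needsRefresh(settings, now) {
    var last = settings["lastEspnFetch"];
    // No flag yet means we have never fetched.
    if (last === undefined) {
        return true;
    }
    return (now - last) > ONE_DAY_MS;
}

// After a successful fetch, record the time so the next run can skip the call.
function recordFetch(settings, now) {
    settings["lastEspnFetch"] = now;
}
```

With something like this in place, the app only hits the ESPN site when the cached data is stale and otherwise reads the saved file.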

The data I was working with was all the sports teams ESPN has in its database. I didn’t limit it; I just asked for all the sports using the sports URL followed by my API key. I decided to use the JavaScript Grid App template for my testing, which would let me return the data and group it by league.

The Grid App gives you a data.js file with some sample data in it. I used that as my template and modified the returned data from ESPN to fit it. Inside the data.js file I replaced the generateSampleData call with my call to the ESPN data using a WinJS.xhr call to retrieve the data into something that I can parse using JSON.parse.

So I changed this:

    // TODO: Replace the data with your real data.
    // You can add data from asynchronous sources whenever it becomes available.
    generateSampleData().forEach(function (item) {

To This:

    WinJS.xhr({ url: " your api code here" }).then(function (xhr) {
        var items = JSON.parse(xhr.responseText);
        items.sports.forEach(function (item) {
            if (item.leagues != undefined) {
                var leaguesarray = item.leagues;
                var leagueSport = {
                    // group fields for this sport (key, title, and so on)
                };
                leaguesarray.forEach(function (item) {
                    if (item.shortName != undefined) {
                        switch (item.shortName.substring(0, 4)) {
                            case 'MLB':
                                item.backgroundImage = '';
                                break;
                            case 'NFL':
                                item.backgroundImage = '';
                                break;
                            case 'NCAA':
                                item.backgroundImage = '';
                                break;
                            case 'NBA':
                                item.backgroundImage = '';
                                break;
                            case 'WNBA':
                                item.backgroundImage = '';
                                break;
                        }
                        item.title = item.shortName;
                        item.group = leagueSport;
                        // ...add the item to the binding list as the
                        // template's sample code does
                    }
                });
            }
        });
    });


I also added some code to grab logos for some of the leagues from Wikipedia. It wasn’t necessary and they don’t look pretty but it helped me while I was debugging.

So far so good. I can run the code and it returns teams with their leagues and displays them in the grid app. I can even select a league or an individual team and get a page for that league or team.

Now I wanted to add my code to save the data. It appeared that all I would have to do is add a call to create a file and then write to it. First I added references to the application data objects to my code:

    var applicationData = Windows.Storage.ApplicationData.current;
    var localSettings = applicationData.localSettings;
    var localFolder = applicationData.localFolder;

Then I created a function to create the file and write to it. I told the createFileAsync method to overwrite the file if it already existed. The function looked like this:

    //Store the returned data locally
    function storeESPNData(items) {
        // "espnData.txt" is a placeholder name for the local file
        localFolder.createFileAsync("espnData.txt",
            Windows.Storage.CreationCollisionOption.replaceExisting)
            .then(function (sampleFile) {
                return Windows.Storage.FileIO.writeTextAsync(sampleFile, items);
            }).done(function () { });
    }

I added a call to this routine right after the call to JSON.parse.

I tested the code and looked at the storage location to see what it had written. The file is stored in the C:\Users\userid\AppData\Local\Packages\packageguidinfo\LocalState directory. But when I looked at the file in Notepad, the only thing in it was [object Object].

I was a bit confused for a second until I realized that I had told the system to write out an object but I hadn’t told it to serialize the data. So, being the VB/C# programmer that I was, I went looking for the serialize method in JavaScript. That turned out to be a little fruitless, but I did come across something called stringify. This appeared to be the correct call, so I modified my storeESPNData function to call stringify when I called the writeTextAsync method.
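The difference is easy to see even outside of WinRT: coercing an object to a string calls its default toString, while JSON.stringify serializes it. A minimal sketch, with a made-up object standing in for the parsed ESPN data:

```javascript
// A plain object standing in for the parsed ESPN data.
var items = { sports: [{ name: "baseball" }, { name: "football" }] };

// What writeTextAsync received originally: the object coerced to a string.
var coerced = "" + items;            // "[object Object]"

// What it should receive: the JSON text of the object.
var serialized = JSON.stringify(items);
```

Round-tripping serialized through JSON.parse gets the original structure back, which is exactly what the app does when it reads the saved file.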

The return line now looks like this.

            return Windows.Storage.FileIO.writeTextAsync(sampleFile, JSON.stringify(items));

After rerunning the code I now have what I was looking for: the entire JSON data stored in a local file. I can now experiment with this data to my heart’s content, with or without a network connection. The entire code sample (including the extra code to read the local file) can be found on my SkyDrive.

Remember to get your own API code from ESPN and add it to the code.

Convert DOC to DOCX using PowerShell

July 6, 2012

I was tasked with taking a large number of .DOC and .RTF files and converting them to .DOCX. The files were then going to be imported into a SharePoint site. So I went out on the web looking for PowerShell scripts to accomplish this. There are plenty to choose from.

All the examples on the web were the same with some minor modifications. Most of them followed this pattern:

$word = New-Object -ComObject word.application
$word.Visible = $False
$saveFormat = [Enum]::Parse([Microsoft.Office.Interop.Word.WdSaveFormat], "wdFormatDocumentDefault")

#Get the files
$folderpath = "c:\doclocation\*"
$fileType = "*doc"

Get-ChildItem -Path $folderpath -Include $fileType | ForEach-Object {
    $opendoc = $$_.FullName)
    $savename = ($_.FullName).Substring(0, ($_.FullName).LastIndexOf("."))
    $opendoc.SaveAs([ref]"$savename", [ref]$saveFormat)
    $opendoc.Close()
}

#Clean up
$word.Quit()

After trying out several I started to convert some test documents. All went well until the files were uploaded to SharePoint. The .RTF files were fine, but even though the .DOC files were now .DOCX files, they did not allow all the functionality of .DOCX to be used.

After investigating a little further, it turns out that when converting from .DOC to .DOCX the files are left in compatibility mode. The files are smaller, but they don’t allow things like coauthoring.

So back to the drawing board and the web, and I found a way to turn compatibility mode off. The problem was that it required more steps, including saving and reopening the files. To use this method I had to add a compatibility-mode value:

$CompatMode = [Enum]::Parse([Microsoft.Office.Interop.Word.WdCompatibilityMode], "wdWord2010")

And then change the code inside the {} from above to:

$opendoc = $$_.FullName)
$savename = ($_.FullName).Substring(0, ($_.FullName).LastIndexOf("."))
$opendoc.SaveAs([ref]"$savename", [ref]$saveFormat)
$opendoc.Close()
$converteddoc = Get-ChildItem "$savename.docx"
$opendoc = $$converteddoc.FullName)
$opendoc.SetCompatibilityMode($CompatMode)
$opendoc.Save()
$opendoc.Close()

It worked, but I didn’t like it. So back to the web again, and this time I stumbled across the real way to do it: use the Convert method. No one else seems to use it in any of the examples, but it is a much cleaner way than the compatibility-mode setting. This is how I changed my code, and now all the files come into SharePoint as true .DOCX files.

$word = New-Object -ComObject word.application
$word.Visible = $False
$saveFormat = [Enum]::Parse([Microsoft.Office.Interop.Word.WdSaveFormat], "wdFormatDocumentDefault")

#Get the files
$folderpath = "c:\doclocation\*"
$fileType = "*doc"

Get-ChildItem -Path $folderpath -Include $fileType | ForEach-Object {
    $opendoc = $$_.FullName)
    # Convert upgrades the document to the current format, leaving compatibility mode behind
    $opendoc.Convert()
    $savename = ($_.FullName).Substring(0, ($_.FullName).LastIndexOf("."))
    $opendoc.SaveAs([ref]"$savename", [ref]$saveFormat)
    $opendoc.Close()
}

#Clean up
$word.Quit()

Solar Power

February 12, 2011

I’ve always liked the idea of solar power. The thought of getting clean power from the sun just seemed so logical. But like many of us I didn’t really have the time to look into it in detail so I did nothing.

My wife got tired of hearing me mention it so when she saw a flyer for a solar energy talk at a big box home store she signed us up. We knew it would be mostly an advertisement but we figured we might learn something. Long story short, we did get solar installed…but it isn’t as easy (or as hard) as you might think.

Where to put them?

If you are thinking about installing solar panels, the very first thing to consider is where to put them. There are two options: on the roof or on the ground. Most people think of the roof first. To put panels on the roof you need a house pointing in the right direction; if your roof faces south, or even slightly off from south, you are in good shape. If the direction works, the second thing to look at is the age of your roof. If it is more than 10 years old you will need to install a new one; most installers want as new a roof as possible. If you are still okay with a roof mount, the last thing you need is lots of space. A ranch-style house is probably fine, but if you have angles or dormers on the roof then you probably don’t have enough room.

The alternative to the roof is to have a ground mount installed. For this you need a space that is in the open (no trees to block the sun). We had to go with a ground mount because our house did not point in a good direction and we have odd angles on the roof that limited the number of panels we could install. We have the space, but I was still surprised by the size of the installation. It is 88 feet long and about 10 feet high – that’s 3 rows by 16 panels with each panel 3.5’ by 5.5’.

You will probably have to get permits from your local municipality for the installation.

Solar Panels

If you take the cost of a new roof into account, the two types of installation cost about the same.

The Cost

Solar is not cheap. It is a significant investment, but I believe the payoff is worth it. That said, you do get help from the federal government and, depending on where you live, you may get money back from the state. The federal government gives you a tax credit of about 30% of the cost of the system. We live in Pennsylvania, so we also get a rebate from the state, based on the amount of kWh we generate, with a ceiling just over $17K. Even with this help you will probably have to cover the remainder of the cost with some sort of loan.
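As a rough sketch of the math: all the numbers here are hypothetical except the 30% federal rate and the $17K state ceiling mentioned above.

```javascript
// Hypothetical system price; only the 30% rate and the $17K ceiling
// come from the text above.
var systemCost = 50000;                          // made-up install price
var federalCredit = systemCost * 0.30;           // 30% federal tax credit
var stateRebate = 17000;                         // assume we hit the PA ceiling
var outOfPocket = systemCost - federalCredit - stateRebate;
```

On those made-up numbers, roughly a third of the price is left to finance, which is why a loan usually enters the picture.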

Paperwork, Paperwork, Paperwork

One thing we found out early is that there is a ton of paperwork involved in installing solar, and it doesn’t end when the system is turned on. You need to go over a site plan for where the panels will be placed, after someone comes out to look at possible locations and your latitude and longitude. There are more forms for the state rebate, which has to be approved before you can start installing (a wait of at least several months), plus forms for the electric company, the township, and the loans. And after all that is done and the panels are installed, there are forms for getting set up on one of the energy exchanges. From beginning to end it took us nine months to get everything done.

The Connections

I never liked the idea of being completely off the grid. We use too much power and all our appliances are designed to run on the grid. I also didn’t want to have to deal with batteries or to get a generator for when the sun just doesn’t produce enough to keep us going. So a system that lets us stay connected to the power grid was the logical way to go. This allows us to “sell” the excess power we generate during the day back to the power company.

Solar panels produce direct current (DC), so to use the power or send it back to the grid we needed an inverter. Our installation is actually two systems, so in our case we needed two inverters. This was all part of the installation, so we didn’t have to go out and investigate what to get, but we did have to pay for them. We have two Sunny Boy inverters installed; their displays cycle through messages with the total power generated, total carbon saved, and other interesting facts.

Sunny Boys

From the inverters, the lines run through a meter that measures how much total energy we make; this is used by the energy exchanges (more on that later). From there they run through another box with a meter on it: the connection between the grid and our panels. From there the power goes into the house. The meter on this box measures the amount of excess power going back into the grid.

When we built our house we had 400 amp service installed. In hindsight this was a good thing, because the panels generate 300 amps. Most houses have 200 amp service, so an extra piece of hardware would be required.

Obviously an electrician is needed to install this part of the system. A second electrician then needs to inspect it, and in our case the electric company did another inspection. It is a little frustrating to see the solar array sitting there ready to go while waiting on the final inspections; luckily the wait was only about two weeks.

The Exchange

The last piece of financial information has to do with Alternative Energy Credits (AECs), or Solar Renewable Energy Credits (SRECs). For each 1,000 kWh you generate, you earn one SREC; we expect to make about 12 SRECs in a year. SRECs can be placed on an exchange and sold. We are using Flett Exchange to sell ours. The price for SRECs in PA is down due to an oversupply from other states, but we can sell them in other locations. At current prices we should generate somewhere between $2,400 and $3,600 in SRECs per year.
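The arithmetic behind that estimate is simple; this sketch just encodes the one-SREC-per-1,000-kWh rule from above, with the $200–$300 price range implied by the $2,400–$3,600 figure:

```javascript
// One SREC per 1,000 kWh generated (partial credits don't count here).
function srecsEarned(kWhGenerated) {
    return Math.floor(kWhGenerated / 1000);
}

// Annual SREC revenue at a given market price per credit.
function srecRevenue(kWhGenerated, pricePerSrec) {
    return srecsEarned(kWhGenerated) * pricePerSrec;
}
```

So 12,000 kWh in a year at $200 to $300 per SREC works out to $2,400 to $3,600, matching the estimate above.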

Bottom Line

So far we have had the system running for just over a month and have gotten one electric bill. We don’t have electric heat, so the cold doesn’t matter. It is a little early to tell, but so far the bill is down about 30%, not counting the SRECs.

So is it worth it? I will know a lot more come summertime when the air conditioning is running but if the trend with the bill continues then yes. And although I’m not much of a tree hugger, there is also the feeling that I’m doing a little something for the planet. 

WebMatrix and SQL

September 26, 2010

I was looking into using WebMatrix to rebuild a website. This particular website connects to SQL Server. It’s very easy to connect a WebMatrix site to an .SDF database file: you simply use the Database.Open(sdffilename) command. But a SQL Server database isn’t just a file, so how do you connect to it?

One way is to use the Database.OpenConnectionString(connectionString) command, but then I would have to put the connection string in code on at least one page, and I have a problem hard-coding a connection string in a program.

The way I would do it with Web Forms is to create an entry in a web.config file. So I thought that if I added a connection through the WebMatrix Databases page, it would add the connection string to a web.config file and I could use it. I was wrong: adding a connection only gives access to the database; it doesn’t update any files or create a web.config file. So I created my own, looking like this:

<?xml version="1.0"?>
<configuration>
  <connectionStrings>
    <add name="dbname"
         connectionString="server=ServerDNS;database=databasename;uid=userid;pwd=Password"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>

So, according to the documentation, I should be able to use the name dbname in the OpenConnectionString method. Again I was mistaken.

It turns out that the Database.Open command works with a name from the web.config file, so the code was simply Database.Open("dbname");

This all worked fine and I was able to call a stored procedure (using the Query method). However, the records in my database had HTML codes in them and they were not displayed correctly: the data was being encoded before being displayed, so my <br> turned into &lt;br&gt;. This meant the <br> was actually shown on the page, not exactly what I had in mind. I was also using the WebGrid helper to display my data, which only made things more complicated.

Looking around the web I found a great article by Mike Brind called Looking At The WebMatrix WebGrid where he talked about some of the settings and options available for the webgrid. One of them was the format parameter for a column in a grid. Using this I was able to use HtmlString to output the data correctly. My Format parameter looked like this:
format: @<text>@(new HtmlString(item.notes))</text>

Fun with VHDs

November 29, 2009

Looking around, I found that VHD support inside Win7 is quite extensive; I didn’t realize how much. I also found a few commands you can run at the cmd prompt to help with VHDs.


First thing I found was that creating a VHD was very easy. You can use the Computer Management console to create a VHD using the GUI. Simply right click on Disk Management and the Create VHD menu item should be there. Clicking on it starts the dialog windows that will help you create a fixed or dynamic VHD.


For those of you who yearn for the days of DOS, you can also use a command at the cmd prompt. Go to an administrative command prompt and enter Diskpart. Diskpart is not just for VHDs, so be careful: you could do some damage to the partitions on your hard drive. Once you’ve started Diskpart, you can create a VHD by issuing the Create command: Create vdisk file="d:\blog.vhd" maximum=<max size in MB> type=<expandable|fixed>


This will create a VHD on the drive you specify with the name you specify; in the example above, I’ve created a VHD called blog.vhd on the D: drive. The maximum parameter tells the system the maximum size of the drive in MB, and type lets you create an expandable or fixed-size VHD.

Using the Diskpart command you can automate creation of VHDs if you need to. The VHD will be uninitialized and unattached.
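For example, the commands can be put in a script file and fed to Diskpart with its /s switch, which is one way to automate the process; the file name, path, and size here are just placeholders:

```
rem makevhd.txt - run with: diskpart /s makevhd.txt
create vdisk file="d:\blog.vhd" maximum=20480 type=expandable
select vdisk file="d:\blog.vhd"
attach vdisk
```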


Attaching a VHD lets you use it like a normal hard drive on your system. You can again right click on Disk Management inside Computer Management and select Attach VHD. You can enter the location and name of the VHD you want to attach, or browse for it. If the VHD has been initialized and formatted it will be assigned a drive letter; otherwise you will have to initialize the VHD and format it.

You can also use Diskpart to attach a virtual disk. First use the Select command to select the VHD, then attach the vdisk:

Select vdisk file=drive:vhdname.VHD
Attach vdisk

Once the VHD is attached it will show up in Computer Management.

But there’s more…

Attaching and creating VHDs was what I was expecting, but I didn’t realize you could install an OS to a VHD without using Windows Virtual PC. You can even install a 64-bit OS to a VHD. The trick is to use the Diskpart command while installing the OS. Let’s say you want to create a bootable VHD, one you could use to boot your physical machine. We’ll assume you have Windows 7 installed on your computer first. Now put in the DVD of the OS that you want to put on your VHD and boot from the DVD. When Windows gets to the first screen, drop to a command prompt (using Shift-F10) and use the Diskpart commands to create and attach a VHD. Once you’ve done this, exit the command prompt and continue with the installation. When you are asked to choose where to install the OS, you will see an entry for your VHD. Select it (you will get a message saying it can’t install there; ignore it) and continue with the installation.

Once the OS is installed you will have a multi-boot system. The first entry in your multi-boot menu will be the OS on the VHD. If you want to change the menu or add a preexisting VHD to your boot menu, you can use the BCDEdit command... yes, another command-prompt command. BCDEdit lets you modify what shows in your boot menu and also allows you to rearrange things if you want.

If you have a VHD that is bootable, you can use BCDEdit to install it into your boot menu. Unlike Diskpart, BCDEdit is a run-and-done command (like DIR). Running BCDEdit by itself presents a list of your current boot menu entries and the type and location of the OSs on your system. The easiest way to add a new entry to your boot menu is to copy an existing entry; when you copy an entry, BCDEdit creates a new one with a GUID. You use the assigned GUID to modify the new entry to point it at your VHD, using the Set option to point the OSDEVICE to your VHD. The command would look something like:
BCDEDIT /set {guid} osdevice vhd=[d:]\blog.vhd
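The full copy-then-set sequence looks roughly like this; the description string is arbitrary, and {guid} stands for the GUID that the copy step prints (both device and osdevice need to point at the VHD for it to boot):

```
bcdedit /copy {current} /d "Windows 7 from VHD"
bcdedit /set {guid} device vhd=[d:]\blog.vhd
bcdedit /set {guid} osdevice vhd=[d:]\blog.vhd
```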

You can get help with BCDEdit by passing in /? as the first parameter.

There is another way to work with your boot menu: the MSCONFIG command. It will not give you all the options that BCDEdit does, but you can select a default OS from it. One thing I’ve learned is that you should not go into MSCONFIG when you have booted into the VHD; every time I did, the system had problems booting back into my main OS.