Take the Roast Out of the Oven with Rock Framework

As the saying goes, there is no problem which can’t be solved by adding another layer of abstraction.  If you’ve ever sweated making a choice between two products—loggers like Loupe or Splunk, SimpleDB or DynamoDB, for instance—and one of the main drivers of making the “right” choice was the pain of switching, maybe you should have spent some time looking into a layer of abstraction.  Slow starts to projects are often due to paralysis-via-analysis.

A framework is just such a layer of abstraction.  Frameworks are well-designed sets of code which allow you to implement, or switch relatively easily between, different choices of the same thing.  Concerns about the “right” choice can be answered with “don’t sweat it, we’ll implement a factory so we can use any log provider, or even different log providers based on severity”, or “no sweat, we’ll encapsulate all our data calls in a data provider, so we just need to replace the one class if we switch databases”.  With the right layers in place, you’re liberated to try a few different options, easily implement the best tool for the job, and not worry too much about future changes.

Frameworks might be the layer of abstraction we need, but where does this usefulness come from?

Where Do Frameworks Come From?

We developers all start somewhere, and aside from prodigies, we all start at the level of “procedural code”.  We write big, long procedures that get the job done.  Very quickly we learn how to break chunks of code into methods, and then classes.  This is the basis of OOP, and it confers all the benefits OOP is known for.

Library abstraction comes from working with a number of similar applications, seeing commonalities between these applications, and creating a set of classes of only the commonly used code.  This set of classes is a library, and managing libraries in several applications creates problems while solving others.  The hassle of managing libraries is why NuGet, npm, pip and other “package managers” were created.  Libraries are usually tied closely to the set of applications they were developed for.

Near the top of this progression is framework abstraction.  Frameworks employ design patterns (such as provider, factory and abstract factory) which enable components to be very easily swapped around.  Frameworks aren’t supercharged libraries; they’re really meant to be super-generic libraries, encapsulating very common activities (such as logging) into a generic form.  Good applications will use one or more generic frameworks in addition to one or more libraries specific to that application set.
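To make that concrete, here’s a minimal sketch of the provider-plus-factory idea in C#.  Everything in it (ILogProvider, LogProviderFactory and the concrete providers) is a hypothetical illustration rather than Rock Framework code; the point is that the application writes against one interface, only the factory knows which concrete provider is in play, and swapping logging platforms (or routing by severity) touches a single spot.

using System;
using System.IO;

public enum Severity { Debug, Info, Error }

// The application codes against this interface only
public interface ILogProvider
{
    void Write(Severity severity, string message);
}

public class ConsoleLogProvider : ILogProvider
{
    public void Write(Severity severity, string message) =>
        Console.WriteLine($"[{severity}] {message}");
}

public class FileLogProvider : ILogProvider
{
    private readonly string _path;
    public FileLogProvider(string path) => _path = path;

    public void Write(Severity severity, string message) =>
        File.AppendAllText(_path, $"[{severity}] {message}{Environment.NewLine}");
}

public static class LogProviderFactory
{
    // The only place that knows which provider is in play; here errors go to a
    // durable file provider and everything else goes to the console.
    public static ILogProvider For(Severity severity) =>
        severity == Severity.Error
            ? (ILogProvider)new FileLogProvider("errors.log")
            : new ConsoleLogProvider();
}

Calling code never changes; LogProviderFactory.For(severity).Write(severity, message) keeps working no matter which provider the factory decides to hand back.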

I’ve illustrated this all with a handy-dandy PowerPoint Smart Art:


Note: there is no scientific basis for the above diagram, I totally made it up.  But I believe it to be as accurate as anything else I make up.  If you don’t know what “gunga galunga” means, see https://www.youtube.com/watch?v=TkLH56VlKT0.

Why Use a Framework?

Gaining the experience to develop a framework can take a lot of time, in addition to the time it takes to actually develop the framework.  Starting with an existing framework (especially an open source one) allows you to leverage common experiences (i.e., someone else already crossed the bridge you’re about to) and speeds your time to SOLID code.  Your application will implement best practices from the start, leading to faster maturity of your application.  The flexibility a framework provides sets you up for success by making change easy.

Using an existing framework means you’re participating in an ecosystem which welcomes contributions, and becoming a contributor moves you up a level or two on the pyramid above and helps ensure the longevity of the project.

Why Rock Framework?

The Rock Framework is literally “developed by dozens, used by hundreds”.  We use Rock Framework internally in hundreds of applications, and have open-sourced the parts we can share.  We have a saying at QuickenLoans—“take the roast out of the oven”.  It means don’t spend too much time thinking about a problem, it’s better to try some things out.  The Rock Framework gives us all the basic plumbing to easily try things out, plus some nice syntactic sugar we like to use in our applications.

Rock Framework is available as several NuGet packages, and the source code is hosted on GitHub, both of which you should access via http://rockframework.org/.  Here, I’ll describe the packages available now.  Other features and packages will be added in the future so be sure to refer to http://rockframework.org/ for the most up-to-date information.


Rock.Core

This is the base package for the Rock Framework, and is a dependency for the other RF modules.  It contains XSerializer (a non-contract XML serializer), a wrapper for Newtonsoft’s JSON.NET, a dependency injection container, a wrapper for hashing, a number of extension methods, and more.


Rock.Logging

This is probably the module with the most immediate use.  All logging methods are encapsulated, and there is a provider model with several interfaces for different types of log messages.  You’re encouraged to extend both your internal implementation as well as our repo with providers for popular logging platforms.


Rock.Messaging

If you’re planning to implement message queuing between applications (using MSMQ, RabbitMQ or named pipes, for example), this library contains message primitives as well as routers, parsers and locator classes to get a full-featured messaging system up and running in very little time.  If you use the interfaces, you’ll be able to easily swap providers while you’re taking some for a test drive.
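To show what coding against those interfaces buys you, here’s a tiny hypothetical sketch (these types are illustrative, not the actual Rock.Messaging API): if the application only ever talks to IMessageSender, the MSMQ, RabbitMQ or named-pipe implementation behind it can be swapped while you take each one for a test drive.

using System;
using System.Threading.Tasks;

// Hypothetical abstraction; a real implementation would wrap MSMQ, RabbitMQ,
// a named pipe, or whatever transport you're evaluating.
public interface IMessageSender
{
    Task SendAsync(string queueName, byte[] body);
}

// Stand-in implementation, handy for local testing before a transport is chosen.
public class ConsoleMessageSender : IMessageSender
{
    public Task SendAsync(string queueName, byte[] body)
    {
        Console.WriteLine($"[{queueName}] {body.Length} bytes queued");
        return Task.CompletedTask;
    }
}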


Rock.StaticDependencyInjection

Last, but not least, here is a DI framework which forms the basis for swappable parts in the Rock Framework libraries.  Applications have entry points (like Main()) where dependencies can be wired up when the application starts.  Libraries, on the other hand, don’t have entry points, meaning libraries need to be created and have values set in a constructor or other composition root by the application which uses the library.

Rock.StaticDependencyInjection enables libraries to automatically wire up their own dependencies, even with the ability to automatically find the proper implementation of an interface and inject that.
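To illustrate the concept (this sketch is not the Rock.StaticDependencyInjection API, just the general idea): a library with no entry point can lazily scan the loaded assemblies for an implementation of one of its interfaces and fall back to a built-in default if the host application never supplies one.

using System;
using System.Linq;

public interface IWidgetFormatter
{
    string Format(string widget);
}

public static class LibraryDefaults
{
    private static readonly Lazy<IWidgetFormatter> _formatter = new Lazy<IWidgetFormatter>(Discover);

    // The library uses this property internally; no composition root required.
    public static IWidgetFormatter Formatter => _formatter.Value;

    private static IWidgetFormatter Discover()
    {
        // Find the first concrete IWidgetFormatter in any loaded assembly
        // (assumes it has a public parameterless constructor); otherwise use the default.
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => { try { return a.GetTypes(); } catch { return Type.EmptyTypes; } })
            .FirstOrDefault(t => typeof(IWidgetFormatter).IsAssignableFrom(t)
                                 && !t.IsAbstract && !t.IsInterface
                                 && t != typeof(DefaultWidgetFormatter));

        return type != null
            ? (IWidgetFormatter)Activator.CreateInstance(type)
            : new DefaultWidgetFormatter();
    }

    private class DefaultWidgetFormatter : IWidgetFormatter
    {
        public string Format(string widget) => widget;
    }
}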


Get Involved with Rock Framework

This post has just been an overview of the Rock Framework.  There is more to come, both from myself and other members of the community.  Follow https://twitter.com/rockframework for announcements.  Even better, get involved!  As an open source project, Rock Framework has many needs:

  1. Contribute providers for your favorite logging tool
  2. Create an example
  3. Implement the framework in one of your projects
  4. Write or update documentation

In today’s market, there is no better way to level up your career than to contribute to open source projects like this.  We’re looking forward to working with all of you!

Posted in Open Source

Two New OSS Library Releases from QuickenLoans

Today is our annual Core Summit.  The teams led by Keith Elder are showing off what they’ve created for the rest of us to use, and Keith made a couple of really exciting announcements about some of our core libraries–QL Technology has now open-sourced two of the frameworks we use to build our amazing applications!  We’ve pulled out all the proprietary and internal-use-only bits, leaving all the core goodness so you can “engineer to amaze” as well.  While Keith is still attending to his duties at the summit, I thought I’d provide a small amount of clarity on what we’ve done.

One note: QuickenLoans hasn’t been part of Intuit since 2002, we just have a long term agreement to use the name.  Please don’t ask me about TurboTax, QuickBooks, etc.  However, if you need a mortgage, I’ll be more than happy to get you $500 back at closing and refer you to the best bankers in the business, just ping me.


Rock Framework

The name is a tip of our hats back to our original name, Rock Financial (in 1999, Intuit bought Rock Financial and rebranded it as QuickenLoans, then sold QL back to the original Rock Financial group in 2002).

Internally, we use RF for serialization, queue-based messaging, service creation, centralized logging (don’t see your favorite provider? please contribute!) and dependency injection, all of which are now open sourced.  We have a bunch of internal extensions which we won’t release, and you should build similar extensions for your own applications.  The Core, Logging, Messaging and DependencyInjection libraries are all available as separate NuGet packages, so you can pick and choose as you need.  DI deserves a special shout-out, since Brian Friesen has been speaking for years on DI and has created a wonderful library.  Brian’s XSerializer XML serializer (so flexible, such fast, much XML) and our JSON.NET wrapper also have their own packages.


Scales

QL has dozens of websites, all of which need to comply with our look-and-feel standards.  Based on holidays and promotional events, the look and feel may need to be updated throughout the year.  Yay, CSS updates!  For those, Dave Gillhespy developed Scales, which allows you to easily standardize your UI elements across your responsive web applications (be it one, or many).

Scales uses SASS as a CSS preprocessor, and it implements a number of best practices and simplifies a bunch of pain points.  Right now Scales is available as a Bower package, with NuGet and other options coming in the future.  If you’re interested, contribute themes and enhancements, and help resolve issues.

At QuickenLoans, we use a lot of OSS tools, and we’re committed to giving back to the OSS community.  You’ll see blog posts and conference sessions from many of us at QL Technology.  Meantime, follow the team members below for announcements and the latest info.  And if you’re really interested in engineering to amaze, let me know, at the time of writing we have a lot of open positions.


Posted in Open Source

Blinking an LED with Raspberry Pi 2 and C# Mono

This should work with either a Raspberry Pi B+ or a Raspberry Pi 2.  The B+ and the 2 are identical, save for the faster processor and increased RAM on the Pi 2.  I’m assuming you’ve gone through the setup and can boot to a command prompt or the GUI, and are using the Raspbian distro.  For most of this post, you’ll need the command line to install the different libraries, although Monodevelop is a graphical IDE.  We have to use an older version of Monodevelop (3.x), but it’s good enough.

I have a CanaKit Raspberry Pi 2 Ultimate Starter Kit, which includes a nice breadboard and pinout connector, but greatly lacks for manuals.  This made it really tough for me to get started.  As I found out later, the pinouts are the same as other connectors, so their examples will work also.  The CanaKit does have the nice extra sets of 3.3V and 5V pinouts, which should come in handy for some uses.  Overall it’s a great kit, and I’m glad I bought it, and I hope this post helps others in the same situation.

The flashing LED is the Hello, world of GPIO (General Purpose Input Output), but it’s still pretty exciting the first time the light flashes.  Here’s how I got the LED to flash with C# and Mono.

Step 1: Install Mono and Monodevelop

At the command line, issue the following commands

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install mono-complete

sudo apt-get install monodevelop

Update is used to update all of the package sources for Raspbian, and upgrade brings all your installed packages to their latest versions.  The  first install command installs just the mono runtime, and the second one installs the actual IDE.  You can develop Mono without Monodevelop, but the IDE makes life easier.  Collectively these commands install a lot of stuff, so this all could take several minutes to run. Apt-get is an application/package manager, and is part of the inspiration for nuget and chocolatey.

Once this is done, open Monodevelop and make sure it starts.

Step 2: Add NuGet to Monodevelop

NuGet, if you don’t know already, is a package manager for .NET.  It makes adding and maintaining dependencies much easier.  The dependencies we need are hosted on nuget.org.  The API which Monodevelop will use to search and retrieve packages is served over HTTPS, so we need to update the certificate store:

mozroots --import --sync

Next, install the NuGet add-in by following the instructions at https://github.com/mrward/monodevelop-nuget-addin for Monodevelop 3.0.  This will now allow you to add NuGet references to solutions.

Step 3: Write the program

Start by opening Monodevelop and creating a new project.  If you’re familiar with Visual Studio, this will seem very familiar.  Name your project whatever you want.

In order to access the GPIO pins of the Raspberry Pi in C#, we’ll use the Raspberry.IO.GeneralPurpose library.  We’ll reference this package from nuget by right-clicking the References node, and choosing Manage Nuget Packages (see below; if you don’t see this option, something went wrong in Step 2, look back and make sure you followed the installation completely).


In the Manage Packages window, search for Raspberry, select Raspberry.IO.GeneralPurpose and click Add.


The code sample we’ll use is based on the example at https://github.com/raspberry-sharp/raspberry-sharp-io/wiki/Raspberry.IO.GeneralPurpose.  Since there are two ways to number the GPIO pins (physical numbering, and CPU address), and since only some pins are actual I/O, it can be a little confusing when coding and wiring.  Sticking to the physical pin numbering is probably easiest, and your connector board should have shipped with a decoder card which shows the pins.  If not, most boards have the same numbering, so anyone’s should do.  For more details, see Appendix 1 at http://www.raspberrypi.org/documentation/usage/gpio/.  Raspberry.IO.GeneralPurpose limits us to addressing only the I/O pins, so that can be a useful guide, too.

Below is the complete main.cs for our project.  If you’re copying and pasting, don’t forget to change the namespace to match your solution.

using System;
using Raspberry.IO.GeneralPurpose;
using Raspberry.IO.GeneralPurpose.Behaviors;

namespace blinky
{
	class MainClass
	{
		public static void Main (string[] args)
		{
			// Here we create a variable to address a specific pin for output
			// There are two different ways of numbering pins--the physical numbering, and the CPU number
			// "P1Pinxx" refers to the physical numbering, and ranges from P1Pin01-P1Pin40
			var led1 = ConnectorPin.P1Pin07.Output();

			// Here we create a connection to the pin we instantiated above
			var connection = new GpioConnection(led1);

			for (var i = 0; i < 100; i++) {
				// Toggle() switches the high/low (on/off) status of the pin
				connection.Toggle(led1);
				System.Threading.Thread.Sleep(250);
			}

			// Release the pin when we're done
			connection.Close();
		}
	}
}

Step 4: Wire the breadboard

Do this part with your Pi turned off and the power disconnected!  Also, touch some metal to get rid of any static electricity you’ve built up.  Here’s what you’ll need:

  • LED
  • 220-ish Ohm resistor
  • Two jumper wires, preferably different colors

The resistor is needed as a precaution, so we don’t accidentally burn out a pin.  A Raspberry Pi is capable of producing output currents greater than its inputs can handle.  Normally, a bunch of things like LEDs and other peripherals wired together will use enough current that it won’t matter, but for this simple task it’s better to be safe than sorry.  My kit has 220 Ohm resistors; yours may have different ones, just as long as you have something in the same range.  For a great explanation and refresher on resistors, watch https://www.youtube.com/watch?v=UApKArED3JU.  The whole video is 3:23 but explains what I’ve just said even better and shows you how to calculate and read a resistor.  My CanaKit also included a nice decoder card for reading resistor codes.  If you don’t have a card, check out http://en.wikipedia.org/wiki/Electronic_color_code.
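If you want to sanity-check the value yourself (these are my rough numbers, not from the kit documentation): the GPIO pin puts out about 3.3 V, a typical red LED drops roughly 2 V, so the resistor sees about 1.3 V.  Ohm’s law gives I = V / R = 1.3 V / 220 Ω ≈ 6 mA, comfortably under the roughly 16 mA per pin that’s commonly cited as the safe limit for the Pi.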

The jumper wires are so you can move the stuff to a different part of the breadboard.  You could probably bend and twist the LED and resistor so you can directly wire the components, but using jumpers allows you to spread out a little more on another part of the board.  If you want a little background on breadboards, here’s a 6-minute video: https://www.youtube.com/watch?v=q_Q5s9AhCR0.

Here’s a photo of my board, and the wiring steps.  Do this while the board is not connected to the Pi.

  1. The red wire runs from Pin 7 (GPIO 4) to an empty row on the breadboard
  2. The resistor connects the red row to another empty row.  Make sure to orient the resistor correctly.
  3. The long end of the LED is in the same row as the output end of the resistor.  The short end of the LED is in yet another empty row.
  4. The white wire connects the end of the LED to Pin 6 (GPIO GND), but you can use any GND.


Step 5: Run the program!

Connect the breadboard to the Pi, boot up to a command prompt, and change directories until you’re in the same folder as your .EXE (remember Linux paths are case sensitive).  Access to the GPIO pins requires superuser level, so you’ll need to run the binaries from the command line, using sudo:

sudo ./blinky.exe

You should be good to go!



A book I found to be very helpful is Make: Getting Started with Raspberry Pi.  Highly recommended if you don’t have a book already.

I owe a huge debt of gratitude to the authors of the blog posts listed below, in addition to any links above.  I am very lucky people more knowledgeable than I am are paving the way for my curiosity.






Posted in Raspberry Pi

NoSQL Datastores of Interest to the .NET Developer

(apologies for the crappy layout, this is a stock theme, I suck as a WordPress themer so I’m looking for one which will display better)

The world of NoSQL is vast, and this is in no way a comprehensive list of NoSQL datastores (just see http://en.wikipedia.org/wiki/NoSQL for how vast the NoSQL universe is).  After spending a lot of time researching, I’ve narrowed this down to datastores that have officially supported C# libraries, or are written in .NET.  I thought it would be a little easier to start learning the ins and outs of the different types of datastores without having to learn new languages as well.  Pretty much every NoSQL datastore has some sort of RESTful API, which you can work with regardless of your language choice.  Don’t let the presence or absence of a system on this list make your decision for your application choices–this is more a list of systems I think would be fairly simple and interesting to experiment with.  I have not yet worked with all of these datastores, but as I do I’ll add posts to this blog.

Every category of NoSQL is designed to solve a particular problem, and each option in each category has its ups and downs.  There are many options, so you really need to know what you want to do.  Do you need an embedded solution, or a scalable cluster?  Are you trying to discover relations between populations, store profile data in a flexible schema, or cache the results of API requests?  What types of indexing are supported?  Can you live with eventual consistency?

I tried to note some easy and low cost ways to get started with each datastore.  If there isn’t direct DBaaS support, you can always deploy Azure or AWS VMs, and even the Google Cloud Platform has some hosting options.  Try not to install everything on your local machine, but have fun!

Grouped by category, with the .NET support noted for each:

Graph

  • neo4j (.NET support: http://neo4j.com/developer/dotnet/).  Neo4j is perhaps the best-known graph database.  Although the clients are community supported, they are maintained by two amazing developers.  It’s easy to get started, especially since GrapheneDB offers a free “Hobby” account.  There is also a simplified Azure VM deployment.
  • Titan (.NET support: none).  This is on the list as “something to watch”: Datastax (Cassandra) recently acquired the company behind Titan, and Datastax has a good history of .NET support.
  • OrientDB (.NET support: https://github.com/orientechnologies/OrientDB-NET.binary).  There are also community-supported clients.  OrientDB is a hybrid datastore, supporting both document and graph features.
  • VelocityGraph (written in C#).  An open source hybrid (graph/document) datastore, written in C#, which can be embedded or distributed.  There is a paid model for the distributed version also.

Document

  • MongoDB (official and community clients listed at http://docs.mongodb.org/ecosystem/drivers/csharp/).  One of the best-known and most-used document datastores, MongoDB is backed by 10gen, and mongolab.com offers a free sandbox account to get started.  MongoDB and MongoLab are available via the Azure Marketplace, so you can spend free Azure credits if you have them.
  • Azure DocumentDB (it’s Microsoft, no worries).  This is still in Preview at the time of writing, but it looks very promising.  Being Azure, if you have Azure credits you can spend them on this.  It’s another DBaaS, so you won’t need to mess with VMs.
  • Amazon DynamoDB (.NET SDK: http://aws.amazon.com/sdk-for-net/).  Another high-performance DBaaS datastore, DynamoDB supports both document and key-value modes.  This is included in AWS’s “free tier” for a year.
  • OrientDB and VelocityGraph (see the Graph category above) are hybrid datastores which also support document features.
  • CouchDB (.NET support: https://wiki.apache.org/couchdb/Getting_started_with_C%23).  Another popular datastore, this is the open source project from Apache.  It is supported by Couchbase (see below).
  • Couchbase (.NET SDK: http://docs.couchbase.com/couchbase-sdk-net-1.2/).  A “next generation” of CouchDB, in a way (see http://www.couchbase.com/couchbase-vs-couchdb).  Has both open source and commercial licenses.  Popular as an in-memory cache with some big-name companies.  There is an Azure VM image in the Azure VM Depot.
  • RavenDB (written in .NET).  Open source and commercial licenses.  Has embedded and scalable server options.  A RavenHQ hosted plan is available through the Azure Marketplace.
  • NinjaDB Pro (written in .NET).  A commercial, embeddable document datastore which is also compatible with Xamarin.  Supports either document or relational modes.  There is also a version for WinRT.
  • NDatabase (written in .NET).  An open-source in-memory object database.

Big Table

  • Cassandra (official driver from Datastax).  Datastax is the company supporting Planet Cassandra and largely supporting the Apache Cassandra project, and it provides the commercial licensing for Cassandra.  Cassandra is very similar to HBase, but because of Datastax’s backing it is the better choice, IMO.  Cassandra can be run on Azure (DataStax Guidance for Azure), and there is an older VM available from the Azure VM Depot.  As part of their wonderful “Succinctly” e-book series, Syncfusion also has Cassandra Succinctly.
  • Apache HBase (community SDKs are just wrappers for the REST API; most of the .NET SDKs you’ll find are for HDInsight and aren’t guaranteed to work with Apache HBase).  I really just put this here for comparison purposes.  HBase can be a real pain.  Seriously, look at Cassandra or HDInsight instead.
  • HDInsight (it’s Microsoft, no worries).  HDInsight covers a lot of the Hadoop ecosystem; the HBase-specific bits are introduced at http://azure.microsoft.com/en-us/documentation/articles/hdinsight-hbase-overview/.

Key-Value

  • Couchbase (see the Document category above).  Also a popular key-value store and in-memory cache.
  • Redis (a number of community-supported clients are listed at http://redis.io/clients#c; two of the more popular ones are from ServiceStack and StackExchange).  A very popular choice as a cache layer.  Durable persistence isn’t its strong suit, and multiple-node sharding is only in beta.  You can add Redis to Azure from the Azure Marketplace.
  • Azure Tables (it’s Microsoft, no worries).  Perhaps one of the top choices, especially if the rest of your application is on Azure.  Crazy scalable and very performant.  Table storage was one of the original features of Azure, and is very well vetted by now.
  • Amazon DynamoDB (see the Document category above).  DynamoDB also supports a key-value mode.
  • Riak (.NET client: https://github.com/basho-labs/riak-dotnet-client/wiki).  An open-source distributed datastore.  There is also a commercial offering.  An Azure VM image is available via the Azure VM Depot.
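To give a flavor of how little ceremony some of these require, here’s a minimal sketch of talking to MongoDB from C# using the official MongoDB.Driver NuGet package (2.x-style API); the connection string, database and collection names are placeholders you’d replace with values from your own sandbox account.

using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

class MongoQuickStart
{
    static async Task Main()
    {
        // Placeholder connection string; use the one from your sandbox account
        var client = new MongoClient("mongodb://user:password@example-host:27017/sandbox");
        var database = client.GetDatabase("sandbox");
        var profiles = database.GetCollection<BsonDocument>("profiles");

        // Documents are schemaless, so fields can vary from document to document
        await profiles.InsertOneAsync(new BsonDocument
        {
            { "name", "test user" },
            { "interests", new BsonArray { "NoSQL", ".NET" } }
        });

        var found = await profiles.Find(new BsonDocument("name", "test user")).FirstOrDefaultAsync();
        Console.WriteLine(found);
    }
}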
Posted in NoSQL

Hosting Without Limits: A Review of 1&1 Internet’s Unlimited Package

Back in the days before the ‘tubes, when all items were analog and knowledge printed, “unlimited” used to mean there were no limits.  Cable companies and cell phone providers have tortured the definition into one of fine print, term limits, caps and rate increases.  When I was asked to review 1&1 Internet’s Unlimited Package (tagline: “Hosting without limits”), I entered into it with a high amount of trepidation, worried I might be reviewing “Crazy Eddie’s House of Electrons (and fine print)”.  Happily, that was not the case, and I was pleasantly surprised by the ease of use, features, and price.

Earlier in my career, I developed a number of websites for small businesses and individuals. Along the way, I found myself wondering how “normal” (i.e., non-technical) people could manage. The truth was, between the complexities of hosting and the knowledge needed to build a site, they just didn’t, and that’s why they called me. Even pre-built applications meant matching requirements to the offerings of a particular host. As IT professionals, we take a lot of work upon ourselves because it’s just easier to do it ourselves than it is to explain how to do something.

Some hosting companies strive to improve the hosting experience, and over the past 27 years, 1&1 has become one of the most successful and popular hosting companies in the world by offering simple, inexpensive ($0.99/month for three months, then $8.99/month thereafter) and feature-rich hosting plans. In fact, 1&1 has over 13.5 million customer contracts and the group has more than 19 million domains registered. After working off and on for a month with my trial account, I can say this may be the hosting I could help set up for my parents or in-laws, and they could manage most things themselves.

According to the company, 1&1 has over 7,000 employees and 70,000 servers in seven data centers around the world. Beyond website hosting, 1&1 also offers domain registrations, email hosting and GeoTrust SSL certificates. This company is far more than you first imagine.

1&1’s hosting plans are all-inclusive (with unlimited space, unlimited bandwidth and unlimited sites—check that out, really unlimited!) and also include a free domain name registration. These plans can be either Windows or Linux, with MS SQL Server and MySQL as database options. Linux hosting supports PHP, Perl, Python, Ruby and Zend, while Windows hosting supports PHP, .NET and Perl. These are just some of the features that appeal to the more technical user.

From the moment you sign up and first log in, you can tell right away that 1&1 is also reaching out to the less technical users. Each time you log in, a helpful tip about some aspect of your plan is displayed. 1&1 has built a custom control panel, which gives you full control over all of your services in a very simple way. If a company pays this much attention to how you interact with its services, it’s a good sign they’re paying attention to the other details of their business.


All new accounts are given a temporary URL so that you can begin development right away. You can upload your own code via FTP or SSH (WebDeploy does not appear to be an option), choose an application from the 1&1 App Center, or you can build a custom website using the 1&1 Website Builder and 1&1 Mobile Website Builder tools.

If you want an easy way to get a site online, and don’t mind starting from one of over 50 highly customizable layouts, the website builder tools are a great option. These tools are aimed at the Wix and SquareSpace crowd and provide an easy way to build a multi-page static website without having to know HTML, CSS or JavaScript (although you can edit these if you want to). The process is very simple: choose a page layout, select a color scheme, font and background, then add content. Create as many pages as you need, and tweak the HTML or CSS as needed. If you find you need help with the website builders, there is a Contact button right on the menu. Live chat and phone help are available 24/7. I found the Website Builder to be simple, intuitive and complete. Again, “here you go older generation and former clients, I’ll help get you started, but you can make this happen.”

If you need a more dynamic site, or a blog, CMS or shopping cart, the 1&1 App Center features over 140 of the most popular applications, including WordPress, Joomla, Drupal and phpBB. The 1&1 App Center is a great example of how much effort 1&1 has put into simplifying the hosting experience. Every application has a detailed information page (see below), and installation requires only a couple clicks and minimal information.


Applications can be installed in “Safe Mode”, which is a default install of the application for which 1&1 handles all the updates and patches, or you can install an application in “Free Mode”, which gives you more flexibility, but you’re responsible for updates and patches. The Safe Mode installer is simpler than the Free Mode installer (shown below for Joomla), but if you plan on adding themes or plugins to your application, you’ll need to use Free mode. You can change a site from Safe Mode to Free Mode, but you cannot change from Free Mode to Safe Mode.


Regardless of how you built your site, once it’s online, it’s replicated across multiple data centers. This geo-redundancy provides failover protection, but not load balancing. Also, since the replication is nearly instant, whatever you do to destroy your main site will affect the failover before you can dial support. Fortunately, 1&1 also has daily backups of your site and data. To aid performance and security, 1&1 offers a CDN with CloudFlare traffic monitoring and Railgun™ caching, plus storage and global distribution of large libraries.

Getting yourself online is just the start. To monitor your site’s traffic, 1&1 has their own site analytics package which reports on referring sites, search engine terms and more. Additionally, there are tools to create sitemaps for Google Analytics. To keep your customers engaged, 1&1 also offers email marketing tools. Considering the cost of most email marketing services, this makes the hosting fee even more of a bargain.

For a heck of a lot of sites, I think 1&1 would be a great solution, even for the more technical crowd. It’s obvious the services and control panel are very well thought out, and the services offered are feature-rich and a great value. It was pretty hard to find something to criticize, other than that this didn’t exist 15 years ago. More advanced sites may miss uptime monitoring tools or load balancing, but anything with those needs is probably looking at a different place in the market. Although simple enough for a non-technical user, there are enough features to interest a more technical crowd, especially if you have multiple domains and are paying more than $8.99/month total.

Posted in Reviews

Transactional vs. ODS Talking Points

When considering implementing an operational data store (ODS), discussion always includes the differences between an ODS and a transactional database.  Transactional databases store the data for an application.  An ODS’s purpose is to consolidate data from one or more transactional systems, to serve as a source of master data, for reporting, or as one source for a data warehouse.  While the purposes are pretty clear, how they differ at the design level is less clear.  Here are the talking points I’ve used in the past to describe the differences.

Transactional databases

  • are optimized for write performance and ensuring consistency of data
  • mainly inserts and updates, no table rebuilds
  • low level of indexing, mainly primary keys and the lookups needed
  • high use of foreign keys
  • use of history and archive tables for no-longer-current data
  • index and data fragmentation are a concern due to updates, and maintenance jobs need to be utilized
  • data are normalized
  • but, frequently updated data are often separated from less frequently updated data to reduce table fragmentation
  • data are raw

Operational data stores

  • are optimized for reads
  • mainly inserts and table rebuilds via ETL from transactional systems, few updates
  • high level of indexing to support querying
  • low use of foreign keys, since relations are maintained in the transactional databases
  • no history or archive tables–ODSs are for current data
  • low level of normalization, since updates are usually on the same schedule and in a batch process
  • data are sometimes calculated or rolled-up (rather than saving a birthdate, use a demographic age)
  • data may be bucketed

Exactly when to use an ODS and how the schema is designed is a discussion about balancing data duplication vs application architecture.

The update schedule of an ODS is determined partly by the needs of the ODS data consumers, and partly by what the transactional databases can tolerate.  Usually ODS updates are a batch job which runs once or several times a day.  For more frequent updates, commanding could be used.

Posted in Database Design

Saving Windows RT

I consider the release of Windows RT to the consumer market to be one of the worst decisions Microsoft has made in recent years, and I have an $853MM writedown to back me up.  RT shipped primarily on the Surface RT, which isn’t an attractive personal device—it’s small, relatively costly, difficult to connect to the usual suite of peripherals and doesn’t sit well in your lap.  Additionally, here was a version of Windows which wouldn’t run any previous Windows program.  Consumers were used to getting a new computer with a new version of Windows and simply reinstalling their favorite old greeting card maker or photo editor.  Months later, when Windows 8 was released, confusion multiplied—now there were two versions of Windows—a “right one” and a “wrong one”, and your average consumer couldn’t tell the difference by looking.  Consumers literally needed someone with technical knowledge to tell the devices apart.  Add to that an app store which had few desirable apps, and it’s no wonder interest in RT was really low.  The release of the Surface 3 running only Windows 8 puts the future of RT into even greater doubt.

Having said that, RT could still be one of the greatest versions of Windows of all time.  How?  Improve the concept of enterprise application stores, and make RT the next Windows Embedded.  It’s not as crazy as it sounds.  I’ve helped manage installations of WinTerms for sales teams, and hundreds of handheld and lift mount devices in multiple warehouses, and this idea is a bit of a dream come true.

Windows 8 ships with a hard-coded attachment to the Microsoft store.  Make it simpler for enterprises to set up their own internal app store, and control the store setting via group policy.  Enterprises could easily distribute their in-house apps, or those supplied by ERP/WMS/etc vendors to the issued devices.  At a previous employer—a warehousing company—we had to manage hundreds of devices in multiple warehouses around the country.  We had to have someone onsite manually dock each one, and we had to go through a complicated set of steps to update the wimpy onboard apps.  If we could have posted an updated app on our internal store and have every device update itself automatically in seconds, that would have been a dream come true.  Intermec and Symbol should be all over this idea.

Take this one step further.  Remember the fires in the Tesla Model S?  A software fix to correct how the car rides at freeway speed was downloaded to all the Model Ss.  Now imagine Ford replacing Sync with RT, and being able to do the same for control or entertainment systems.  Speaking of entertainment systems, keep the linkage to the movie and music stores so movies can be downloaded while parked at a McDonald’s.  The capabilities in RT would put Ford years ahead of its competitors in regards to onboard systems.  This could be extended into on-board systems for trucks as well.

Take this one more step.  Imagine battlefield updates to combat systems, downloaded via AWACs or properly equipped drones from a secure DOD app store.  It’s not too far-fetched.

Vehicles and warehouse equipment alone offer the potential of millions of devices running RT.  By looking at RT as a new Windows Embedded, Microsoft thinks big by thinking small.

Posted in Uncategorized

An Updated Simple Passphrase Generator

(Note: The original version of this work is published at http://aspalliance.com/703_A_Simple_Passphrase_Generator.all, this is a long overdue update)

Just about 9 years ago I was building a partner-facing reporting website, and I needed a way to generate passwords when partners were added by customer service (no public registration) as well as to generate new ones easily when a password needed to be reset.  I wanted to generate a passphrase, which is usually easier to remember than a random string of gibberish.  Some of the “more experienced” among us will recognize this format as AOL-style passwords, which were printed on the 3.5” floppies we received in the mail or our PC magazines of the day.

In 2014, we have nearly a decade of breaches and crappy passwords being stolen.  Even today, weak and obvious passwords are some of the most popular choices.  My hope is passphrases may become more of a standard, but I doubt it.  Some of the text of the original article is republished below; some of the links are broken, and I’ve replaced them where I could find a suitable alternative.

You can find the updated code in my BitBucket Git repo, at https://bitbucket.org/rjdudley/passphrase.  It’s pretty simple—one library project, a few tests in another project, and a console app to display the passphrases.  Use the library wherever it will run if you so wish, or fire up the console app anytime you need a good passphrase yourself.

Why Passphrases?

Perhaps first we should ask “What is a passphrase?”  Wikipedia may say it best:

A passphrase is a collection of ‘words’ used for access control, typically used to gain access to a computer system.

Passphrases were first proposed in 1981 by Sigmund Porter. Passphrases are distinguished from passwords by their virtue of being comprised of several words separated by spaces. Passphrases can satisfy even stringent security requirements, while being easier for the users to remember (http://technet.microsoft.com/library/cc512613.aspx). It’s this combination of complexity and ease of remembrance that make passphrases a good part of a password policy.

Our decision to use passphrases included another reason. By using passphrases when a user’s account is set up, we hoped to set an example to our users to use passphrases as well. We hoped that users would follow our example and choose passphrases they could remember easily, and that would be more than their dog’s name concatenated with a number 1. As a precedent, I cited that AOL has for years used multiple word passphrases as the login associated with all those floppies and CDs they send out. PGP and its variants also require using secure passphrases as your private key.

Recommended Passphrase Best Practices

With the intrinsic strength of some of the modern encryption, authentication, and message digest algorithms such as RSA, MD5, SHS and IDEA the user password or phrase is becoming more and more the focus of vulnerability. (http://www.totse.com/en/privacy/encryption/passch.html)

Strong passphrases are only one part of a comprehensive security policy. For additional security, you should include other best practices in your application’s login components. Microsoft makes a number of recommendations for Windows networks which are also applicable for ASP.NET applications (http://technet.microsoft.com/library/cc162924). These recommendations include:

  • Enforce strong passwords
  • Ensure regular password changes
  • Maintain a history to prevent immediate reuse
  • Lock out accounts after a certain number of failed attempts

In a very good series of articles, Jesper Johansson reiterates many of these recommendations (http://technet.microsoft.com/library/cc512624.aspx), but disagrees about using account lockout policies. Several myths surrounding Windows passwords are addressed by Mark Burnett (http://online.securityfocus.com/infocus/1554/), and although focused on Windows passwords, some of the information is also applicable to ASP.NET applications. Designing a component that includes these recommendations is beyond the scope of this article, but you should familiarize yourself with these recommendations and incorporate the pertinent ones into your application.

Generating Passphrases

FAQ: How do I choose a good password or phrase?

ANS: Shocking nonsense makes the most sense. (http://virtualschool.edu/mon/Crypto/PGPPassPhraseFAQ.html)

There are a number of methods for generating passwords and passphrases. In this article, we’ll modify a method known as Diceware (http://world.std.com/~reinhold/diceware.html). This method consists of a numbered word list and five dice. Each word is assigned a 5-digit number, with only numbers 1-6 at each position, and covering every combination of numbers. The five dice are rolled, and the numbers are read from each face to form a 5-digit number. This number is cross-referenced with a word in the word list, which is then the first word in the passphrase. This process is repeated until the requisite length or number of words has been reached.

Instead of rolling dice, we’ll use a pseudo-random number generator to simulate the dice rolls. To make cross-referencing easier, we’ll use the original word list loaded as a Dictionary object.  This version of the generator uses RNGCryptoServiceProvider to simulate the rolls of the dice.
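Here’s a minimal sketch of that approach (the repo linked above has the full version; the names here are simplified).  It assumes the word list has been loaded into a Dictionary keyed by the five-digit roll (e.g. "31416" maps to some word, and every combination of the digits 1-6 is present), with RNGCryptoServiceProvider standing in for the physical dice.

using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public class PassphraseGenerator
{
    private readonly Dictionary<string, string> _wordList;
    private readonly RNGCryptoServiceProvider _rng = new RNGCryptoServiceProvider();

    public PassphraseGenerator(Dictionary<string, string> wordList)
    {
        _wordList = wordList;   // keys are 5-digit strings like "31416"
    }

    public string Generate(int wordCount)
    {
        var phrase = new StringBuilder();
        for (var word = 0; word < wordCount; word++)
        {
            var key = new StringBuilder();
            for (var die = 0; die < 5; die++)
            {
                key.Append(RollDie());   // each digit is a simulated roll of 1-6
            }
            phrase.Append(_wordList[key.ToString()]);
            if (word < wordCount - 1) phrase.Append(' ');
        }
        return phrase.ToString();
    }

    private int RollDie()
    {
        // Rejection sampling keeps the 1-6 result unbiased:
        // 252 = 6 * 42 is the largest multiple of 6 below 256
        var buffer = new byte[1];
        do
        {
            _rng.GetBytes(buffer);
        } while (buffer[0] >= 252);
        return (buffer[0] % 6) + 1;
    }
}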

Posted in Security

A Simple Unit Test Script for SQL Server

As we develop applications, it often makes sense to put some functions in a SQL CLR assembly.  As these assemblies are deployed through test, beta and production environments, we need an easy way to ensure the assemblies have been updated correctly and are functioning as designed.  Many things can go wrong—incorrect permissions, deploying the wrong version of an assembly, even assumptions we made in the business logic.  I’m a big fan of RedGate’s SQL Test plugin—it’s an attractive SSMS plugin which organizes and runs tests via an easy-to-read panel—and of tSQLt (a unit testing framework for SQL Server), which SQL Test is based on.  My next post will recreate the testing strategy shown here using SQL Test.

One downside to tSQLt is that you have to install a number of stored procedures and functions into your database.  Depending on your choice of database, permissions in your database and willingness to add additional objects to your database which aren’t directly related to the data, using tSQLt may not be an option.  Although we’re testing a user-defined function here, we could also test stored procedures and SQLCLR assemblies.  This technique is also not limited to SQL Server—I use this in both SQL Server and VistaDB.  As you build out your application, you may want to consider adding diagnostic pages (or screens) where you can execute these testing procedures should your application start throwing errors.

The function we’ll be testing will be simple—we’ll pass in an nvarchar, and return the same nvarchar with the word “wombat” added to the end.

Start by creating a custom function:

CREATE FUNCTION dbo.AddWombat ( @phrase NVARCHAR(50) )
RETURNS NVARCHAR(60) AS
BEGIN
	RETURN @phrase + ' wombat'
END

Now it’s time for the test harness.  I’m using VistaDB, creating a stored procedure and using table variables, syntax supported by more recent versions of SQL Server.  You may need to optimize for the database you’re using.  The first table holds our test cases—the test name, the value we want to pass into our function, and the expected result.  We then loop through all the tests, calling the function we want to test, and updating the result.  Our result comparison is very simple, similar to an Assert.AreEqual.  With a little more work, additional test types could be added, and the test indicated in the test case.

create procedure TestAddWombat as

DECLARE @testcases TABLE (
	TestNumber INT IDENTITY(1,1),
	TestName NVARCHAR(100),
	TestCase NVARCHAR(100),
	Expected NVARCHAR(100),
	Result NVARCHAR(20));

DECLARE @testname VARCHAR(100);
DECLARE @testcase NVARCHAR(MAX);
DECLARE @expected NVARCHAR(MAX);
DECLARE @result NVARCHAR(MAX);

DECLARE @testCaseCount INT;
DECLARE @counter INT;

SET @counter = 1;

INSERT INTO @testcases
( TestName ,
  TestCase ,
  Expected )
VALUES ( N'should_return_hello_wombat' , -- TestName - nvarchar(100)
  N'Hello, ' , -- TestCase - nvarchar(100)
  N'Hello, wombat' -- Expected - nvarchar(100)
);

INSERT INTO @testcases
( TestName ,
  TestCase ,
  Expected )
VALUES ( N'should_return_null' , -- TestName - nvarchar(100)
  NULL , -- TestCase - nvarchar(100)
  NULL -- Expected - nvarchar(100)
);

SELECT @testCaseCount = COUNT(*) FROM @testcases;

WHILE @counter <= @testCaseCount
BEGIN

	SELECT @testname = TestName,
		@expected = Expected,
		@testcase = TestCase
	FROM @testcases
	WHERE TestNumber = @counter;

	-- Call the function under test with the current test case
	SELECT @result = dbo.AddWombat(@testcase);

	-- Simple equality assertion, with NULL handled separately
	IF (@expected IS NULL)
		IF (@result IS NULL)
			UPDATE @testcases SET Result = 'True' WHERE TestNumber = @counter
		ELSE
			UPDATE @testcases SET Result = 'False' WHERE TestNumber = @counter
	ELSE
		IF (@expected = @result)
			UPDATE @testcases SET Result = 'True' WHERE TestNumber = @counter
		ELSE
			UPDATE @testcases SET Result = 'False' WHERE TestNumber = @counter

	SET @counter = @counter + 1;

END

SELECT * FROM @testcases




Running this sproc returns the following results:




From here, we have a very basic test harness to which we can easily add test cases, and which is easy to extend.  TDD is a best practice in any aspect of application development, but database development is often not treated as application development, and it is far behind in adopting practices common to C# developers.  This script is a simple way to introduce some TDD into database development.
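As an example of the diagnostic-page idea mentioned earlier, here’s a hedged C# sketch which calls the test procedure and flags any failing case; the connection string is a placeholder, and TestAddWombat is the procedure defined above.

using System;
using System.Data;
using System.Data.SqlClient;

public static class DatabaseDiagnostics
{
    // Returns true only if every test case in the harness reports 'True'
    public static bool AllTestsPass(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("TestAddWombat", connection) { CommandType = CommandType.StoredProcedure })
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    if (!string.Equals(reader["Result"] as string, "True", StringComparison.OrdinalIgnoreCase))
                    {
                        Console.WriteLine($"Failed: {reader["TestName"]}");
                        return false;
                    }
                }
            }
        }
        return true;
    }
}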

Posted in SQL Server, TDD

Hosting your own URL shortener with YOURLS, Azure Web Site and WebMatrix

For a couple years now, I’ve been paying for a private-label URL-shortening service.  I’ve amassed hundreds of links and don’t want to lose any of the shortcuts, but the cost has gone up with no change in features.  I would love to have better metrics, and certainly a lower cost.  Having a small Azure benefit gives me a level of free hosting with an Azure website.  I hunted around and found several open source URL shorteners, and decided on YOURLS, a PHP-based application with all the features I want and then some (nice charts, plugins, social sharing, an API and bookmarklets).  Plus, it looked insanely easy to install.  Here’s how I set up YOURLS on Azure (this blog post took longer and is more difficult to read than the actual process, it was that easy).  There are a couple paths you can follow here; yours may differ from mine since I already had some existing services set up.

Step 1: MySQL on Azure

YOURLS uses MySQL as its database.  There are two primary ways you can host MySQL on Azure:

  • ClearDB offers hosted MySQL databases on Azure, with a 20 MB developer instance for free.  This is plenty to play around with, and if you want more you can upgrade.  There are two ways to set up ClearDB on Azure; see “How to Create a MySQL Database in Windows Azure.”
  • Spin up a VM (Linux or Windows) and host a MySQL instance on it.  This is probably the more expensive option, but gives you the utmost control.  If you wanted to use this VM to host your YOURLS, you could, but that’s another blog post.

I already had my developer instance set up from an earlier WordPress experiment.  I created this instance as a linked resource for a Windows Azure Web Site previously, which is an easy way to get started, but it keeps you at the 20 MB limit.

Either way, follow one of the sets of instructions at http://www.windowsazure.com/en-us/develop/php/common-tasks/create-mysql-database/ to get started.

If you’re sharing a MySQL database, note that by default the YOURLS tables are created with a yourls_ prefix, so you can separate the new tables from existing ones.

Step 2: Creating the Web Site

As with the MySQL database, you can do this in one of two ways.  If you have already followed, or plan to follow, the instructions for creating a MySQL database as a linked resource, use that and skip this step.  In that process, you’ll create a site in the Azure portal, then create a ClearDB MySQL instance linked to the site.

If you have an existing MySQL database, but need to create the website, you can create the site from within WebMatrix 3, which is what I did (if you don’t have WebMatrix 3, you will need to update to the latest version).

From the start screen, New >> Empty Site starts the process.


The next step is to find a unique name for your site, and a location to host it.  At this step you’re configuring the long name for your site.  Later you can configure a custom URL for your website.



A local and remote site are created




At this point, you’re ready to start working.  You’ll be working in a local version of your website, not directly in the live site.

Step 3: Installing and Configuring YOURLS

To get started this way, download the most recent version from https://github.com/YOURLS/YOURLS/tags (cloning and publishing via Git will be discussed in another post).  Unzip the download into the folder for the website created above.

Configuration options are explained at http://yourls.org/#Config.  The main ones to configure are:

  1. YOURLS_DB_USER
  2. YOURLS_DB_PASS (remember this will be stored in plain text)
  3. YOURLS_DB_NAME
  4. YOURLS_DB_HOST
  5. YOURLS_DB_PREFIX (this is how you can separate YOURLS tables from any others in the same MySQL database)
  6. YOURLS_SITE (use the temporary URL until DNS propagates)
  7. YOURLS_COOKIEKEY (generate at http://yourls.org/cookie)
  8. $yourls_user_passwords (this is how you’ll log into the admin portal; you can encrypt these per https://github.com/YOURLS/YOURLS/wiki/Username-Passwords).

Do not put the config file in a publicly available location!  You have secrets in this file, make sure it stays private.

In order to view admin pages, you’ll also need to add a web.config to the root folder of your YOURLS site; see https://github.com/YOURLS/YOURLS/wiki/Web-Config-IIS for a sample file.

Step 4: Publishing to Azure and Installing Database Tables

If you used WebMatrix to create your Azure Web Site, all you need to do now is click Publish and your files will be transferred automatically.  If you created your database and site via the Azure Portal, you’ll be prompted to either choose an existing Azure Web Site, use WebDeploy, or manually configure FTP.  This is a one-time configuration—every subsequent time you can just hit Publish.

After the site is deployed, there is a one time installation.  Simply go to http://<yoursite>.cloudapp.net/admin, log in with the credentials you saved in the config file, and follow whatever prompts you’re given.

Step 5: Custom URL on Azure

Custom URLs for Azure Web Sites are not technically free, but you can apply Azure credits via an MSDN subscription (which I do) or pony up for a paid subscription.  Instructions for configuring a custom domain name for an Azure Web Site are at http://www.windowsazure.com/en-us/develop/net/common-tasks/custom-dns-web-site/.

You’ll need to have custom DNS for your site, usually this is done by using your registrar’s nameservers and custom DNS settings.  If your registrar doesn’t offer such a service, look into a service like https://dnsimple.com/.  You’ll need to configure both a CNAME and an A record.  Do not forward your domain name.  After you configure your DNS, it’ll take a day or so to propagate completely.  Once the DNS is propagated, you’ll need to edit the config file and set the YOURLS_SITE to the custom domain name and republish.

Step 6: Track those clicks!

You should now be fully up and running, and you can extend YOURLS with some of the plugins found at https://github.com/YOURLS/YOURLS/wiki/Plugin-List.

Happy shortening!

Posted in Azure, Open Source