GitHub/BitBucket For The People Who Just Got a Raspberry Pi and Are In Facebook Groups Asking Questions

(work in progress – kids woke up before I was done but I wanted to share what I had, updates will come)

Few things are more awesome than being curious and being able to spend a few bucks on a small board to follow your curiosities.  There is a lot of fun ahead, but there are also going to be frustrations.  I’ve been a professional software programmer for almost 20 years and I am frequently still stumped.  When you hit these times, don’t be discouraged; ask questions and experiment, because the success is worth 100 frustrations.  I promise you.

I’m writing this because I’m in a couple of Raspberry Pi groups on Facebook, and I see questions about better ways to store scripts and save documents every couple of days.  If you’ve asked that, your instincts are spot-on, and even if you’ve never written a line of code, this instinct alone puts you above about 10% of the people I’ve ever interviewed for a job.  It probably also means you’re experimenting with code a bit (experimenting with code is how you get past those frustrations), and you need a better way to keep track of your changes than naming your files “led.py.new.old.old.1.old”.  Been there, done that.  Let me explain the better way.

BTW – this blog post is meant to complement and explain already available documentation, not replace what exists.  So I’ll explain the fundamentals of what you’re doing but link to official documentation.

What are GitHub and Bitbucket?

GitHub and Bitbucket are two commercial services which provide source code management (SCM).  This means they’re meant to keep your code files safe and organized, and keep a history of your changes.  You can use this history to make something work, commit that working file, then screw it all up on your device trying something new, and still be able to retrieve a previous working version.  Because of the version history, you’ll sometimes hear these called “version control systems”, or just “VCS”.

Both GitHub (GH) and Bitbucket (BB) offer free plans with both public (for sharing) and private (because sometimes my code is embarrassingly bad when I’m tinkering) repositories.  They both offer very similar features, so either one is a good choice for you, but GitHub is a lot more popular in the open source world.  I used Bitbucket for my personal work for a long time, but GitHub’s popularity with the other projects I follow has made it my main system now.

Both GH and BB are based on an open source version control system called “git”, which was developed in 2005 by Linus Torvalds for developing Linux (see https://en.wikipedia.org/wiki/Git for the rest of the story).  GH and BB provide the cloud-based server side of keeping your code safe; you’ll still need something on your computer.  The basic git client is a command-line tool you run from a shell like BASH, and it’s available on every OS (Linux, Mac, Windows, Raspbian, etc.), but there are also plenty of GUI clients available.  Because Raspbian is derived from Debian, the same Linux family as Ubuntu, almost anything that runs on Ubuntu will run on Raspbian.  That being said, I really recommend learning the command-line usage: you’ll get a little better understanding of how git works, and since the Raspberry Pi has limited processing power, the shell doesn’t take much to run.  Most of the git examples you’ll see use the command line anyway, and the GUIs mostly just put the same commands behind a button and run the shell for you.
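If git isn’t already installed on your Pi (recent Raspbian images often include it), it’s a quick install from the terminal.  A minimal sketch; the name and email are placeholders you’d swap for your own:

sudo apt-get update
sudo apt-get install git      # install git itself
git --version                 # confirm it installed

# tell git who you are; this info is attached to every commit you make
git config --global user.name "Your Name"
git config --global user.email "you@example.com"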

VCS Basics

I’m going to really simplify what you new RPi hackers need to know.  Some of the features are meant for teams of developers who want to automatically push code to a cloud computing environment and run all kinds of tests, while you’re just trying to get an LED to light up when you push a button.  Bottom line–you won’t need a lot of what GH and BB offer for a long, long while, if ever.

First and foremost is understanding a little code management strategy.  Both GH and BB offer unlimited repositories, so make use of that.  At its most basic, think of a repository as a main folder for your work.  If you get the LED to light up, and want to now try making a buzzer sound, you wouldn’t put those in the same folder on your RPi–you should separate the two.  You’d then make a corresponding separate repository (“repo” for short) for each project in GH or BB.  To get started you might copy the code from the LED project into your buzzer project, and that’s cool, but you’d still keep the two things separate.

When you need a new repo, there are two ways to get started.  If you’ve already started your project, your workflow would look like this (and I’ll explain a little more below):

  1. Create the remote repo on GH/BB (called “remote” because it’s not on your machine)
  2. On your device, use “git init” to create the local repo
  3. Link your local and remote repos
  4. add/commit/push your files

If you haven’t started a project yet, your workflow would look like this:

  1. Create the remote repo
  2. “git clone” to your device to create the local repo
  3. do your work
  4. add/commit/push your files

Either way, once you have your repo set up and your first code committed, everything works the same.  Neither way of getting started is “wrong”; both exist because sometimes we code first and sometimes we repo first.
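Here’s a minimal sketch of the repo-first path, using a made-up repo named led-button under a made-up account (swap in the URL shown on your own repo’s page):

git clone https://github.com/yourname/led-button.git   # copies the remote repo into a new local folder
cd led-button
# ...write your code, then send it up:
git add .
git commit -m "First working version of the LED script"
git push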

Also, one note, I’m kind of lazy, so when I say “GitHub” below, know that everything also applies to Bitbucket.  The web pages are different between the two, but since both are based on git, all the other aspects are almost exactly the same.

Explanation – Code First

Summary–since you already have a folder on your device full of code, we need to turn it into a local repository.  This is called “initialization”, and uses the “git init” command.  We also need to create a repository (aka folder) on GitHub, and then link the two using the “git remote add” command.  Finally, you add the files you want to put under version control using “git add”, commit the changes using “git commit”, and then copy the changes from your local repo to your remote repo using “git push”.  You only need to “init” once per repo, and usually “remote add” once per repo, but the add/commit/push cycle you’ll do over and over and over again; it’s how you send each new version to GitHub.

Read this to create a repo in GitHub: https://help.github.com/articles/creating-a-new-repository/.  This creates that “remote folder”.

Once you have your remote repository, you’ll need to initialize (“init”) your local folder to make it a local repo, choose the files you want to add to the repo, commit those files to version control, and finally push the changes to your remote repo.  Usually, with Python on a RPi, you just want to add everything in the folder (in professional development, our tools–like Visual Studio or Eclipse–add a lot of other files, such as settings for color preferences and font size, that we don’t share).  To init your existing code, link it to GitHub, and add/commit/push, follow the guide at https://help.github.com/articles/adding-an-existing-project-to-github-using-the-command-line/.
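Put together, the whole code-first flow looks roughly like this.  The folder name, username and repo name are placeholders, and on newer setups your default branch may be called main instead of master:

cd ~/projects/led-button                    # the folder that already holds your code
git init                                    # one time: turn the folder into a local repo
git remote add origin https://github.com/yourname/led-button.git   # one time: link to the repo you created on GitHub

# the cycle you repeat every time you want to save a version:
git add .                                   # stage everything in the folder
git commit -m "LED lights when the button is pressed"   # a note to your future self
git push -u origin master                   # copy the commit to GitHub (-u is only needed the first time)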

When you commit changes, you’re prompted to add a commit message to the set of changes.  This is super-important: it’s where you make notes to yourself about what you’re doing, making it easier to go back to a specific version of your code later.  A sentence or two is pretty good, but don’t slack off.  Trust me, you’ll kick yourself later if you do.

Take the Roast Out of the Oven with Rock Framework

As the saying goes, there is no problem which can’t be solved by adding another layer of abstraction.  If you’ve ever sweated making a choice between two products—loggers like Loupe or Splunk, SimpleDB or DynamoDB, for instance—and one of the main drivers of making the “right” choice was the pain of switching, maybe you should have spent some time looking into a layer of abstraction.  Slow starts to projects are often due to paralysis-via-analysis.

A framework is just such a layer of abstraction.  A framework is a well-designed set of code that lets you implement, or switch between, different choices of the same kind of thing relatively easily.  Concerns about the “right” choice can be answered with “don’t sweat it, we’ll implement a factory so we can use any log provider, or even different log providers based on severity”, or “no sweat, we’ll encapsulate all our data calls in a data provider, so we just need to replace the one class if we switch databases”.  With the right layers in place, you’re liberated to try a few different options, easily implement the best tool for the job, and not worry too much about future changes.

Frameworks might be the level of abstraction we need, but how does that usefulness come to be?

Where Do Frameworks Come From?

We developers all start somewhere, and aside from prodigies, we all start at the level of “procedural code”.  We write big long procedures that get the job done.  Very quickly we learn how to break chunks of code into methods, and then classes.  This is the basis of OOP and confers all the benefits OOP is known for.

Library abstraction comes from working with a number of similar applications, seeing commonalities between these applications, and creating a set of classes of only the commonly used code.  This set of classes is a library, and managing libraries in several applications creates problems while solving others.  The hassle of managing libraries is why NuGet, npm, pip and other “package managers” were created.  Libraries are usually tied closely to the set of applications they were developed for.

Near the top of this progression is framework abstraction.  Frameworks employ design patterns (such as provider, factory and abstract factory) which enable components to be very easily swapped around.  Frameworks aren’t supercharged libraries; they’re really meant to be super-generic libraries, encapsulating very common activities (such as logging) into a generic form.  Good applications will use one or more generic frameworks in addition to one or more libraries specific to that application set.

I’ve illustrated this all with a handy-dandy PowerPoint Smart Art:

[Diagram: a pyramid of abstraction levels, from procedural code up through libraries to frameworks]

Note: there is no scientific basis for the above diagram, I totally made it up.  But I believe it to be as accurate as anything else I make up.  If you don’t know what “gunga galunga” means, see https://www.youtube.com/watch?v=TkLH56VlKT0.

Why Use a Framework?

Gaining the experience to develop a framework can take a lot of time, in addition to the time it takes to actually develop the framework.  Starting with an existing framework (especially an open source one) allows you to leverage common experiences (i.e., someone else already crossed the bridge you’re about to) and speeds your time to SOLID code.  Your application will implement best practices from the start, leading to faster maturity of your application.  The flexibility a framework provides sets you up for success by making change easy.

Using an existing framework means you’re participating in an ecosystem which welcomes contributions, and becoming a contributor moves you up a level or two on the pyramid above and helps ensure the longevity of the project.

Why Rock Framework?

The Rock Framework is literally “developed by dozens, used by hundreds”.  We use Rock Framework internally in hundreds of applications, and have open-sourced the parts we can share.  We have a saying at QuickenLoans—“take the roast out of the oven”.  It means don’t spend too much time thinking about a problem, it’s better to try some things out.  The Rock Framework gives us all the basic plumbing to easily try things out, plus some nice syntactic sugar we like to use in our applications.

Rock Framework is available as several NuGet packages, and the source code is hosted on GitHub, both of which you should access via http://rockframework.org/.  Here, I’ll describe the packages available now.  Other features and packages will be added in the future so be sure to refer to http://rockframework.org/ for the most up-to-date information.

Rock.Core

This is the base package for the Rock Framework, and is a dependency for the other RF modules.  It contains XSerializer (a non-contract XML serializer), a wrapper for Newtonsoft’s JSON.NET, a dependency injection container, a wrapper for hashing, a number of extension methods, and more.

Rock.Logging

This is probably the module with the most immediate use.  All logging methods are encapsulated, and there is a provider model with several interfaces for different types of log messages.  You’re encouraged to extend both your internal implementation as well as our repo with providers for popular logging platforms.

Rock.Messaging

If you’re planning to implement message queuing between applications (using MSMQ, RabbitMQ or named pipes, for example), this library contains message primitives as well as routers, parsers and locator classes to get a full-featured messaging system up and running in very little time.  If you use the interfaces, you’ll be able to easily swap providers if you’re taking some for a test drive.

Rock.StaticDependencyInjection

Last, but not least, here is a DI framework which forms the basis for swappable parts in the Rock Framework libraries. Applications have entry points (like Main()) where dependencies can be wired up when the application starts.  Libraries, on the other hand, don’t have entry points, meaning libraries need to be created and have values set in a constructor or other composition root by the application which uses the library.

Rock.StaticDependencyInjection enables libraries to automatically wire up their own dependencies, even with the ability to automatically find the proper implementation of an interface and inject that.

 

Get Involved with Rock Framework

This post has just been an overview of the Rock Framework.  There is more to come, both from me and other members of the community.  Follow https://twitter.com/rockframework for announcements.  Even better, get involved!  As an open source project, Rock Framework has many needs:

  1. Contribute providers for your favorite logging tool
  2. Create an example
  3. Implement the framework in one of your projects
  4. Write or update documentation

In today’s market, there is no better way to level up your career than to contribute to open source projects like this.  We’re looking forward to working with all of you!

Two New OSS Library Releases from QuickenLoans

Today is our annual Core Summit.  The teams led by Keith Elder are showing off what they’ve created for the rest of us to use, and Keith made a couple of really exciting announcements about some of our core libraries–QL Technology has now open-sourced two of the frameworks we use to build our amazing applications!  We’ve pulled out all the proprietary and internal-use-only bits, leaving all the core goodness so you can “engineer to amaze” too.  While Keith is still attending to his duties at the summit, I thought I’d provide a small amount of clarity on what we’ve done.

One note: QuickenLoans hasn’t been part of Intuit since 2002, we just have a long term agreement to use the name.  Please don’t ask me about TurboTax, QuickBooks, etc.  However, if you need a mortgage, I’ll be more than happy to get you $500 back at closing and refer you to the best bankers in the business, just ping me.

Rock.Framework

The name is a tip-of-our-hats back to our original name, Rock Financial (in 1999, Intuit bought Rock Financial and rebranded it as QuickenLoans, then sold QL back to the original Rock Financial group in 2002).

Internally, we use RF for serialization, queue-based messaging, service creation, centralized logging (don’t see your favorite provider–please contribute!) and dependency injection, all of which are now open sourced.  We have a bunch of internal extensions which we won’t release, and you should keep your own application-specific extensions internal too.  The Core, Logging, Messaging and DependencyInjection libraries are all available as separate NuGet packages, so you can pick and choose as you need.  DI deserves a special shout-out, since Brian Friesen has been speaking for years on DI and has created a wonderful library.  Brian’s XSerializer XML serializer (so flexible, such fast, much XML) and our JSON.NET wrapper also have their own packages.

Scales

QL has dozens of websites, all of which need to comply with our look-and-feel standards.  Based on holidays and promotional events, the look and feel may need to be updated throughout the year.  Yay, CSS updates!  For those, Dave Gillhespy developed Scales, which allows you to easily standardize your UI elements across your responsive web applications (be it one, or many).

Scales uses SASS as its CSS preprocessor, and implements a number of best practices while simplifying a bunch of pain points.  Scales is available as a Bower package right now, with NuGet and other options coming in the future.  If you’re interested, contribute themes and enhancements, and help resolve issues.

At QuickenLoans, we use a lot of OSS tools, and we’re committed to giving back to the OSS community.  You’ll see blog posts and conference sessions from many of us at QL Technology.  Meantime, follow the team members below for announcements and the latest info.  And if you’re really interested in engineering to amaze, let me know; at the time of writing we have a lot of open positions.

Thanks due to:

Hosting your own URL shortener with YOURLS, Azure Web Site and WebMatrix

For a couple years now, I’ve been paying for a private-label URL-shortening service.  I’ve amassed hundreds of links, and don’t want to lose any of the shortcuts, but the cost has gone up with no change in features.  I would love to have better metrics, and certainly a lower cost.  Having a small Azure benefit gives me a level of free hosting with an Azure website.  I hunted around and found several open source URL shorteners.  I decided on YOURLS, a PHP-based application with all the features I want and then some (nice charts, plugins, social sharing, an API and bookmarklets).  Plus, it looked insanely easy to install.  Here’s how I set up YOURLS on Azure (this blog post took longer and is more difficult to read than the actual process–it was that easy).  There are a couple of paths you can follow here; yours may differ from mine since I already had some existing services set up.

Step 1: MySQL on Azure

YOURLS uses MySQL as its database.  There are two primary ways you can host MySQL on Azure:

  • ClearDB offers hosted MySQL databases on Azure, with a 20 MB developer instance for free.  This is plenty to play around with, but you can upgrade if you want more.  There are two ways to set up ClearDB on Azure; see How to Create a MySQL Database in Windows Azure.
  • Spin up a VM (Linux or Windows) and host a MySQL instance on it.  This is probably the more expensive option, but gives you the utmost control.  If you wanted to use this VM to host your YOURLS, you could, but that’s another blog post.

I already had my developer instance set up from an earlier WordPress experiment.  I created this instance as a linked resource for a Windows Azure Web Site previously, which is an easy way to get started, but keeps you at the 20 MB limit.

Either way, follow one of the sets of instructions at http://www.windowsazure.com/en-us/develop/php/common-tasks/create-mysql-database/ to get started.

If you’re sharing a MySQL database, by default the YOURLS tables are created with a yourls_ prefix, so you can separate the new tables from existing ones.

Step 2: Creating the Web Site

As with the MySQL database, you can do this in one of two ways.  If you have already followed, or plan to follow, the instructions for creating a MySQL database as a linked resource, use that and skip this step.  In that process, you’ll create a site in the Azure portal, then create a ClearDB MySQL instance linked to the site.

If you have an existing MySQL database, but need to create the website, you can create the site from within WebMatrix 3, which is what I did (if you don’t have WebMatrix 3, you will need to update to the latest version).

From the start screen, New >> Empty Site starts the process.


The next step is to find a unique name for your site, and a location to host it.  At this step you’re configuring the long name for your site.  Later you can configure a custom URL for your website.

 

A local and remote site are created.

 

At this point, you’re ready to start working.  You’ll be working in a local copy of your website, not directly on the live site.

Step 3: Installing and Configuring YOURLS

To get started this way, download the most recent version from https://github.com/YOURLS/YOURLS/tags (cloning and publishing via Git will be discussed in another post).  Unzip the download into the folder for the website created above.

Configuration options are explained at http://yourls.org/#Config.  The main ones to configure are listed below (a sketch of the config file follows the list):

  1. YOURLS_DB_USER
  2. YOURLS_DB_PASS (remember this will be stored in plain text)
  3. YOURLS_DB_NAME
  4. YOURLS_DB_HOST
  5. YOURLS_DB_PREFIX (this is how you can separate YOURLS tables from any others in the same MySQL database)
  6. YOURLS_SITE (use the temporary URL until DNS propagates)
  7. YOURLS_COOKIEKEY (generate at http://yourls.org/cookie)
  8. $yourls_user_passwords (this is how you’ll log into the admin portal, you can encrypt these per https://github.com/YOURLS/YOURLS/wiki/Username-Passwords).
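
To give a rough idea of how those settings end up looking, here’s a sketch of the config file (YOURLS ships a config-sample.php that you copy to user/config.php; every value below is a placeholder, so substitute your own ClearDB host, credentials and site URL):

<?php
// user/config.php – all values below are placeholders
define( 'YOURLS_DB_USER', 'mydbuser' );
define( 'YOURLS_DB_PASS', 'mydbpassword' );       // stored in plain text, so keep this file private
define( 'YOURLS_DB_NAME', 'myshortenerdb' );
define( 'YOURLS_DB_HOST', 'your-cleardb-hostname' );
define( 'YOURLS_DB_PREFIX', 'yourls_' );          // keeps YOURLS tables separate in a shared database
define( 'YOURLS_SITE', 'http://myshortener.azurewebsites.net' );  // swap in your custom domain once DNS propagates
define( 'YOURLS_COOKIEKEY', 'paste-the-key-generated-at-yourls.org/cookie' );
$yourls_user_passwords = array(
    'admin' => 'a-strong-password',               // can be encrypted per the wiki page linked above
);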

Do not put the config file in a publicly available location!  You have secrets in this file, make sure it stays private.

In order to view admin pages, you’ll also need to add a web.config to the root folder of your YOURLS site; see https://github.com/YOURLS/YOURLS/wiki/Web-Config-IIS for a sample file.
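For reference, the sample on that wiki page boils down to an IIS URL Rewrite rule that hands any request that isn’t a real file or folder over to the YOURLS loader.  A sketch is below; treat the wiki version as authoritative, since the rule details and loader file name here are from memory:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- send short-URL requests to YOURLS instead of looking for a physical file -->
        <rule name="YOURLS" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="yourls-loader.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>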

Step 4: Publishing to Azure and Installing Database Tables

If you used WebMatrix to create your Azure Web Site, all you need to do now is click Publish and your files will be transferred automatically.  If you created your database and site via the Azure Portal, you’ll be prompted to either choose an existing Azure Web Site, use WebDeploy, or manually configure FTP.  This is a one-time configuration—every subsequent time you can just hit Publish.

After the site is deployed, there is a one-time installation.  Simply go to http://<yoursite>.azurewebsites.net/admin, log in with the credentials you saved in the config file, and follow whatever prompts you’re given.

Step 5: Custom URL on Azure

Custom URLs for Azure Web Sites are not technically free, but you can apply Azure credits via an MSDN subscription (which I do) or pony up for a paid subscription.  Instructions for configuring a custom domain name for an Azure Web Site are at http://www.windowsazure.com/en-us/develop/net/common-tasks/custom-dns-web-site/.

You’ll need custom DNS for your site; usually this is done using your registrar’s nameservers and custom DNS settings.  If your registrar doesn’t offer such a service, look into a service like https://dnsimple.com/.  You’ll need to configure both a CNAME and an A record.  Do not forward your domain name.  After you configure your DNS, it’ll take a day or so to propagate completely.  Once the DNS has propagated, you’ll need to edit the config file, set YOURLS_SITE to the custom domain name, and republish.

Step 6: Track those clicks!

You should be fully up and running.  You can extend YOURLS with some of the plugins found at https://github.com/YOURLS/YOURLS/wiki/Plugin-List.

Happy shortening!

Book Review: The Official Joomla! Book

It’s been a year since I met Jennifer Marriott at the Tulsa Tech-Fest, and I feel bad it’s taken me this long to finish reading The Official Joomla Book.  Last year we talked a little about the strong improvement in PHP/MySQL, and a greater acceptance of these technologies in the .NET world, and that discussion is what put her book in my hands.  One of the shining stars of the PHP world is the Joomla! CMS.  It’s full featured and very customizable, but is very easy to set up and administer.  Joomla! is perfect for many websites of all kinds—business, non-profit, civic, etc.  My friend Tom at Frames and Pixels makes part of his living implementing Joomla! sites for his clients, and his sites are but a few of the millions powered by Joomla!.  It’s been six years since the initial release of Joomla!, and the community shows no signs of slowing down.

Before we get into discussing the book, I should point out that this book is meant for the folks who install, configure and maintain Joomla! websites.  The basics of designing templates and using extensions are covered, but if you’re interested in a source-code level book to help you write extensions, this isn’t it.  In the past, I’ve used other CMSs to build client sites, and always wished there was a manual I could hand over with the site so the client would have a reference.  That this book has several chapters “for the client” is one of its strengths.  Also, if you are about to start your first Joomla! site, don’t expect to go chapter-by-chapter.  Read the book first, because the things you need to think about before you install are spread all through it.

Chapter 1 is “All About Joomla”, and I can’t describe it better.  It’s all about the history and philosophy of Joomla! (including what the name means), gives a shout out to major contributors in the Joomlasphere, and suggests important conferences.

Chapter 2 covers decisions you need to make prior to installing Joomla!.  It’s really a guide for the client and business analyst to decide on the branding and audience.  It also covers how to choose a good host.

Chapter 3 covers the installation and configuration of Joomla!.  The authors show us “the long way”, which involves downloading the code and FTP’ing it to our server.  Briefly discussed is the option of an automated install.  Check with your host to see if they have an automated installation option for Joomla! (if you don’t have a host yet, this may be a decision point for you).  Many hosts do, which simplifies the setup considerably.  Requirements for installation include PHP and MySQL.  Not discussed is installing on Windows machines.  On Windows machines, where PHP and MySQL aren’t usually found, Microsoft provides the Web Platform Installer, which will install all the components you need to run Joomla!, and Joomla! itself.  Regardless of which way you install Joomla!, the configuration parts of the chapter should be the same.

Chapter 4 digs into creating and managing content, and is one of the chapters applicable for client and solution provider alike.  With menu items, categories, pages and articles, there are a number of ways to organize your content, all of which emphasize why Chapter 2 is worth including.  Once you have your content outlined, Chapter 4 shows you how to do it.

It would be a rare client indeed who didn’t want some customization to their site.  Out of the box, Joomla! is a very basic site with a great ability to be modified and extended.  Chapters 5 and 6 cover the basics of editing templates and installing/using extensions.  These are the chapters where a client’s site will really take shape.

Chapter 7 is about the care and feeding of a Joomla! site, including search engine optimization and hints for designing the site’s navigation.  This is another chapter for client and provider alike.

Chapters 8, 9 and 10 are more in-depth examinations of using Joomla! for a business, non-profit/NGO and a school site.  These are meant for both client and provider, and are logical follow-ups for Chapter 2.  Some of the best parts of these chapters are the suggested extensions for the three site types.  This is a HUGE time saver when it comes to adding functionality to the basic site.  Other topics include template designs, accessibility options, community building, e-commerce and multilingual sites.  These three chapters alone are probably worth the price of the book.

Chapter 11 is a look ahead to the future of Joomla!.  Since it’s taken me so long to complete this review, much of that future has arrived with the release of version 1.7 last month.

Chapter 12 is comprised of a number of interviews with leaders in the Joomla! community.  Each interviewee focuses on a particular aspect of Joomla!—the project itself, hosting, branding, extending and using Joomla! in a sector such as education or business.  Each interview contains a few pieces of advice that may prove invaluable in preventing common mistakes or creating a site that sets itself apart from others.

This book finishes with three appendices.  Appendix A has solutions to common problems, including the famous lost administrator password.  Appendix B is a huge list of resources to help you build your skills, design your site, get help or content.  Appendix C covers the new Access Control List functionality in version 1.6.  User permissions have become very granular, and we can set up groups of users with the same permissions.  As any network admin can attest, groups make managing large user bases much easier.

One place where I can see this book being very useful is in Give Camps, where teams of developers have a weekend long “lock in” and create sites for charities.  Using a CMS like Joomla! is critical to the success of Give Camp sites, and a book like this would be extremely useful to the advance planning of the charity’s site.  This book would be a great asset to both the development team and the charity’s “site owners”.

All in all, if you’re in the beginning stages of your Joomla! experience, or have inherited a Joomla! site, you owe it to yourself to get this book.  Very advanced Joomla! admins and developers will probably find this information to be too basic, but they are not who this book is for.  Thank you very much to Jennifer and Addison-Wesley for giving me the opportunity to review this book!

Microsoft Donates the ASP.NET Ajax Library Project to the CodePlex Foundation

November 18, 2009. CodePlex Foundation Announces Creation of First Gallery, Acceptance of First Project
ASP.NET Open Source Gallery, ASP.NET Ajax Library Project first proofs of Foundation model

The CodePlex Foundation today announced the creation of the first Foundation project gallery, the ASP.NET Open Source Gallery, and the acceptance of the first project into that gallery, the ASP.NET Ajax Library project. The gallery and project were evaluated for acceptance using the Foundation's Project Acceptance and Operation Guidelines, first published October 21, 2009. The gallery and project are supported by Microsoft, the Foundation's founding sponsor.

"Bringing the first project gallery and project into the CodePlex Foundation shows significant progress against our 100 day goals," said Sam Ramji, Interim President, CodePlex Foundation. "The ASP.NET Ajax Library project is important for its great value to both the open source and commercial software worlds, and the Foundation is the best forum in which to shepherd its future development."

Gallery and Projects

The ASP.NET Ajax Library consolidates ASP.NET Ajax and the Ajax Control Toolkit into a single open source project. The Ajax Control Toolkit and Ajax Libraries, components of many web development strategies, make it easy for developers to use the Ajax programming model in their websites and web applications.

The ASP.NET Ajax Library project will be released under a BSD license and can be used with many technologies, including, but not limited to ASP.NET, PHP and Ruby on Rails. Future development of the project will be done within the ASP.NET Open Source Gallery under the aegis of the CodePlex Foundation.

The CodePlex Foundation Gallery Model

The CodePlex Foundation in October announced an innovative gallery sponsorship model that uses museums as a design pattern. The organizational structure divides the Foundation into galleries – collections of thematically related projects – which benefit from a common set of services provided by the Foundation.

Galleries may be sponsored by a third-party organization, e.g. a commercial software company, or run by the Foundation. Galleries will rely on Foundation staff and volunteers to provide a set of support services, including administration, security, best practices and marketing.

About the CodePlex Foundation

The CodePlex Foundation is a not-for-profit foundation created as a forum in which open source communities and the software development community can come together with the shared goal of increasing participation in open source community projects. For more information about the CodePlex Foundation contact info@codeplex.org.

Installing PHP on Windows 7/IIS 7 with Windows Platform Installer

Download the Web Platform Installer from http://www.microsoft.com/web/Downloads/platform.aspx.  You must do this while connected to the Internet, since the installer downloads the latest components from the web.  You also need to install PHP before you install any PHP-based applications.

After you download the small installer, run it, and you’ll see an Open File warning.  Click Run to continue, and the installation will begin.


At this point, you’ll receive a UAC warning.  Allow the WPI to install itself.


It was at this point I received an error on my first try (see below).  Upon retrying the installation, it worked.

Once the WPI is installed, it will update itself with the latest packages and components.  You’ll receive another UAC, and then you’ll see this window, where you can add components or apps.


To install PHP, select the Web Platform tab, then under “Frameworks and Runtimes” click Customize, then choose PHP from the list.


Click Install, and review the list of components that will be installed.


Click I Accept, and installation will begin.


After a few minutes, you’ll have PHP installed on your system.


Configuring Expression Web 3 for PHP

The PHP components are installed in c:\program files\php.  If you use Expression Web 3, you can configure PHP under Tools >> Application Options >> General, then browse to php-cgi.exe.


Testing the PHP Installation

To test your PHP installation, open the IIS Management Console and create a new application.  Inside of this application, add a file named test.php with the following code in it:

<?php
Print "Hello, World!";
?>

Load this file in your web browser, and if you see the message, you’re set to go!


Installation Error

The first time I ran through the installation, I received the following error:

The Web Platform Installer  could not start.  Please report the following error on the Web Platform Installer forum.  http://forums.iis.net/1155.aspx

I checked out the Application Log, and that was no help:

Product: Microsoft Web Platform Installer 2.0 RC — Installation failed.

Before I went to the IIS forum, I tried to recreate the issue by running the installer again, and this time it worked.  So if you get this error, wait a moment and try it again.

Skinning Zen Cart: Part 1, The Header

Zen Cart is one of the most popular shopping carts available today, partly because it has an incredible amount of features, and partly because it’s open source.  Unlike a lot of projects, Zen Cart is actually very well documented, but in the form of knowledge base articles, which make it tough for someone getting started to pick up.  Additionally, Zen Cart uses a number of files loaded dynamically to produce the final page, so a visual editor is of little, if any, use when designing a theme for Zen Cart.

Although Zencart is written in PHP and uses MySQL, it runs just fine on Windows XP.  See the bottom of this post for links to installing PHP and MySQL on Windows.

There is a tremendous amount of control over the appearance of Zencart, most of which can be accomplished through the admin interface.  Further control over the design is in CSS files.  Very little editing of the PHP code is necessary for a great deal of customization.  Here, we’ll focus on the header region of the Zencart pages.  Future posts will focus on the other regions of the page.

Basics of Zen Cart Themes/Templates

A lot of your cart’s customization can be accomplished via the control panel or overrides (see below), so it’s best to start with the options available there before you start editing pages.  Zen Cart uses a series of template files to control the layout of your cart’s pages.  The template files are stored in several different folders, depending on what they do.  For major changes in page design, these are the files you want to edit.

Reference links:

If you’re going to create your own Zen Cart theme, you shouldn’t edit the default template files.  Instead, you want to copy an existing theme, or the default pages, and edit the copies.

Reference link:

To control portions of pages, Zen Cart employs a clever override system.  It starts when you add a theme to Zen Cart, and select that theme in the admin tool.  When a page loads, the override system looks first for an override file in a folder with your template’s name.  If it finds one, it uses that file to render your page.  If there isn’t a theme-specific override file, a default file is used.

Reference links:

Finally, Zen Cart relies heavily on CSS for its appearance.  I strongly recommend using the Firefox browser and the Firebug add-on for CSS discovery.

Reference Link:

The Default Header

Below is a screenclip of the default header.

Basic Changes with Override Files

Here we’ll look at some significant changes you can make with only basic edits.

At the very top of the default template is a navigation bar.  It contains a link to the home page, a login link, and a search bar.  If the user is logged in, the Log In link is not displayed, but links for the user’s account information and to log out are.  If there are items in the shopping cart, links to the cart and shipping are displayed.

If you would rather not have this bar, you can copy includes\templates\template_default\common\tpl_header.php to the common folder of your theme, and edit the code to remove the lines marked below.  Make sure to keep the lines not marked out.

<!--bof-navigation display-->
<div id="navMainWrapper">
<div id="navMain">
<ul class="back">
<li><?php echo '<a href="' . HTTP_SERVER . DIR_WS_CATALOG . '">'; ?><?php echo HEADER_TITLE_CATALOG; ?></a></li>
<?php if ($_SESSION['customer_id']) { ?>
<li><a href="<?php echo zen_href_link(FILENAME_LOGOFF, '', 'SSL'); ?>"><?php echo HEADER_TITLE_LOGOFF; ?></a></li>
<li><a href="<?php echo zen_href_link(FILENAME_ACCOUNT, '', 'SSL'); ?>"><?php echo HEADER_TITLE_MY_ACCOUNT; ?></a></li>
<?php
} else {
if (STORE_STATUS == '0') {
?>
<li><a href="<?php echo zen_href_link(FILENAME_LOGIN, '', 'SSL'); ?>"><?php echo HEADER_TITLE_LOGIN; ?></a></li>
<?php } } ?>

<?php if ($_SESSION['cart']->count_contents() != 0) { ?>
<li><a href="<?php echo zen_href_link(FILENAME_SHOPPING_CART, '', 'NONSSL'); ?>"><?php echo HEADER_TITLE_CART_CONTENTS; ?></a></li>
<li><a href="<?php echo zen_href_link(FILENAME_CHECKOUT_SHIPPING, '', 'SSL'); ?>"><?php echo HEADER_TITLE_CHECKOUT; ?></a></li>
<?php }?>
</ul>
</div>
<div id="navMainSearch"><?php require(DIR_WS_MODULES . 'sideboxes/search_header.php'); ?></div>
<br class="clearBoth" />
</div>
<!--eof-navigation display-->

If you want to keep this navigation bar, you can use CSS to change the colors and fonts.  You can turn the display for the search on and off in the admin tool.  The CSS classes have pretty descriptive names; you can also determine exactly which classes need to be edited by reading the code or viewing the output.
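For example, here are a couple of rules you could drop into your theme’s stylesheet.  The IDs and classes come from the tpl_header.php snippet above; the colors and fonts are just placeholders to experiment with:

/* top navigation bar – IDs and classes from tpl_header.php above */
#navMainWrapper { background-color: #333333; }
#navMain ul.back li a {
    color: #ffffff;
    font-family: Georgia, serif;
    text-decoration: none;
}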

Reference Links:

Moving down the page is the logo and sales message.

Changing the logo is very easy, and is a simple edit to a PHP file, possibly a little CSS, too, if you want to change some positioning.

Reference Link:

Underneath the logo are the category tabs.  The category tabs contain links to the categories in your cart (default data shown below).  There is a sidebar box that shows the same links, too, but navigation at the top of the page is pretty standard.  If you want to turn this off, you can do so in the control panel.  You can also change this to be a dropdown menu with one of several add-ons.  The appearance is controlled by several CSS classes.

Reference Links:

Below the category tabs is the EZ Pages links bar.  EZ Pages are one of the nifty features of Zencart–they allow a cart owner to easily add content pages, such as About Us or Privacy policies, through a simple administrative interface.  You can change the look via CSS, or you can turn the bar off entirely.

You specify the pages to link to in the EZ Pages bar under Tools >> EZ-Pages, turning on pages under the Header column.  You also need to edit the details for each page (use the edit option to the right of the grid) and set a sort order greater than 0.

If you want to get rid of the EZ Pages bar, just log in to the admin tool and go Configuration >> EZ-Pages Settings, and set the “EZ-Pages Display Status – HeaderBar” to 0.

Reference Links:

Below the EZ-Pages bar and above the content is a random “Home”.  This is the breadcrumb trail, which displays your place in the website, and looks a little odd on the home page.

It makes a little more sense as you visit other pages in your site.  The Home becomes a link, and your current page is the last entry.

Like everything else, you can change the appearance via CSS, or turn it off.  In the latest version (1.3.8 at the time of this post), you have three options in the admin tool: on, off, or off on home page only.  You find these options under Configuration >> Layout Settings >> Define Breadcrumb Status.

Reference Links:

The Finished Product

Below is the finished header, after CSS edits and the horizontal drop down menu added.  So far, so good.

Running Zen Cart on Windows XP

As a matter of good development practice, you should have a development environment separate from your production site.

Reference link:

Although Zen Cart is PHP and MySQL based, you can run it on Windows and IIS.  PHP is very easy to set up on Windows.  The PHP team has done a great job building a Windows-friendly installer and documentation–so much so, they dedicate a subsite entirely to running PHP on Windows.  Likewise, the MySQL team has also built a Windows-friendly installer and provided simple documentation.

Reference links:

If you need a quick reference for PHP and MySQL, you might be interested in RefCardz, “Free Cheat Sheets for Developers”, at http://refcardz.dzone.com/.  There are PHP and MySQL cards, and new ones are added all the time.  Best advice is to browse the selection.

And, you can edit PHP natively in the great Microsoft Expression Web, with full Intellisense for PHP and CSS.  I highly recommend Expression Web if you’re doing PHP development.

Using a Dynamic DNS Service with DD-WRT

When you have a phone number assigned to you by the phone company, it doesn’t change on a daily or monthly basis.  It’s static.  It only changes when you relocate to a different service area.  That’s because your phone number is designed to be used for incoming communications–for people to call you.

On the other hand, your cable modem or DSL may not have a static number (called an IP address).  That’s because these connections were meant for outbound communications–you surfing the Internet.  Most providers will assign you a static IP if you request one, usually for an extra charge that may not make it worth doing.

If you can’t or don’t want to get a static IP, but still need a static way to find your node, you can look into a dynamic DNS service.  Most of these are free for a single address.

DD-WRT supports a number of dynamic DNS services, and you’ll find the Dynamic DNS (DDNS) setup under Setup >> DDNS.

[Screenshot: the DDNS setup page in DD-WRT]

I’ve used DynDNS.org for a while now.  It’s very simple and stable, and is free for a single address.  You sign up with DynDNS, and choose the url subdomain (“hostname”) you’d like to use–it will be something like myvpn.gotdns.org.

After you’ve signed up for a DDNS account, go back to your router’s DDNS settings and enter your account’s settings.

[Screenshot: DDNS account settings entered into DD-WRT]

Typically, you will have a Dynamic type of account, and it will not be a Wildcard account.  These are more advanced configurations, and typically don’t come with the free services.  You’ll know if you need them, and you can always upgrade the day you do.

After you configure the DDNS settings, the router will update your account’s settings within a short time.  On the client end, you now need to edit your OVPN configuration file and put the host name where the IP address was, in the “remote” line.  Now, even though your IP address may be reassigned periodically, you’ll always be able to use a static host name to locate your VPN.
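In practice that’s a one-line change in the .ovpn file.  A sketch, using the example hostname from above and OpenVPN’s default port of 1194 (use whatever port your server is actually configured for):

# before: remote 203.0.113.10 1194
remote myvpn.gotdns.org 1194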