Sunday, March 29, 2009
The solution: a sample code repository. If you look on the menu under "Subscribe" you will see a link to the Sample Code Repository. This will be an evolving project showing off new things I've learned and some best practices. It is not intended to be a production application, just a collection of sample code in the form of a project to show what I'm blogging about.
Please feel free to comment (good or bad) on what I'm doing; I expect I'll be doing a lot of refactoring on it.
Saturday, March 28, 2009
Please keep in mind that a lot of this is notes taken during the sessions, so they may be a little disjointed. I tried to clean them up as much as I could.
Keynote: Bob Lokken
Bob talked about the global economy and the local economy and how they affect the software industry in the Treasure Valley. The short version: we need to further education. Basically, if you have software developers the jobs will come, or the software developers will create the jobs themselves.
Session one: Test Driven Development using IoC and Mocks, Part 1
Basically we covered the importance of unit testing, why to use interfaces, and started to cover mocks. This was a really simple entry-level explanation of TDD and why to use it.
Session two: Test Driven Development using IoC and Mocks, Part 2
We started by covering context specification: basically creating tests with NUnit to verify your objects follow the specification requested by the customer.
So basically you have an E-Commerce site with the following requirements:
- Returns a success or fail for each request
- Order status should be submitted
- credit card should be billed the total amount of the order
- Customer should be notified when they place an order
For these tests all we really care about is whether the requirement is fulfilled, not whether the logic is correct:
- For requirement 1 we are testing that when we make a request we are returned a pass or a fail.
- For requirement 2 we are testing that when we submit an order the status gets set to submitted as well.
- For requirement 3 we mock the credit card interface so we can tell if it was called, without actually billing the card.
- For requirement 4 we mock the notify interface and verify that the code calls it, just like we did for requirement 3.
With all of these we use a naming convention that describes what it is we are testing for. An example for requirement 1 is "When_request_return_pass_or_fail"; this tests for exactly that and only that.
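A minimal sketch of what a couple of these context-specification tests could look like with NUnit and a mocking library (I'm using Moq here; the `OrderProcessor`, `ICreditCardService`, and `INotificationService` names are made up for illustration, not from the session):

```csharp
using NUnit.Framework;
using Moq;

[TestFixture]
public class When_a_customer_places_an_order
{
    [Test]
    public void Should_bill_the_credit_card_for_the_order_total()
    {
        var creditCard = new Mock<ICreditCardService>();
        var notifier = new Mock<INotificationService>();
        var processor = new OrderProcessor(creditCard.Object, notifier.Object);

        processor.Submit(new Order { Total = 42.50m });

        // All we care about is that the interface was called with the
        // right amount -- no card actually gets billed.
        creditCard.Verify(cc => cc.Bill(42.50m), Times.Once());
    }

    [Test]
    public void Should_notify_the_customer()
    {
        var creditCard = new Mock<ICreditCardService>();
        var notifier = new Mock<INotificationService>();
        var processor = new OrderProcessor(creditCard.Object, notifier.Object);

        processor.Submit(new Order { Total = 42.50m });

        notifier.Verify(n => n.Notify(It.IsAny<Order>()), Times.Once());
    }
}
```

Each test name tells you exactly what specification it verifies, and nothing else.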
Next we talked about the "no time for testing death spiral": basically you don't have time to test, the less testing you do the more bugs you have, so the less time you have for testing.
Hung out with the guys from The Network Group.
Session 3: Introduction to Inversion of Control
The first thing to do is follow the SOLID principles, basically the same stuff that Uncle Bob talks about like the "Single Responsibility Principle", "Dependency Inversion Principle", etc.
We covered the IoC concepts using the Unity IoC container. IoC frameworks let you specify the type of lifespan, so you can create a singleton or multiple instances, etc. without having to alter your code to make it a singleton.
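As a rough sketch of the lifespan idea with Unity (the `ILogger`/`FileLogger` and repository types here are hypothetical, and the exact API is from memory, so treat it as pseudocode-ish):

```csharp
using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Singleton lifetime: the container hands back the same instance every
// time, without FileLogger itself having to implement the singleton pattern.
container.RegisterType<ILogger, FileLogger>(
    new ContainerControlledLifetimeManager());

// Default (transient) lifetime: a brand new instance on every Resolve.
container.RegisterType<IOrderRepository, SqlOrderRepository>();

var logger = container.Resolve<ILogger>();
var repo = container.Resolve<IOrderRepository>();
```

The singleton-vs-new-instance decision lives in the registration, not in your classes.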
Some points of interest were that constructors should not take domain objects; they should only take interfaces that allow them to talk to other layers. When using IoC, for the most part you should really only have one constructor; if you have more then you need to set up hints in the code to specify which constructor to use.
An interesting thing we covered was that you can create an IoC container in 15 minutes or 6 months depending on what features you want it to have.
Constructor injection vs. configuration injection
Using constructor injection tightly couples your code to an IoC container; with configuration injection you can make this hot swappable.
An interesting comparison is that an IoC container is basically like a super class factory. Look into logging injection with Castle Windsor.
Session 4: Managed Extensibility Framework (MEF)
Extensibility allows your application to extend beyond its base functionality, like allowing 3rd party plug-ins. MEF allows you to add additional functionality after compilation in a standard way.
An example would be adding new features to a web service without having to recompile the web service; you simply add a plug-in DLL and you get a new request option. MEF came about because there was no common plug-in framework in .NET, or at Microsoft in general; MEF was created to solve this problem.
MEF plug-ins have imports and exports. This allows plug-ins to declare what they need and what they provide, and it is all done with the catalog. The catalog manages what plug-ins are available and what their requirements are.
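The import/export/catalog relationship roughly looks like this (a sketch based on the `System.ComponentModel.Composition` attributes; `IRequestHandler` and the "plugins" folder are made-up names, and MEF was still in preview at the time so details may differ):

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// A plug-in exports what it provides...
[Export(typeof(IRequestHandler))]
public class RefundHandler : IRequestHandler
{
    public void Handle(string request)
    {
        // new behavior added without recompiling the host
    }
}

public class Host
{
    // ...and the host imports what it needs; the catalog matches them up.
    [ImportMany(typeof(IRequestHandler))]
    public IEnumerable<IRequestHandler> Handlers { get; set; }

    public void Compose()
    {
        // The catalog scans a directory for DLLs containing exports.
        var catalog = new DirectoryCatalog("plugins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // satisfies the [ImportMany] above
    }
}
```

Drop a new DLL with an `[Export(typeof(IRequestHandler))]` class into the plugins folder and the host picks it up without a recompile.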
You can set a default to handle possible conflicts. For example, if there are multiple plug-ins that implement the same interface, and your code requests only one, it will blow up because it doesn't know which one to use, unless you specify a default; in that case, instead of blowing up, it will use the default.
You can cache the metadata so you don't have to reload the assembly every time you want to look at the available plug-ins; that way you can lazy load the assembly only when it's actually requested.
Session 5: ASP.NET MVC in the real world
Started with a basic tutorial on how to create an MVC web app (a list of robots in a robot army), covering creating a list view, routing between views, how to handle exceptions by doing redirection, etc. One of the nice things about ASP.NET MVC is it has templates that generate a fair amount of the basic code for you, like lists, edits, details, etc.
One of the shortcomings of MVC out of the box is it isn't type safe. Keep that in mind for when you change routing, because it will create magic strings; this can be fixed but you have to explicitly do it.
Something else you need to do is protect yourself from spoofed HTTP posts. One way to get around this is to create a view object that only contains the information you want to have submitted, validate that data, map the view object to the real object, and then save the real object. There is also an anti-forgery option to prevent sending bad data.
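A rough sketch of the view-object idea (the `OrderForm`, `Order`, and `_repository` names are mine, not from the session):

```csharp
using System.Web.Mvc;

// The view object only exposes the fields the form is allowed to submit.
public class OrderForm
{
    public string ShippingAddress { get; set; }
    public int Quantity { get; set; }
    // Deliberately no Price or Status property -- a spoofed POST
    // has nothing to bind those values to.
}

public class OrdersController : Controller
{
    [AcceptVerbs(HttpVerbs.Post)]
    [ValidateAntiForgeryToken] // pairs with Html.AntiForgeryToken() in the view
    public ActionResult Submit(OrderForm form)
    {
        if (!ModelState.IsValid)
            return View(form);

        var order = new Order();                       // the real domain object
        order.ShippingAddress = form.ShippingAddress;  // map only the
        order.Quantity = form.Quantity;                // validated fields
        _repository.Save(order);
        return RedirectToAction("Confirmation");
    }
}
```

Since the binder only ever sees the view object, a malicious post can't reach fields like price or order status on the real object.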
The MVC pattern lets you keep responsibilities separate, which allows testing of the code. You can do this with ASP.NET Web Forms but it's really painful, whereas MVC makes it a lot easier.
Some things the presenter suggested we look at are:
- MVC code from codeplex http://www.codeplex.com/aspnet
- S#arp Architecture http://www.codeplex.com/SharpArchitecture
- CodeCampServer.com http://code.google.com/p/codecampserver/
- FUBU MVC http://code.google.com/p/fubumvc/
- NEW MVC BOOK (Free Chapter)
- Scott Hanselman MIX - "NerdDinner"
Session 6: Event Driven Programming using Delegates
Basically this allows different parts of your application to know indirectly what is going on in other parts of your application by subscribing to events.
This allows for a decoupled design that lets you spin off a thread when an event is raised, without the object that raised the event knowing or caring about what is going on with the event handlers. A good example of this is ASP.NET Web Forms, when you create event handlers like Page_Load or Button_Click, etc. The event raiser (the Web Forms code) doesn't know or care if anything is subscribed to it, but if something is, then it executes the subscriber:
Button1.Click += new EventHandler(Button1_Click);
then runs the Button1_Click method.
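The same pattern in plain C#, outside of Web Forms (a minimal sketch; the `OrderPublisher` name is made up):

```csharp
using System;

public class OrderPublisher
{
    // Anyone interested can subscribe; the publisher never knows who.
    public event EventHandler OrderPlaced;

    public void PlaceOrder()
    {
        // ... do the actual work, then raise the event if anyone is listening.
        EventHandler handler = OrderPlaced;
        if (handler != null)
            handler(this, EventArgs.Empty);
    }
}

public static class Program
{
    public static void Main()
    {
        var publisher = new OrderPublisher();

        // Subscribing looks just like Button1.Click above.
        publisher.OrderPlaced += new EventHandler(OnOrderPlaced);
        publisher.PlaceOrder();
    }

    static void OnOrderPlaced(object sender, EventArgs e)
    {
        Console.WriteLine("Order placed -- send the confirmation email.");
    }
}
```

`OrderPlaced` could have zero subscribers or ten; `PlaceOrder` doesn't change either way, which is the decoupling the session was about.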
Closing and Giveaways:
I won the DevExpress Refactor
All in all it was a really good experience
Friday, March 27, 2009
It all depends. Some of my rules of thumb are:
for going into a config/resource file
- it's the same for every user, like the path to an image directory
- the data is only needed at the UI level and will not change very often, like a list of US States
- it may vary from server to server, like a resource URL for production vs testing
for going into a DB
- It changes per user or is user specific data
- it may need to be maintained by someone outside your development group
- It's going to be used outside of the UI layer
- it's going to be updated regularly
Whatever you do:
DON'T HARD CODE STRINGS INTO YOUR CODE. I don't care how positive you are that the bug report notification email address will not change, DO NOT HARD CODE IT!!!! Or 3 years down the road, what would have been a 2 second config change will result in 2 weeks of work getting an outdated legacy piece of crap application to compile!! Another fun one was an XML schema hard coded into an application with
stringname += "Schema data";
stringname += "Schema data";
stringname += "Schema data";
Oddly enough, same guy. Go figure.
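For the config-file side, the fix is a one-liner. A minimal sketch with `System.Configuration` (the key names here are made up for the example):

```csharp
using System.Configuration;

// app.config / web.config:
// <appSettings>
//   <add key="ImagePath" value="~/content/images" />
//   <add key="BugReportEmail" value="bugs@example.com" />
// </appSettings>

string imagePath = ConfigurationManager.AppSettings["ImagePath"];
string bugEmail  = ConfigurationManager.AppSettings["BugReportEmail"];
```

When the email address changes, it's an edit to the config file and a restart, not a recompile of a legacy app.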
Wednesday, March 25, 2009
Another blog entry based off a question on Stack Overflow: is pair programming worth it? Are the gains of pair programming worth losing the productivity of a developer, versus just hiring more QA people (who are generally cheaper)?
Thinking about this reminded me of back when Extreme Programming came out and you had developers saying it was the end of QA testing. The funny thing is, the more agile your development process becomes, the more integrated the developers are and the more interaction they have with the QA people. Basically, the sooner a bug/defect is found the cheaper it is to fix, so using the money to hire more QA people instead of another developer is going to cost you more time/money because of how many trips the code makes from DEV to QA.
Having said this, pair programming doesn't work with everyone. Some developers don't pair well; they distract each other, spend all their time fighting, etc.
If you have developers that can pair program, it can be more than beneficial in the long run when you add in more maintainable code, fewer defects (so less time in QA), and most importantly, if one of the developers gets hit by a bus, you don't have to wait for someone to come up to speed on a project before any more work can be done on it.
If your developers can't pair program, don't force them into it; all you're going to do is waste time and money.
Estimate how long it will take and add half again as much time to cover the following problems:
1. The requirements will change
2. You will get pulled onto another project for a quick fix
3. The New guy at the next desk will need help with something
4. The time needed to refactor parts of the project because you found a better way to do things
And I don't think I'm far off; if anything I think this is a rather conservative approach. Most guidelines I found online were "estimate your time, then double it."
Tuesday, March 3, 2009
The data access (SQL Server, MySQL, flat XML files, etc.) should all be abstracted away; nothing else in your application should care or know how you are getting your data, only that you do. If anything else knows how you are getting your data, you have a layer violation. If the DAL does anything other than get data, you have a layer violation. Next you implement a data access interface, something like IDAL, that your business layer uses; this is very important for making your code testable by forcing you to separate your layers.
The data entities can be placed in the DAL namespace or given their own; giving them their own forces separation. Data entities are dumb objects, should contain very little to no logic, and are only aware of themselves and the data they have. THEY DO NOT CONTAIN BUSINESS LOGIC, DATA ACCESS LOGIC, OR UI LOGIC. If they do, you have a layer violation. The only function of a data entity is to hold data and be passed from one layer to the next.
The Biz layer works against a data access interface like the IDAL we talked about before. You can instantiate this with a factory, an IoC container, or, all else failing, a concrete type, but add a setter property so it can be swapped for testing. The Biz layer only handles business logic; it doesn't know or care where the data came from or where it's going, it only cares about manipulating the data to comply with business rules. This would include data validation and filtering (part of which is telling the DAL what data it needs, and letting the DAL figure out how to get it). Basically the Biz handles all logic that isn't UI related or data retrieval related. Just like the DAL, the Biz should implement an interface, for the same reason.
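A minimal sketch of that shape (the `OrderService` and `Customer` names and the method signatures are mine, just to show where each responsibility lives):

```csharp
using System;

// The DAL contract: how the data is fetched is nobody else's business.
public interface IDAL
{
    Customer GetCustomer(int id);
    void SaveCustomer(Customer customer);
}

// The Biz contract, so the UI (or a web service) depends on it the same way.
public interface IBiz
{
    Customer GetValidatedCustomer(int id);
}

public class OrderService : IBiz
{
    private IDAL _dal;

    // Normally injected by a factory or IoC container.
    public OrderService(IDAL dal)
    {
        _dal = dal;
    }

    // Setter so a test can swap in a fake DAL even without a container.
    public IDAL DataAccess
    {
        get { return _dal; }
        set { _dal = value; }
    }

    public Customer GetValidatedCustomer(int id)
    {
        Customer customer = _dal.GetCustomer(id);

        // Business rules live here -- not in the DAL, not in the UI.
        if (customer == null)
            throw new InvalidOperationException("No such customer: " + id);
        return customer;
    }
}
```

The UI only ever sees `IBiz`, the Biz only ever sees `IDAL`, and `Customer` is a dumb data entity passed between them.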
The UI layer accesses the Biz layer the same way the Biz layer accesses the DAL, for the same reason. All the UI layer cares about is displaying data and getting data from the user. The UI layer should not know anything about the business rules, with the possible exception of the data validation required to populate the data entities.
The advantage of this architecture is it forces separation of concerns, making it easier to test, more flexible, and easier to maintain. Today you are building a web site, but tomorrow you want to allow others to integrate via a web service; all you have to do is create a web service that implements the IBIZ interface and you're done. When you have to fix a bug in the Biz layer, it's already fixed in both your website and your web service.
Taking this to the next step, let's say you are doing a lot of heavy number crunching and you need more powerful servers to handle it. All you have to do is write IDAL and IBIZ implementations that are really wrappers around WCF, handling the communication between your servers; now your application is distributed across multiple servers and you didn't have to change your code to do it.