Monday, March 23, 2015

Enterprise Design Patterns and Practices (Boise Code Camp 2015)

At this year's Boise Code Camp I presented on Enterprise Design Patterns and Practices. Afterwards I was asked to post my slide deck and notes; if there is interest I'm thinking of doing a blog series on this.

Fundamental principles

S.O.L.I.D. Principles

·         A class/method/tier should have only a single responsibility: it should do one thing and do it well, even if that one thing is telling other things what to do.
·         Software entities should be open for extension, but closed for modification, and always favor extension over inheritance.
·         Design by contract; you should never need to care what the object really is.
·         Using multiple client-specific interfaces is better than one general-purpose interface.
·         Depend on abstractions, not concrete implementations. Use a factory pattern, a DI container, lazy load properties, etc.
These are the fundamental principles of OO development and apply equally to application architecture.
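
As a minimal sketch of that last bullet (the interface and class names here are illustrative, not from the slide deck), a class can depend on an abstraction and let a factory, DI container, or Lazy<T> supply the concrete implementation:

using System;

public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceCalculator
{
    private readonly Lazy<IExchangeRateProvider> _rates;

    // The concrete provider is supplied by a factory or DI container;
    // PriceCalculator never references an implementation directly.
    public PriceCalculator(Lazy<IExchangeRateProvider> rates)
    {
        _rates = rates;
    }

    public decimal ConvertToUsd(decimal amount, string currency)
    {
        return amount * _rates.Value.GetRate(currency, "USD");
    }
}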

Be strongly typed and loosely coupled

By creating strongly typed interactions between objects you are enforcing consistent behavior that helps prevent difficult-to-find runtime errors.  Decoupling the objects in your application allows you to change the underlying object without changing the caller.
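
A hedged sketch of what that looks like in practice (the notifier names are made up): the caller is compiled against an interface, so either implementation can be swapped in without the caller changing.

public interface INotifier
{
    void Notify(string recipient, string message);
}

public class EmailNotifier : INotifier
{
    public void Notify(string recipient, string message)
    {
        // send an email...
    }
}

public class SmsNotifier : INotifier
{
    public void Notify(string recipient, string message)
    {
        // send a text message...
    }
}

public class OrderAlerts
{
    private readonly INotifier _notifier;

    // EmailNotifier today, SmsNotifier tomorrow; OrderAlerts never changes.
    public OrderAlerts(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void OrderShipped(string customer)
    {
        _notifier.Notify(customer, "Your order has shipped.");
    }
}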

The 2 biggest problems you will face

Golden Hammer

Having go-to tools is great, but just because something can do the job doesn't mean it should be used.

Silver Bullets

Everything from here on down may help, but it will not solve all of your problems; always looking for the next technology to solve all of your problems will only end in pain.  These are tools in a toolbox, to be used appropriately.  If used incorrectly they will bring pain, lots of pain.

Automation, Automation, Automation

If it can be reasonably automated, it should be.  Computers are really good at being consistent; humans are not.

Build server & source control

Nothing will fundamentally improve your code more (Jenkins, TFS, Team City, Bamboo, Apache Continuum, etc.).  Nothing goes out that isn't in source control, EVER!

Application Deployment

Use deployment tools for pushing changes (MSDeploy, Octopus, etc.); copy-paste is not a recommended deployment methodology.

Database Deployment/Migration

Database changes are code changes too.  Use a migration framework (EF Migrations, Fluent Migrations, etc.) or a database change management tool (Visual Studio Database Project, Redgate tools, DB Deploy, Ready Roll, etc.) 
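
As a hedged example of the migration approach, an EF Migrations (EF6 code-first) migration looks roughly like the sketch below; the table and column names are made up.  The same class lives in source control and can be applied by the deployment pipeline with Update-Database or migrate.exe.

using System.Data.Entity.Migrations;

public partial class AddCustomerEmail : DbMigration
{
    public override void Up()
    {
        // Applied when moving the database forward.
        AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
    }

    public override void Down()
    {
        // Applied when rolling the database back.
        DropColumn("dbo.Customers", "Email");
    }
}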

Testing

Automating your unit, behavior, integration, load, and UX tests allows you to find problems before your clients/customers do.

Instrumentation

Regardless of what you think might be going on, without instrumentation you're blind.

Logging

This is the first and easiest way to find out what is going on under the hood. Don't write your own unless you really need to (log4net, nLog, Enterprise Library, SmartInspect, etc.).
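
A minimal log4net sketch (the class and messages are illustrative); NLog and the other libraries listed above follow a very similar pattern.

using System;
using log4net;

public class OrderProcessor
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(OrderProcessor));

    public void Process(int orderId)
    {
        Log.InfoFormat("Processing order {0}", orderId);
        try
        {
            // ... do the work ...
        }
        catch (Exception ex)
        {
            Log.Error("Failed to process order " + orderId, ex);
            throw;
        }
    }
}

// Typically configured once at application start-up (Main, Global.asax, etc.):
// log4net.Config.XmlConfigurator.Configure();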

Memory Profiler

While a logger can tell you what is going on in the code, a memory profiler (Redgate ANTS, JetBrains Memory Profiler, etc.) can show you how your application is actually using memory and help you track down leaks and excessive allocations.

Site monitoring and Analytics

Just because your application is live doesn't mean you can stop paying attention to it.  Knowing how many people are interacting with it, how they are interacting with it, and when tells you what has happened and what is currently happening, so you can make an educated guess about what is going to happen.

Leveraging 3rd party tools

Focus on your core features and leverage 3rd party tools to deal with the grunt work.

Abstract your 3rd party includes

Using 3rd party tools is great and will save you from re-inventing the wheel, but to keep from being locked in, always abstract away the implementation when you can.  In the future the library may no longer be supported, may develop licensing problems, etc.; being able to quickly insert a replacement will save you in the long run.
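
A minimal sketch of that abstraction, wrapping log4net behind an interface you own (the names are illustrative).  Application code only ever sees ILogWriter, so swapping log4net for NLog or anything else means writing one new adapter instead of touching every caller.

using System;

public interface ILogWriter
{
    void Info(string message);
    void Error(string message, Exception ex);
}

public class Log4NetWriter : ILogWriter
{
    private readonly log4net.ILog _log;

    public Log4NetWriter(Type source)
    {
        _log = log4net.LogManager.GetLogger(source);
    }

    public void Info(string message)
    {
        _log.Info(message);
    }

    public void Error(string message, Exception ex)
    {
        _log.Error(message, ex);
    }
}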

Building for Extensibility

Separate your application into logical segments or tiers.   

n-Tier Design

The 3 major application tiers are (but not limited to)
·         UI – For displaying or transmitting information (webpage, API endpoint, etc.)
·         Domain or Business – This is where the work and decisions are made
·         Data Access – Where the data comes from (DB call, Web Service call, file read, etc.)
A very important rule is that tiers can only see the tier next to them.  The UI should have no concept of where the data it's displaying is coming from; all it cares about is that it made a request to the Domain and got data back.  This applies to the Data Access tier as well: all it knows or cares about is returning the data requested by the Domain.  This logical separation allows you to reuse and refactor your application cleanly without fear of breaking changes in an outside tier.
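
A minimal sketch of those boundaries (all names are illustrative): the UI only sees the domain service, and the domain only sees the data access abstraction.

// Data Access tier: where the data comes from is hidden behind the interface.
public interface IProductRepository
{
    Product GetById(int id);
}

// Domain tier: the work and decisions happen here.
public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public Product GetProduct(int id)
    {
        return _repository.GetById(id);
    }
}

// UI tier: asks the domain for data and displays it; it has no idea whether
// the data came from a database, a web service, or a file.
public class ProductController
{
    private readonly ProductService _service;

    public ProductController(ProductService service)
    {
        _service = service;
    }

    public string Show(int id)
    {
        return _service.GetProduct(id).Name;
    }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}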

DTO, POCO, and Model objects

With each tier isolated from the others we need a way to pass information back and forth; enter the DTO (Data Transfer Object).  Basically these are classes that hold information; they have little to no logic and know nothing about where they are, where they came from, or where they are going.
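
A minimal DTO sketch (the fields are illustrative): just data, no behavior, and no knowledge of which tier created it or where it is headed.

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}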

Separate Update, Select and Work Logic

Domain classes can get big fast; keeping them logically separated into more singular functions is in line with the single responsibility principle and keeps them more manageable.
This can be applied to data access classes as well, even if it's as simple as having 2 interfaces on a data repository class: one for updates and one for selects.  This might seem silly until you start working with publisher and subscriber databases.
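
A minimal sketch of that split, reusing the CustomerDto from above (the names and comments are illustrative).  When subscriber databases show up later, the select interface can be pointed at a read replica without touching the update side.

public interface ICustomerReader
{
    CustomerDto GetCustomer(int id);
}

public interface ICustomerWriter
{
    void UpdateCustomer(CustomerDto customer);
}

public class CustomerRepository : ICustomerReader, ICustomerWriter
{
    public CustomerDto GetCustomer(int id)
    {
        // Select from the subscriber/read database...
        return new CustomerDto { Id = id };
    }

    public void UpdateCustomer(CustomerDto customer)
    {
        // Write to the publisher/primary database...
    }
}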

Do not over abstract

While using abstractions to separate and decouple your logic is good, if overused/abused these good intentions quickly pave a road to abstraction hell, where adding a single property to your result set requires modifying 5 DTO objects and 4 file mappers (yes, I know of an application that does this).

 

Building for Scalability

Dealing with Load

Caching   

·         Micro-caching – for lots of requests for the same data that still needs to be relatively real time (5-10 sec); see the sketch after this list
·         Memory caching – for things that don’t change that often (1+ hours): user information, product data, GEO data (city, state, LAT, LONG, etc.)
·         DB caching – for sharing your cache between servers and applications
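
A minimal micro-caching sketch using System.Runtime.Caching.MemoryCache; the key, the 5-second window, and the loader delegate are illustrative assumptions.

using System;
using System.Runtime.Caching;

public class QuoteCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public decimal GetQuote(string symbol, Func<string, decimal> loadQuote)
    {
        string key = "quote:" + symbol;
        object cached = Cache.Get(key);
        if (cached != null)
        {
            return (decimal)cached;
        }

        decimal quote = loadQuote(symbol);
        // Micro-cache: hold the value for a few seconds so a burst of requests
        // for the same symbol results in a single backend call.
        Cache.Set(key, quote, DateTimeOffset.Now.AddSeconds(5));
        return quote;
    }
}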

Spreading the Load

With your logic separated and decoupled it's easy to farm your pain points out to other hardware; think scaling out instead of scaling up.
·         Move expensive functionality (Encryption, PDF/Image creation, etc.) to a dedicated system/systems to prevent load on your main server/servers.
·         Use publisher & subscriber databases to reduce the load on your primary database by moving selects from the main database to the subscribers.  This is also a good practice for reporting servers; running a massive metrics report for the sales team should not affect production performance.
·         Aggregate and pre-process data; just because you saved the data in a specific DB schema doesn't mean you have to keep it that way.  Having a job that copies data to a more flattened table for faster searching can greatly improve performance, even more so when aggregating from a SQL server (MySQL, MSSQL, Oracle) to NoSQL (CouchDB, MongoDB, etc.).

Queuing up

If you don't require an immediate synchronous response, message queues are a great way to keep your system fast, prevent data loss (messages persist until the client acknowledges receipt), and distribute workload.  Using a Service Bus takes queues to the next level by creating an architecture for distributing workload.
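
As a hedged sketch of queuing work, here is a publisher using the RabbitMQ.Client package, assuming a broker running on localhost; the queue name and message are made up.  A separate consumer process would pull from the same queue and acknowledge each message once the work is done.

using System.Text;
using RabbitMQ.Client;

public class WorkPublisher
{
    public void Publish(string message)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // A durable queue survives a broker restart; messages wait here
            // until a consumer acknowledges them.
            channel.QueueDeclare(queue: "work", durable: true, exclusive: false,
                                 autoDelete: false, arguments: null);

            var body = Encoding.UTF8.GetBytes(message);
            channel.BasicPublish(exchange: "", routingKey: "work",
                                 basicProperties: null, body: body);
        }
    }
}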

CDN (Content Delivery Network)

Keep your static files fast by having a stripped-down server that only serves up static content like images, CSS, JavaScript, etc.  These can also be geo-located for even better performance.

Not everything needs to go to the DB or come from it

Databases are great for storing and pulling data, but they aren't the only solution.  Lots of data submitted by the user doesn't need to be persisted in the database.  There are better ways to localize your site than storing it in the database.  Customers uploading images are another example: keeping the metadata in a database is a good idea, but the image itself should probably go to disk (a CDN would be great for this).
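
A minimal sketch of that split for uploads (the path handling and naming are illustrative): the bytes go to disk or a CDN origin, and only the file name plus details like size and upload date belong in the database.

using System;
using System.IO;

public class ImageStore
{
    private readonly string _rootPath;

    public ImageStore(string rootPath)
    {
        _rootPath = rootPath;
    }

    // Writes the image bytes to disk and returns the generated file name;
    // only that name (and other metadata) gets persisted to the database.
    public string Save(byte[] imageBytes)
    {
        string fileName = Guid.NewGuid() + ".jpg";
        File.WriteAllBytes(Path.Combine(_rootPath, fileName), imageBytes);
        return fileName;
    }
}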

Redundancy

Hardware fails, systems crash, and operating systems need updating; redundant systems keep you online.

Tuesday, April 8, 2014

Boise Code Camp 2014

This year I did a presentation at Boise Code Camp on "Creating Testable Mobile App with MVVMCross, Xamarin, Windows Phone, and Windows Store Apps."  At the request of some of the attendees, here is my slide deck, and here is the source code.

Thursday, November 28, 2013

Testing un-mockable base class methods

Working on a project using the Xamarin Framework, I came across a problem where I needed to see if a method was being called.  The problem was the method was on the base class and was un-mockable.

To get around this I used a lazy load property and a delegate to create a wrapper.
// Lazily wraps the base class's StartActivity method; tests can replace it.
private StartActivityDelegate _start;
public StartActivityDelegate Start
{
    get { return _start ?? (_start = StartActivity); }
    set { _start = value; }
}

public delegate void StartActivityDelegate(Type type);

protected override void OnCreate(Bundle bundle)
{
    base.OnCreate(bundle);
    Redirect();
}

// Decides which activity to start based on the saved preferences.
public void Redirect()
{
    if (Preferences.HasRequired())
    {
        Start(typeof(ClientActivity));
    }
    else
    {
        Start(typeof(SettingsActivity));
    }
}
To test it, simply set the property in the test class to a test method:
[TestFixture]
public class When_Starting_SplashScreen_Without_Required_Preferences_Test
{
    public SplashActivity Target;
    public Type Actual;
    public PreferencesMock PreferencesMock;

    public void TestStartActivity(Type type)
    {
        Actual = type;
    }

    [SetUp]
    public void SetUp()
    {
        PreferencesMock = new PreferencesMock { HasRequiredStub = false };

        Target = new SplashActivity { Start = TestStartActivity, Preferences = PreferencesMock };
        Target.Redirect();
    }

    [Test]
    public void Has_Expected_Activity_Test()
    {
        Assert.AreEqual(typeof(SettingsActivity), Actual);
    }
}
Next, to test that we are calling the base class method we expect, we add a new test fixture, use a little reflection to get the method info for each, and then compare them:
[TestFixture]
public class SplashActivity_Tests
{
    public SplashActivity Target;

    [SetUp]
    public void SetUp()
    {
        Target = new SplashActivity();
    }

    [Test]
    public void LazyLoads_Write_Test()
    {
        MethodInfo actual = Target.Start.GetMethodInfo();
        MethodInfo expected = typeof(Activity).GetMethod("StartActivity");
        Assert.AreSame(expected, actual);
    }
}

Wednesday, October 16, 2013

Steps for shortening the “Feedback Loop”

Information is the basic building block of software development and the most valuable form of information is useful feedback.  Feedback can come from the customer, the application’s users, and the development team. 


Regardless of the development methodology used, the feedback loop is going to follow this basic flow:


Looking at the Waterfall Methodology it's very easy to see why it has become dated when you look at its basic flow:

The first opportunity for the developers to get feedback is at the end of the Verification stage.  Depending on the size of the project, the Feedback Loop for Waterfall can take from a couple of months to over a year.  This large gap between gathering requirements and implementing them makes it very difficult and costly to fix misunderstood or faulty requirements.

What if the oil light in your car only gave you feedback every 30 minutes?  If you lose your drain plug, 30 minutes is a long time to run without oil.  Software development is the same way: unless you're getting regular feedback, you won't know something has gone wrong until it's too late and you face costly refactors.

The hardest part of shortening the Feedback Loop is getting started.  It takes not only a change in behavior but also a change in thinking.  To quote a great American writer:

“The secret of getting ahead is getting started. The secret of getting started is breaking your complex overwhelming tasks into small manageable tasks, and starting on the first one.”






Step 1: Short Development Iterations

A project may look large and complex, but keep in mind the Egyptian pyramids were built one block at a time.  By breaking tasks down into more manageable items you are able to focus on very specific requirements and get feedback on each specific requirement.  I personally like 2-week iterations; it's enough time to get things done without getting lost in the details.

Step 2: Work in Small Teams 

The old adage "too many cooks spoil the broth" is just as true in software development.  Large teams have the same problems that large tasks do: there is too much going on.  Large teams require too much coordination, reduce individual responsibility, and ensure that only the most vocal get heard.
It has been my experience that three to five developers works best.  With fewer developers the team can lack skill set diversity, and with more you run into coordination and crowd mentality issues.  Never have a team of one. 

Step 3: Test Driven Development

Test Driven Development (TDD) is creating software tests to verify requirements, and then writing the software to perform that behavior.  This provides the developer with a personal feedback loop to know that what is being created has the expected result.  This also provides the added benefit of built in regression tests.  
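
A minimal test-first sketch in the same NUnit style used in the testing post above; the calculator and its rounding rule are illustrative.  The test is written first and fails, then the smallest implementation that makes it pass is added.

using NUnit.Framework;

[TestFixture]
public class When_Calculating_Sales_Tax_Test
{
    [Test]
    public void Rounds_To_Two_Decimal_Places_Test()
    {
        var target = new TaxCalculator();
        Assert.AreEqual(0.82m, target.SalesTax(9.99m, 0.0825m));
    }
}

public class TaxCalculator
{
    public decimal SalesTax(decimal amount, decimal rate)
    {
        // Written after the test: the simplest thing that satisfies it.
        return decimal.Round(amount * rate, 2);
    }
}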

Step 4: Continuous Integration

Continuous Integration (CI) is the process of merging code as soon as possible and running tests to verify everything is working as expected.  CI can be done manually but is more commonly done with a build server such as Team City, Jenkins, TFS, or others.  This quickly gives software developers feedback regarding overlapping work, quality control, and overall code health.  This step will quantifiably improve the quality of your code more than any other single task you do.

Step 5: Automate as much as possible

Automating tasks gives you quicker feedback, consistent behavior, and inherent documentation (by way of the automation script), and it reduces overall costs by freeing up resources to work on more important tasks.

Wednesday, October 9, 2013

Death by Golden Hammer

The old adage “When all you have is a hammer, every problem looks like a nail.” describes a very common problem in the software development world.  Developers may become proficient with a particular technology or tool set and apply it to every problem, even to the point of excluding a more appropriate solution.  This narrow technological view has been nicknamed “The Golden Hammer.”  There are benefits to having a very deep understanding of a specific technology stack; however, it is more important to understand when to use an alternative.

A common Golden Hammer is using programming languages in ways they were never intended to be used.  For example, in the late 90’s the Perl programming language was very popular, and to some extent still is, with good reason: it’s an incredibly powerful and useful scripting language.  It was originally created for parsing text, which it’s really good at.  It was so good at what it did that people became very adept at it and started using it for things it was never really intended to do, such as writing full applications.  Being a scripting language created for simple text parsing, its syntax was very loose and lacked the structure needed to create well formatted and maintainable code.  In spite of its shortcomings it was used far more than it should have been.

Another common Golden Hammer is using tools like Microsoft SharePoint or a content management system (CMS) such as WordPress as a development platform.  These tools were built with the intent of managing content, and for the most part they do that very well.  The downside is when you start extending these tools beyond their intended use.  Using Microsoft SharePoint to build a warehouse order management system is like building a drag racer out of a school bus.  I’ve seen both done, and it’s a prime example of “just because you can doesn’t mean you should.”  You can force tools to do what you want them to do, but it’s going to take a lot of customization, and your end product, like the drag racing school bus, will never perform as well as if you had built it with the appropriate solution.

When looking at what tools and technology to use it’s just as important to look at what not to use.  One of the most valuable lessons I learned in high school was from my shop teacher:

“Learn to use the right tool for the job, not what’s most convenient. Screwdrivers are not chisels!”
Dallas E. Tolman


In the long run you will always be better off using the right tool vs. the most convenient one.