Monday, March 26, 2018

Boise Code Camp Presentation on Intro to Continuous Integration

This year at Boise Code Camp I presented on Intro to Continuous Integration; here are the slides and the source code: https://github.com/bobtjanitor/SampleTDDApp

Slides from my Intro to TDD talk at Boise Code Camp

This year at Boise Code Camp I presented on Intro to TDD; here are the slides and the source code: https://github.com/bobtjanitor/SampleTDDApp

Tuesday, February 23, 2016

Unit Testing ASP.NET MVC ViewModel Validation

ViewModel attribute validation is a key feature of ASP.NET MVC, but it is often not a tested one.  For the most part, developers verify controller actions by passing in a model, mocking the ModelState.IsValid property, and calling it good, never checking whether the model actually validates correctly.  One of the biggest reasons is that developers just don't know how; it's not difficult, but it's not very intuitive, so here is a quick tutorial.

First, we take a view model with some basic validation attributes:

using System.ComponentModel.DataAnnotations;

public class RegisterViewModel
{
    [Required]
    [EmailAddress]
    [Display(Name = "Email")]
    public string Email { get; set; }

    [Required]
    [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
    [DataType(DataType.Password)]
    [Display(Name = "Password")]
    public string Password { get; set; }

    [DataType(DataType.Password)]
    [Display(Name = "Confirm password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }
}

Next, we create a simple, reusable testing context class:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using NUnit.Framework;

public abstract class ViewModelValidation_Context<T> where T : new()
{
    public T Target;
    public List<ValidationResult> ActualMessages;
    public bool Actual;

    [SetUp]
    public void SetUp()
    {
        Context();
        Because();
    }

    public virtual void Context()
    {
        Target = new T();
        ActualMessages = new List<ValidationResult>();
    }

    public virtual void Because()
    {
        var context = new ValidationContext(Target, null, null);
        Actual = Validator.TryValidateObject(Target, context, ActualMessages, true);
    }
}

Notice this test context class is a little different from what I have used in the past: we are predefining the Because() method.  Regardless of the class we are testing, we always run what is really the meat of this class: we create a ValidationContext and pass it to System.ComponentModel.DataAnnotations.Validator.TryValidateObject, which returns whether the view model passed validation and fills in the list of validation messages.

Next let's put this into action with a test to validate our model:

using System.Linq;
using NUnit.Framework;

[TestFixture]
public class When_Testing_RegisterViewModel_Validation_Test : ViewModelValidation_Context<RegisterViewModel>
{
    public override void Context()
    {
        base.Context();
        Target.Email = "TestEmail@email.com";
        Target.Password = "TestPass";
        Target.ConfirmPassword = "TestPass";
    }

    [Test]
    public void Passed_Validation_Test()
    {
        Assert.IsTrue(Actual);
    }

    [Test]
    public void Has_No_ValidationMessages_Test()
    {
        Assert.IsFalse(ActualMessages.Any());
    }
}

The model validation requires that we have a valid email, a password, a confirm password, and that the password and confirm password match.  In this case everything is good, so we get true for valid and no validation messages.

Next let's test an invalid model:

using System.Linq;
using NUnit.Framework;

[TestFixture]
public class When_Testing_RegisterViewModel_Validation_With_NoPassword_Test : ViewModelValidation_Context<RegisterViewModel>
{
    public override void Context()
    {
        base.Context();
        Target.Email = "TestEmail@email.com";
        Target.Password = string.Empty;
        Target.ConfirmPassword = "TestPass";
    }

    [Test]
    public void Fail_Validation_Test()
    {
        Assert.IsFalse(Actual);
    }

    [Test]
    public void Has_ValidationMessages_Test()
    {
        Assert.IsTrue(ActualMessages.Any());
    }

    [TestCase("The Password field is required.")]
    [TestCase("The password and confirmation password do not match.")]
    public void Has_Expected_Validation_ErrorMessages_Test(string message)
    {
        Assert.IsTrue(ActualMessages.Any(x => x.ErrorMessage == message));
    }

    [TestCase("Password")]
    public void Has_Expected_ValidationMessages_Test(string memberName)
    {
        Assert.IsTrue(ActualMessages.Any(x => x.MemberNames.Contains(memberName)));
    }
}

In this test we made the password an empty string; as a result, it failed validation and returned a validation message for the password being required and one for the password and confirm password not matching.

Check out my sample application on GitHub to see this in action.

Thursday, February 18, 2016

I agree with Apple, and San Bernardino County is at fault

There is currently a court case in San Bernardino where the FBI is attempting to compel Apple to build an update to iOS that would allow them to access encrypted data on the work phone of Syed Rizwan Farook (Farook and his wife Tashfeen Malik were responsible, authorities say, for killing 14 people and injuring another 22 who were attending a training event and holiday party last December 2).

The FBI requested, and was granted by Magistrate Sheri Pym in the US District Court for the Central District of California, a court order to force Apple to provide the FBI with software to defeat a self-destruct mechanism on the iPhone, under the premise that the phone belongs to the county and the county has the right to access any and all information on its device.  To this, Tim Cook (CEO of Apple) responded with an open letter to their customers saying they will not comply with the court order.  Apple is not disobeying the court order because it supports murderous terrorists, but because it values the rights of its users and is not going to deliberately circumvent their security.

In a statement to Fox News, one of the agents working the San Bernardino case stated, “Nobody can build a phone that we cannot get in under unique circumstances. Why should Apple be allowed to build a phone that does that?” and “The right should not supersede our ability to keep people safe. It’s why we are not finding others, encryption, and, specifically in this case, we cannot connect the dots.” (source)  Here is where the DOJ, the FBI, and the judge are wrong and Apple is right: our rights, including the right not to self-incriminate, do supersede, and are guaranteed by the Constitution.  To make this a little clearer, the court order is basically the same as a court order forcing MasterLock to update all of their padlocks to use a master key so the government can open them whenever it needs to.

Regardless of who owns what is being locked, forcing a company to add security vulnerabilities to its products is wrong, reckless, and fundamentally against everything our country was founded on.  I understand the DOJ's position, but they are wrong.

Why is San Bernardino County at fault?  If you are going to issue devices as a company or government entity (this was a work phone), you should be managing your hardware to protect your assets; this is the basic bus factor.  If an employee has a company device, the company owns it, and everything on it.  Outside of this case, what happens if an employee has valuable company information on a device and that person is in a car wreck?  Digital assets, like all assets, need to be managed correctly; the device manufacturer should not be forced to weaken its product because of your bad planning.

Monday, March 23, 2015

Enterprise Design Patterns and Practices (Boise Code Camp 2015)

At this year's Boise Code Camp I presented on Enterprise Design Patterns and Practices.  Afterwards I was asked to post my slide deck and notes; if there is interest, I'm thinking of doing a blog series on this.

Fundamental principles

 S.O.L.I.D. Principles

·         Single Responsibility – A class/method/tier should have only a single responsibility: it should do one thing and do it well, even if that thing is telling other things what to do.
·         Open/Closed – Software entities should be open for extension but closed for modification; favor composition over inheritance.
·         Liskov Substitution – Design by contract; you should never need to care what the concrete object really is.
·         Interface Segregation – Multiple client-specific interfaces are better than one general-purpose interface.
·         Dependency Inversion – Depend on abstractions, not concrete implementations.  Use a factory pattern, a DI container, lazy-loaded properties, etc.
These are the fundamental principles of OO development and apply equally to application architecture.

Be strongly typed and loosely coupled

By creating strongly typed interactions between objects you are enforcing consistent behavior that will help prevent difficult-to-find runtime errors.  Decoupling the objects in your application allows you to change the underlying object without changing the caller.
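As a minimal sketch of both ideas in C# (the INotificationSender interface and the classes here are illustrative, not from a real project), the caller depends only on a strongly typed abstraction, so the concrete implementation can be swapped without touching the caller:

// The caller depends on this contract, never on a concrete class.
public interface INotificationSender
{
    void Send(string recipient, string message);
}

public class EmailSender : INotificationSender
{
    public void Send(string recipient, string message) { /* send via SMTP */ }
}

public class SmsSender : INotificationSender
{
    public void Send(string recipient, string message) { /* send via SMS gateway */ }
}

public class OrderService
{
    private readonly INotificationSender _sender;

    // The concrete sender is supplied from outside (factory, DI container, etc.),
    // so OrderService never changes when the sender implementation does.
    public OrderService(INotificationSender sender)
    {
        _sender = sender;
    }

    public void CompleteOrder(string customerEmail)
    {
        _sender.Send(customerEmail, "Your order is complete.");
    }
}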

The 2 biggest problems you will face

Golden Hammer

Having go-to tools is great, but just because something can do a job doesn't mean it should be used for it.

Silver Bullets

Everything from here on down may possibly help, but it will not solve all of your problems; always looking for the next technology to solve all of your problems will only end in pain.  These are tools in a toolbox, to be used appropriately.  If used incorrectly they will bring pain, lots of pain.

Automation, Automation, Automation

If it can be reasonably automated, it should be.  Computers are really good at being consistent; humans are not.

Build server & source control

Nothing fundamentally improves your code more (Jenkins, TFS, TeamCity, Bamboo, Apache Continuum, etc.).  Nothing goes out that isn't in source control, EVER!

Application Deployment

Use deployment tools for pushing changes (MSDeploy, Octopus, etc.); copy-paste is not a recommended deployment methodology.

Database Deployment/Migration

Database changes are code changes too.  Use a migration framework (EF Migrations, FluentMigrator, etc.) or a database change management tool (Visual Studio Database Projects, Redgate tools, DB Deploy, ReadyRoll, etc.).

Testing

Automating your unit, behavior, integration, load, and UX tests allows you to find problems before your clients/customers do.

Instrumentation

Regardless of what you think might be going on, without instrumentation you're blind.

Logging

This is the first and easiest way to find out what is going on under the hood.  Don't write your own unless you really need to (log4net, NLog, Enterprise Library, SmartInspect, etc.).
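As a rough sketch of what this looks like with log4net (the PaymentProcessor class is a made-up example, and configuration is omitted):

using System;
using log4net;

public class PaymentProcessor
{
    // One static logger per class is the common log4net convention.
    private static readonly ILog Log = LogManager.GetLogger(typeof(PaymentProcessor));

    public void Process(decimal amount)
    {
        Log.InfoFormat("Processing payment of {0:C}", amount);
        try
        {
            // ... do the work ...
        }
        catch (Exception ex)
        {
            // Log the exception with its stack trace, then rethrow.
            Log.Error("Payment processing failed", ex);
            throw;
        }
    }
}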

Memory Profiler

While a logger can tell you what is going on in the code, a memory profiler (Redgate ANTS, JetBrains Memory Profiler, etc.) can tell you what is going on with your memory: what is being allocated, by what, and what is never being released.

Site monitoring and Analytics

Just because your application is live doesn't mean you can stop paying attention to it.  Knowing how many people are interacting with it, how they are interacting with it, and when tells you what has happened and what is currently happening, so you can anticipate what is going to happen.

Leveraging 3rd party tools

Focus on your core features; leverage 3rd party tools to deal with the grunt work.

Abstract your 3rd party includes

Using 3rd party tools is great and will save you from re-inventing the wheel, but to keep from being locked in, always abstract away the implementation when you can.  In the future the library may no longer be supported, may run into licensing problems, etc.; being able to quickly insert a replacement will save you in the long run.
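For example, rather than calling a logging library directly from your domain code, you might hide it behind your own interface (a sketch; this ILogger contract is your own, not the library's):

using System;

// Your own contract: domain code only ever sees this.
public interface ILogger
{
    void Info(string message);
    void Error(string message, Exception ex);
}

// Thin adapter around the 3rd party library (log4net here).
public class Log4NetLogger : ILogger
{
    private readonly log4net.ILog _log;

    public Log4NetLogger(Type type)
    {
        _log = log4net.LogManager.GetLogger(type);
    }

    public void Info(string message) { _log.Info(message); }
    public void Error(string message, Exception ex) { _log.Error(message, ex); }
}

If log4net ever becomes a problem, you write one new adapter for its replacement and the rest of the application never knows.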

Building for Extensibility

Separate your application into logical segments or tiers.   

n-Tier Design

The 3 major application tiers are (but you are not limited to these):
·         UI – For displaying or transmitting information (webpage, API endpoint, etc.)
·         Domain or Business – This is where the work and decisions are made
·         Data Access – Where the data comes from (DB call, Web Service call, file read, etc.)
A very important rule is that tiers can only see the tier next to them.  The UI should have no concept of where the data it's displaying comes from; all it cares about is that it made a request to the Domain and got data back.  This applies to the Data Access tier as well: all it knows or cares about is returning the data requested by the Domain.  This logical separation allows you to reuse and refactor your application cleanly without fear of breaking changes in an outside tier.
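Here is a minimal sketch of that rule in C# (the names are illustrative, and ProductDto is the simple data holder covered in the next section); the controller only sees the Domain, and the Domain only sees the Data Access contract:

using System.Web.Mvc;

// Data Access tier: only knows how to fetch data.
public interface IProductRepository
{
    ProductDto GetById(int id);
}

// Domain tier: makes the decisions, knows nothing about HTTP or SQL.
public class ProductService
{
    private readonly IProductRepository _repository;

    public ProductService(IProductRepository repository)
    {
        _repository = repository;
    }

    public ProductDto GetProduct(int id)
    {
        return _repository.GetById(id);
    }
}

// UI tier: only talks to the Domain; has no idea where the data came from.
public class ProductController : Controller
{
    private readonly ProductService _service;

    public ProductController(ProductService service)
    {
        _service = service;
    }

    public ActionResult Details(int id)
    {
        return View(_service.GetProduct(id));
    }
}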

DTO, POCO, and Model objects

With each tier isolated from the others, we are going to need a way to pass information back and forth; enter the DTO (Data Transfer Object).  Basically these are classes that hold information; they have little to no logic and know nothing about where they are, where they came from, or where they are going.
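A DTO really is as plain as it sounds; here is a sketch of the ProductDto used above:

// Nothing but data: no behavior, no knowledge of any tier.
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}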

Separate Update, Select and Work Logic

Domain classes can get big fast; keeping them logically separated into more singular functions is in keeping with the single responsibility principle and keeps them more manageable.
This can be applied to data access classes as well, even if it's as simple as having 2 interfaces on a data repository class, one for updates and one for selects (see the sketch below).  This might seem silly until you start working with publisher and subscriber databases.
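A sketch of the two-interface idea (names illustrative):

using System.Collections.Generic;

public interface IProductReader
{
    ProductDto GetById(int id);
    IEnumerable<ProductDto> Search(string term);
}

public interface IProductWriter
{
    void Save(ProductDto product);
}

// One class implements both today; with publisher/subscriber databases,
// the reader can later point at a subscriber and the writer at the publisher
// without any caller changing.
public class ProductRepository : IProductReader, IProductWriter
{
    public ProductDto GetById(int id) { /* select from the DB */ return null; }
    public IEnumerable<ProductDto> Search(string term) { /* select from the DB */ return null; }
    public void Save(ProductDto product) { /* insert/update in the DB */ }
}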

Do not over abstract

While using abstractions to separate and decouple your logic is good, when overused or abused these good intentions quickly pave a road to abstraction hell, where adding a single property to your result set for display requires modifying 5 DTO objects and 4 file mappers.  Yes, I know of an application that does this.

 

Building for Scalability

Dealing with Load

Caching   

·         Micro-caching – for lots of requests for the same data that still needs to be relatively real-time (5–10 seconds)
·         Memory caching – for things that don't change that often (1+ hours): user information, product data, geo data (city, state, latitude, longitude, etc.); see the sketch after this list
·         DB caching – for sharing your cache between servers
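As a minimal memory caching sketch using System.Runtime.Caching.MemoryCache from the .NET Framework (UserDto and LoadUserFromDb are placeholders):

using System;
using System.Runtime.Caching;

public class UserDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class UserCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public UserDto GetUser(int userId)
    {
        string key = "user:" + userId;
        var cached = Cache.Get(key) as UserDto;
        if (cached != null)
            return cached;

        // Cache miss: load from the database and keep it for an hour.
        var user = LoadUserFromDb(userId);
        Cache.Set(key, user, DateTimeOffset.Now.AddHours(1));
        return user;
    }

    private UserDto LoadUserFromDb(int userId) { /* DB call */ return new UserDto { Id = userId }; }
}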

Spreading the Load

With your logic separated and decoupled, it's easy to farm your pain points out to other hardware; think scaling out instead of scaling up.
·         Move expensive functionality (encryption, PDF/image creation, etc.) to a dedicated system or systems to keep load off your main servers.
·         Use publisher and subscriber databases to reduce the load on your primary database by moving selects from the publisher to the subscribers.  This is also a good practice for reporting servers: running a massive metrics report for the sales team should not affect production performance.
·         Aggregate and pre-process data; just because you saved the data in a specific DB schema doesn't mean you have to keep it that way.  A job that copies data into a flatter table for faster searching can greatly improve performance, even more so when aggregating from a SQL server (MySQL, MSSQL, Oracle) to NoSQL (CouchDB, MongoDB, etc.).

Queuing up

If you don't require an immediate synchronous response, message queues are a great way to keep your system fast, prevent data loss (messages persist until the client acknowledges receipt), and distribute workload.  Using a service bus takes queues to the next level by creating an architecture for distributing workload.
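Most queueing products (MSMQ, RabbitMQ, etc.) boil down to something like this hypothetical interface, which is itself worth abstracting behind for the lock-in reasons above (a sketch; IMessageQueue is not a real library type):

using System;

// Hypothetical abstraction over whatever queueing product you pick.
public interface IMessageQueue
{
    void Publish(string queueName, string messageBody);

    // The handler returns true when the message was processed, which
    // acknowledges it; unacknowledged messages stay on the queue.
    void Subscribe(string queueName, Func<string, bool> handler);
}

// The web tier just drops work on the queue and returns immediately,
// while a dedicated worker box does the expensive part.
public class PdfRequestPublisher
{
    private readonly IMessageQueue _queue;

    public PdfRequestPublisher(IMessageQueue queue) { _queue = queue; }

    public void RequestInvoicePdf(int orderId)
    {
        _queue.Publish("pdf-requests", orderId.ToString());
    }
}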

CDN (Content Delivery Network)

Keep your static files fast by having a stripped-down server that only serves static content like images, CSS, JavaScript, etc.  These servers can also be geo-located for even better performance.

Not everything needs to go to the DB or come from it

Databases are great for storing and pulling data, but they aren't the only solution.  Lots of data submitted by the user doesn't need to be persisted in the database.  There are better ways to localize your site than storing the strings in the database.  Customer-uploaded images are another area where keeping the metadata in a database is a good idea, but the image itself should probably go to disk (a CDN would be great for this).
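A sketch of the image case (the paths and the SaveMetadata call are placeholders): write the bytes to disk or a CDN, and keep only the pointer and metadata in the database:

using System;
using System.IO;

public class ImageStore
{
    private readonly string _rootPath;

    public ImageStore(string rootPath) { _rootPath = rootPath; }

    public void SaveCustomerImage(int customerId, string fileName, byte[] imageBytes)
    {
        // The file itself goes to disk (or a CDN)...
        string path = Path.Combine(_rootPath, customerId.ToString(), fileName);
        Directory.CreateDirectory(Path.GetDirectoryName(path));
        File.WriteAllBytes(path, imageBytes);

        // ...while the database only stores where to find it.
        SaveMetadata(customerId, fileName, path, DateTime.UtcNow);
    }

    private void SaveMetadata(int customerId, string fileName, string path, DateTime uploadedUtc)
    {
        /* insert row: customer id, file name, path, upload time */
    }
}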

Redundancy

Hardware fails, systems crash, and operating systems need updating; redundant systems keep you online.