Monday, August 2, 2010

Muffin Top architecture is killing us

I want to share a story. I recently encountered a company that had somehow navigated the many hazards facing technology startups and managed to create a solid, successful business. They had gone public, and now, after years of struggling with a collection of applications consisting largely of simple markup and script pages that ran database queries directly, they were transitioning everything over to a real architecture, a single unified “Framework”.

The “Framework” was a thing of architectural beauty, object-oriented design as far as the eye could see. It had been designed by some incredibly bright developers. So bright, in fact, that they had progressed beyond the point where they were even involved in writing code for a shipping application. Some were so smart that they skipped the mundane part of a programmer’s career, the years spent writing and maintaining application code, and moved directly into the architecture world.

The core of the “Framework” was the data access module which was based on LINQ to SQL, but hid all of the entities as well as the data context behind interfaces. These interfaces would of course need to be updated whenever changes were made to the LINQ data model, but they made the “Framework” feel a lot more object oriented and provided the additional benefit of hiding any functionality that the architecture group didn’t think regular application developers should have access to. 
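To make the pattern concrete, here is a minimal sketch of what that wrapping looks like. All the names are invented for illustration, not taken from the actual “Framework”; the point is the hand-written interface that must be kept in sync with the generated entity by hand.

```csharp
using System;

// Hypothetical sketch of the interface-wrapping pattern described above.
// This interface mirrors the generated LINQ to SQL entity, member for
// member; every change to the .dbml model must be repeated here by hand.
public interface ICustomer
{
    int Id { get; set; }
    string Name { get; set; }
}

// LINQ to SQL generates entities as partial classes, so a second
// partial declaration bolts the hand-written interface on.
public partial class Customer : ICustomer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The data context is hidden the same way, exposing only the members
// the architecture team wants application developers to see.
public interface ICustomerRepository
{
    ICustomer GetById(int id);
}

public static class Program
{
    public static void Main()
    {
        // Application code sees only the interface, never the entity.
        ICustomer c = new Customer { Id = 1, Name = "Ada" };
        Console.WriteLine(c.Name);
    }
}
```

Note that the extra layer buys nothing here: the interface has exactly the same shape as the entity it hides, so it adds a maintenance cost without adding an abstraction.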

The LINQ data model also benefited from some extra architectural love. Mapping directly to tables was out of the question, so every LINQ entity mapped to a collection of sprocs. After all, the idea was to create an entity model, not just map directly to table data. Every entity was carefully planned and modeled; every field the application developers might need was anticipated and aggregated using those sprocs. This did create yet another point (like the interfaces) at which data-mapping logic could go wrong and throw errors, but that wasn’t a problem, because the architecture team was able to use Visual Studio to generate a massive collection of database unit tests that would exhaustively test every possible CRUD scenario for every sproc and assert that it worked as expected. Of course, the massive collection of db unit tests, and the sprocs themselves, would need to be rewritten every time even the smallest change was made to the data model, but architecturally speaking it was just the right thing to do.

Finally the day came when the “Framework” was pushed out to the unwashed masses of developers who would benefit from its glorious design. But an unexpected thing happened: new application development done on top of the “Framework” slowed to a crawl. It was taking three to five times longer for application developers to build code with the “Framework”. To make matters worse, the code that was produced ran slower, sometimes much, much slower, than the old legacy code that accessed the database directly. The application developers also kept coming back to the architecture team requesting that additional fields be added to the entities. They said these fields were required to provide features that were already available in the legacy applications. It was as if these guys had never been part of the endless modeling meetings where everyone worked out in advance every field each entity would ever need. The problems just kept piling up, and application ship dates slipped further and further. Eventually the architects had to face up to an unpleasant conclusion… the company’s application developers just weren’t very good. I’m not kidding, that really was their conclusion. I actually heard it straight from the mouth of one of the architects. (BTW, in case you’re wondering, the application developers were actually some of the most skilled I’ve ever run into. They were very good developers.)

The point of this story is that the problems we face as programmers in the .Net world are rapidly changing. It used to be that the most common problem I ran into was horribly constructed spaghetti code that was then copied and pasted into 20 different pages of the application. No code reuse, poor logic structure, and of course direct access to the database in the code-behind or even in the markup pages. The problems were largely the result of us as .Net developers just not being skilled enough to properly design our code. But .Net is a developer-friendly technology, and most of us were able to hack together code that worked even if it wasn’t pretty. Note that I did say “us”. Some of my early code is horrific.

Now, increasingly, I see a much different problem. As a community we’ve learned a lot more about real software development. We understand all of those architectural things our Java buddies used to talk about. We’re using design patterns, object-oriented design, automated testing, ORMs, true layered architectures, dependency injection, component architectures, class inheritance all over the place, interfaces, contract-based programming, service-oriented architecture, and a bunch of other stuff that sounds like it should enable us to ship more reliable code, and to ship it faster. The problem is that, in my experience, most of the time it doesn’t. Don’t get me wrong: I’m not saying that the practices listed above are bad, or that we should go back to writing simple markup pages with SQL queries embedded directly in our scripts. What I am saying is that in our eagerness to flex our new-found coding muscles, we’re throwing these things into our applications because we think they’re a “best practice”, and the result is a lot of over-architected, impossible-to-maintain, fat, sprawling code. I get the sense that many very intelligent .Net developers are adding this stuff because it’s what a real architecture looks like, not because they’re getting some measurable benefit. At the end of the day it’s all about shipping code, not creating an architecture that makes us feel like real architects.

A smart guy I met at the unspecified company mentioned earlier put a name to it. He called it Muffin Top Architecture. I didn’t know what “muffin top” meant until he explained it to me. If you’re confused, do a search on Google Images for muffin top and you’ll figure it out. Hint: it doesn’t refer to the images of real muffins. Anyway, I’m starting to see quite a bit of Muffin Top code. The company in my story is by no means atypical; I could go on for pages and pages. I’ve even encountered companies that are unable to make changes to code they themselves wrote. There are just so many layers of architecture and class inheritance that changes are incredibly painful, even for the original developers.

I don’t have an easy answer, but I do think we need to start producing code that is simpler, easier to maintain, and quicker to produce. .Net is a great stack and it isn’t difficult to work with, but we’re developing a reputation of “.Net is a solid platform, HOWEVER it takes so long to build on that it’s not appropriate for many projects, and definitely not for a startup project”. I don’t think that’s true. The problem isn’t .Net; the problem is that we’re over-architecting, and in many cases Microsoft is more than happy to help us further down the path to destruction with new persistence frameworks, designers, and component architectures. I think we should be looking more and more at technologies like Ruby on Rails and trying to figure out how they achieve the remarkable efficiencies that are possible on that platform. It’s certainly not through a designer. I also think the answer lies in less code that is simpler and easier to understand, not piles of generated code and component architectures that make it impossible to figure out what code is executing when.

Another big part of the problem is the emergence of the non-coding architect. This is the person who doesn’t write any shipping code themselves; they just tell everyone else how they should be writing theirs. I think the role of architect is tremendously important. I see the architect as the lead developer, the mentor, the overall system designer. But let me make this clear: an architect who doesn’t write code isn’t a real programmer; they are an obstacle to real programmers. It doesn’t matter how skilled that person is; if the architect isn’t writing shipping code, then their incentives are not aligned with those of the other developers. The real programmer’s number one priority in life is to ship working code. The non-coding architect’s priority is to create beautiful architecture.

If you have any thoughts please don’t hesitate to post them.  Personally, I think that Muffin Top Architecture is killing us.  Is there a solution or am I just destined to go be a Rails developer like Mike Moore, Jeff Cohen, and Scott Bellware?  Check out Herding Code 84 for some interesting discussion by the .Net expatriates.

11 comments:

  1. I completely agree with your sentiments.

    However, it's imperative to ask why most technology start-ups end up in the wonderland of no return.

    My few cents below:
    1) As a start-up entrepreneur, one can't hire an architect and expect him to be productive from day one. I don't believe architects exist as a fixed role; rather, people are "situational architects". As a start-up entrepreneur, one needs to know what goes into the code.

    2) Start-ups are capital-starved (even when adequately funded) and always look for more with less. That's why one person so often ends up doing everything: analyst, architect, developer, infrastructure setup, testing, coordination... a COWBOY shop.

    3) "You can outsource your work, not your worries." Most of the time a start-up hires architects to pass on its worries about technology it doesn't understand, and in reality ends up burning its own fingers. The YouTube founders were close-knit friends and developers; they knew precisely what went into the code.

    4) Agreed that in the MS technology stack there isn't one single, uniform way of doing things. I remember working with DAO, RDO, ADO, the ODBC API, ADO.NET, LINQ, EF... none of them anything more than ways to access the database :-). Why so many ways? Why not the one or two standard ways the Java guys have? That's how code becomes alien from one developer to another.

    I genuinely feel your post could become the tipping point that gets us to scratch the surface of the ISSUES we as a technical community face.

  2. In an effort to reduce development time (in the beginning), many projects use LINQ to SQL, NHibernate, and the list goes on...

    I'm not sure why this is the case. Why should the data access layer of an application be the main focal point of the technology used? Software needs to solve a business need or requirement. Instead of focusing on the business solution at hand, too much time, energy, and resources (i.e. $$) are wasted on implementing the endless ORM, data-mapping, and entity-framework tools/solutions.

    I'm not saying not to use tools that make sense. Just don't use them as the starting point of the "entire" solution. Consider the life of the software, the end-users, scalability, number of users, etc...and then choose the right tools for the job.

    Man, there's nothing wrong with writing some ADO.NET using IDataReader(s) to fill simple entity objects (POCOs). There are also many tools, like T4 templates, that can generate this layer and even the stored procedures. But don't get sucked into the "tool" vacuum that prolongs the development cycle or turns into a technical bottleneck that impedes the business.
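    [Editor's note: the approach this commenter describes can be sketched in a few lines. The Customer POCO and column names here are hypothetical; an in-memory DataTable stands in for a real database connection so the snippet runs on its own, since DataTable.CreateDataReader() returns an IDataReader just as SqlCommand.ExecuteReader() would.]

    ```csharp
    using System;
    using System.Data;

    // Hypothetical plain entity object (POCO) -- no base class,
    // no interface, no framework.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class Program
    {
        // Maps the current row of any IDataReader to a Customer.
        static Customer Map(IDataReader reader)
        {
            return new Customer
            {
                Id = reader.GetInt32(reader.GetOrdinal("Id")),
                Name = reader.GetString(reader.GetOrdinal("Name"))
            };
        }

        public static void Main()
        {
            // In-memory stand-in for a query result; in production this
            // reader would come from SqlCommand.ExecuteReader().
            var table = new DataTable();
            table.Columns.Add("Id", typeof(int));
            table.Columns.Add("Name", typeof(string));
            table.Rows.Add(1, "Ada");

            using (IDataReader reader = table.CreateDataReader())
            {
                while (reader.Read())
                {
                    Customer c = Map(reader);
                    Console.WriteLine(c.Id + ": " + c.Name);
                }
            }
        }
    }
    ```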

  3. Thank you! Great article. I couldn't agree more.

  4. I call this problem 'software fashion': people read the blogs and feel they must add dependency injection or whatever to their solution, regardless of whether it's needed.
    I've worked as a C++, Java and .NET developer, and I saw early Java projects fail through over-engineering; the same thing is happening now with .NET.
    BTW, I see nothing wrong with LINQ to SQL in a layered architecture, and I love LINQ.

  5. As a contractor who bills by the hour, I find these over-engineered projects invaluable: they take three times as long, and they give me more buzzwords for my CV. More layers, more design patterns, more complex code.

  6. Relational databases have flexibility and extensibility at the core of their design. You can design a table...need a new column? No problem and nothing breaks that worked with the old design. Need a new view of data that combines multiple tables? No problem.

    RDBs thus work well in typical human environments, where even those who know everything about an activity can't remember all the details when asked... but will remember when first shown version 1: "Where's the dingbat code? You need a dingbat code, there, right below the whatzits!"

    OOP, on the other hand, is best in areas that are very well known and understood. Like GUI implementations, or when a packaged software company is ready to implement version 22 of their general ledger package after building the first 21 versions over a period of 20 years...

    But building a new, custom application? Stick with the RDB and a thin layer. You go ORM too early and you've just put a hard crust around what needs to stay fluid...

  7. Another gripe: wrapping every class with an interface... are you kidding me? Want to right-click in VS and go to the method definition? Great! Look, it's the interface... maybe I'll do a search for all classes that implement the interface... oh, there it is. That was useful.

  8. Good post!
    I particularly liked the architects' reaction!

    I recently worked with a Project Manager who claimed to be a former architect and wanted to make, and actually made, architecture decisions. That is even worse than a "non-coding" architect.

    I also agree that an architect must be part of the team. "Paper-based architects", as I call them (excluding Enterprise Architects, who work on very high-level integration and security requirements), should not influence a project, because they are disconnected from the reality made of all the implementation details we face when we start coding.

    Cheers

  9. I totally agree! And .NET can do it all, through thick and thin.
    I would like to add a similar trend that produces bad applications.
    I am on a project that was architected to run with InfoPath forms on SharePoint 2007 with workflows.
    The customer was sold on the ease of creating the POC application, and only then did we start getting the requirements.
    Long story short, InfoPath's creators never imagined this kind of application as a candidate.
    It is hell to maintain, and many times it just doesn't do what it would in a simple form.

    Solution architects need to think ahead and choose the right tools for their applications.

  10. @Anonymous - regarding the gripe about wrapping every class in an interface, you are absolutely correct. Clicking "Go to Definition" on an entity class and landing on an interface instead of the concrete class is incredibly annoying. I think coding to contracts/interfaces in a framework is a great practice (a good example is the new ASP.Net MVC framework), but wrapping everything, including entities, in an interface is an egregious example of muffin top architecture.

  11. This makes me think about the evolution of math. There must have been a point where the number of techniques exploded as people figured out more and more patterns. These patterns fostered expression of the same ideas but at a higher level. There were probably mathematicians who decried the sheer quantity of new techniques and preferred to stay with their tried-and-true methods. There were probably others who sought new math systems that weren't as complex or challenging. This struggle is nothing new; it is simply the growing pains of a very, very young practice. No one should be surprised.

    The frameworks in question exist for specific reasons. The most important and most overlooked aspect of learning a framework is *learning the problems it solves*. Without a solid understanding of the framework's why, nobody, architect or not, is in a position to understand the framework's how. If someone puts a framework into place without taking this crucial step, *then* they step into the land you describe. We aren't discussing the prevalence of frameworks here; we are talking about the personal responsibility of developers. Decision-making is the most valuable skill of an architect; a more worthwhile discussion would be the failure to utilize that skill on the part of individuals, not the failure of the world-at-large to stop giving us options.
