Product Rearchitecting for new generation


A product company, after a successful run with many large corporate customers across India, naturally wished to sustain that success and grow with a next-generation product. A new generation of a product is usually attempted out of some dissatisfaction with the current one. SwanSpeed Consulting engages with clients at three levels of increasing involvement: Tell, Show and Do. We were called in to tell them what to do, and to show them how on a critical sub-system. We started with an overall assessment of the product and of development/management practices (glimpses of the actual report are in the appendix).

A significant section of the product (a large, crucial and complicated stored procedure) was replaced by a modern business-object-centred system, dramatically reducing the complexity of the code for that functionality, i.e. reducing technical debt. This enabled easy addition of new features, in one case without any additional coding: a small expansion of functionality on top of cleaner design and code. To be fair, the old part of the system had absorbed many changes made under the pressure of urgent client demands, which over time led to mounting technical debt (a classic progression of cause and effect in the software world) and, gradually, a lot of unsavoury side effects, the main one being a congealed swamp of code that slows the addition of new features to a snail's pace. Our intervention showed conclusively that the product could be carried into a new generation with improved scalability, better performance and lowered technical debt.

Telling the ‘what’ and ‘why’

The flagship product was a financial-entity processing application, in production for over a decade. The workflow functionality was quite uniform across client implementations, and the database structure was common. However, at the point we intervened it was not hosted as a multi-tenant, cloud-based SaaS offering. It was basically a three-layer architecture: a browser client (HTML with minimal JavaScript), .Net for channelling information from the browser to the back end, and the Oracle back end, which contained both the business logic and the persistence capability. This meant that the domain and its state were stored as data, but not easily visible to the running application. This makes little difference to users and product owners in the present. However, over time, as improvements and customisations were implemented, the morass of unstructured code and creaky architecture calcified into a formidable obstacle. This is technical debt, whose presence is revealed in the rapidly increasing difficulty of making changes to the software. It was evident that an entity-based model of the domain, implemented in the middle layer, was just what the doctor ordered. This makes the application far easier to write automated tests for, and far easier to modify without unintended side effects. It would also mean a thinning of the stored procedures.
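To illustrate the kind of entity-based domain model meant here, a minimal sketch follows (in TypeScript for brevity; the actual middle layer was .NET, and all names and rules here are hypothetical). The point is that business rules live on the entity itself, where they can be unit-tested directly, instead of being buried in a stored procedure.

```typescript
// Hypothetical domain entity: business rules live on the object,
// not in a stored procedure, so they can be tested in isolation.
class CorporateAction {
  constructor(
    public readonly isin: string,        // instrument identifier
    public readonly recordDate: Date,
    public readonly payoutPerShare: number,
  ) {}

  // A business rule expressed in code, visible to the application.
  isPayable(onDate: Date): boolean {
    return onDate >= this.recordDate && this.payoutPerShare > 0;
  }

  // A derived value that would previously be computed deep inside SQL.
  payoutFor(holding: number): number {
    if (holding < 0) throw new Error("holding cannot be negative");
    return holding * this.payoutPerShare;
  }
}
```

A test can now exercise `isPayable` or `payoutFor` without any database round trip, which is the source of the testability gain claimed above.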

Modern systems perform significant validation of data input at the front end, which is more effectively implemented in JavaScript, the current preference being an Angular SPA. The need for cross-browser compatibility also recommended Angular.
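A framework-agnostic sketch of such front-end validation (in an Angular SPA this would typically be wired up as form validators; the field names and rules here are purely illustrative):

```typescript
// Hypothetical input-validation rules, run in the browser before submission.
interface TradeInput {
  isin: string;
  quantity: number;
}

function validateTradeInput(input: TradeInput): string[] {
  const errors: string[] = [];
  // ISIN format: two letters, nine alphanumerics, one check digit.
  if (!/^[A-Z]{2}[A-Z0-9]{9}[0-9]$/.test(input.isin)) {
    errors.push("isin: not a valid ISIN format");
  }
  if (!Number.isInteger(input.quantity) || input.quantity <= 0) {
    errors.push("quantity: must be a positive whole number");
  }
  return errors; // an empty array means the input may be submitted
}
```

Catching such errors in the browser gives immediate feedback and keeps bad data from ever reaching the back end.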

Showing ‘how-to’

After discussions, it was decided that we would take up a couple of the biggest and most crucial stored procedures in the system, delivered to the client at production grade.

This was a vertical slice of re-architected functionality, from business logic to persistence. A secondary goal was to demonstrate quality code and good design. We also had to define, in code, the full entity model, so that other parts of the system to be transitioned in the near future would have the base ready. A large part of this entity model was used in the implementation of the said functionality. We used .Net and a Redis in-memory database, with JSON for exchanging data. Performance improved through easy object creation and processing, with almost all of the data held in memory.
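The JSON-over-in-memory-store pattern can be sketched as follows (a plain `Map` stands in for Redis here, and the entity shape is hypothetical; in production the same pattern uses a Redis client with string keys and JSON-serialised values):

```typescript
// A plain Map stands in for Redis purely for illustration.
const store = new Map<string, string>();

interface Instrument {
  isin: string;
  name: string;
  lotSize: number;
}

function putInstrument(inst: Instrument): void {
  // Entities are exchanged as JSON, keyed by identifier.
  store.set(`instrument:${inst.isin}`, JSON.stringify(inst));
}

function getInstrument(isin: string): Instrument | undefined {
  const raw = store.get(`instrument:${isin}`);
  return raw === undefined ? undefined : (JSON.parse(raw) as Instrument);
}
```

Holding hot data in memory this way trades a little serialisation cost for the elimination of per-request database round trips.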

This exercise also included significant auxiliary documentation, automated tests and code quality reports (SonarQube) to meet a Definition of ‘Done’:


  1. 80%+ coverage (unit and API tests, ranging between 600 and 900 tests)
  2. All current functional tests of SP pass with new architecture
  3. Technical Documentation and Class relations
  4. Inner functions / SPs also to be covered
  5. Scheduler adjustments to be included
  6. Coding standards are met (SCA and code reviewed)

The original sub-system was created over a few years; we developed the re-architected version in about three person-months, spread over six calendar months (one person, part-time). We finally delivered a master script to run 1080 API tests. This excluded a few other unit tests, with the effective coverage being 95%.

We started with a short workshop involving primarily Product Management, and secondarily the development and testing groups. We got an overview of the functionality through presentations, discussion and documentation, which gave us a fairly thorough knowledge of the relevant functionality. Next, a study of the current SP (Oracle stored procedure) code gave a different perspective on how the system works. This wasn't easy, as the code had many confusing constructs and convoluted flows of logic. However, after some struggle, we were able to discern a good portion of the functionality, and in turn had a few questions for product management. Finally, we took the existing total of 720 API test cases, consisting of various combinations of parameters mapping to different functional scenarios. While developing, we continually used these three perspectives to triangulate and create the re-architected sub-system. As the project progressed, scripts running 1080 test cases were written. In collaboration with product management, we identified a sub-set of 150+ functionally crucial scenarios; these formed part of our test suite, and all of them passed. We wrote a utility for automating the comparison of results from the current and new sub-systems. It turned out there were a couple of residual functional bugs in the current system, and a handful of bugs around the handling of nulls. The final delivery was a functionally improved, re-architected new-generation sub-system. In a sense this is an audited sub-system, the result of triangulation across code, functionality and testing.
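The core of such a comparison utility might look like this (a sketch with a hypothetical row shape; the real utility compared SP output against the new sub-system's output, and null handling was exactly the kind of discrepancy it surfaced):

```typescript
// Compare old and new sub-system outputs field by field, reporting every
// difference. A field that is null in one output but absent in the other
// is reported explicitly, since null handling was a known trouble spot.
type Row = Record<string, string | number | null>;

function diffRows(oldRow: Row, newRow: Row): string[] {
  const diffs: string[] = [];
  const keys = new Set([...Object.keys(oldRow), ...Object.keys(newRow)]);
  for (const key of keys) {
    const a = key in oldRow ? oldRow[key] : "<absent>";
    const b = key in newRow ? newRow[key] : "<absent>";
    if (a !== b) {
      diffs.push(`${key}: old=${String(a)} new=${String(b)}`);
    }
  }
  return diffs; // empty means the two systems agree on this row
}
```

Run over every test scenario, an empty diff list for all rows is what certifies functional equivalence of the two implementations.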

We never had more than a couple of bugs outstanding at any point, usually fixed within 36 hours. A bug-tracking tool and its overhead were therefore simply absent; we had no use for Jira or the like. A suitable CI setup within the SwanSpeed environment, running over 1000 tests, meant that any fix or change could be regression-tested in a jiffy; a jiffy being approximately 12 minutes. Since we were not expanding this sub-system in the near future, we didn't attempt to lower the build-plus-test time further.

A final demonstrator at delivery was a ‘small’ change: a new stock exchange and its relevant information were to be handled in the new system. This needed just a configuration change, followed by some testing effort (a dozen automated tests), and all went smooth as silk; we had it ready for production without a hitch.
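The "configuration change, no new code" claim rests on exchange behaviour being data-driven. A sketch (all names, exchanges and values here are illustrative, not the product's actual configuration):

```typescript
// Exchange behaviour driven by configuration: supporting a new exchange
// is a data change, not a code change.
interface ExchangeConfig {
  code: string;
  settlementDays: number;  // T+n settlement
  currency: string;
}

const exchanges: Map<string, ExchangeConfig> = new Map([
  ["NSE", { code: "NSE", settlementDays: 2, currency: "INR" }],
  ["BSE", { code: "BSE", settlementDays: 2, currency: "INR" }],
]);

function settlementDate(exchangeCode: string, tradeDate: Date): Date {
  const cfg = exchanges.get(exchangeCode);
  if (!cfg) throw new Error(`unknown exchange: ${exchangeCode}`);
  const d = new Date(tradeDate);
  d.setDate(d.getDate() + cfg.settlementDays);
  return d;
}

// "Adding" a new exchange: purely a configuration entry.
exchanges.set("XYZ", { code: "XYZ", settlementDays: 1, currency: "USD" });
```

Once the entry is added, every code path that looks up exchange configuration handles the new exchange automatically; only tests, not code, remain to be written.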


Some actual snapshots of the Engagement

R1: Glimpse of report

Fig A: Recommended and largely implemented Architecture

Fig B: Code Duplication Picture (If it were bad one would see Yellow/Red)

Fig C: Code Quality report snapshot (65% coverage for total code, the business logic forms under 70% of total code)

Ruminations: Would the company have survived without such an initiative? Of course, but for how long? And what of overseas expansion? Glossing over the innovator’s dilemma, market conditions clearly favoured extending the product’s life.