Segmenting the portfolio: a time bomb in the making

PACBASE migrations are complex projects. Since they often touch the core business of the organizations that undertake them, they are critical, and risk must be mitigated in every possible way.

In a previous post, I explained why, in my opinion, doing as little as possible is not necessarily the appropriate answer to mitigating risk. Leaving most of the complexity in the PACBASE-generated code can reduce the cost of testing, but it multiplies the total cost of ownership, because it does nothing to address the cost of maintaining highly unstructured code, as generated by PACBASE.

Another approach is to segment the portfolio. One then makes a distinction between programs that are actively maintained and those that have barely been touched for years. It is considered acceptable not to restructure the latter set, on the assumption that they won’t change in the future and won’t require extensive maintenance.

It is of course tempting: one concentrates on the hotspots of the system and reduces the cost of the migration project.
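In practice, this segmentation is typically driven by repository metadata, such as the date of last modification of each program. A minimal sketch of the idea, with entirely hypothetical program names, dates, and cutoff:

```python
from datetime import datetime

# Hypothetical inventory: program name -> date of last modification.
# Real projects would extract this from source control or library metadata.
inventory = {
    "GL0042": datetime(2009, 3, 15),
    "AR0017": datetime(2013, 11, 2),
    "IN0230": datetime(2006, 7, 30),
}

# Arbitrary threshold separating "actively maintained" from "dormant".
CUTOFF = datetime(2013, 1, 1)

hot = {name for name, last in inventory.items() if last >= CUTOFF}
cold = set(inventory) - hot

print(sorted(hot))   # programs selected for full restructuring
print(sorted(cold))  # programs left as generated
```

The whole approach stands or falls with the cutoff: everything in `cold` is bet on never needing serious maintenance again.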

It is also very dangerous, and plain wrong.

It is dangerous because it assumes that the parts that have not been maintained for some time won't be maintained in the future either. That is probably true for a sizable part of this segment, but it is excessively optimistic to assume that none of the programs that have not been modified recently ever will be. A live system is not made of an ever-decreasing core of maintained programs: a change in the functional or technical environment can force a complete overhaul of a set of perfectly stable components.

Assuming that large parts of a system won't require maintenance in the future is like planting a time bomb in your IT. It is a decision based on incomplete intelligence, with far-reaching consequences. Some future changes will be supported. Others won't. Tough.


It is wrong because it is based on incorrect assumptions about the restructuring process. That process is automated (or must be, see yet another previous post about this), and applying it to 10,000 programs is not intrinsically riskier than applying it to 1,000. An automated process is deterministic, and testing for its correctness should not cost more for larger portfolios. In other words, the savings, if any, are minimal. If the migration technology at hand is adequate, there should be little difference in testing and validation whether one deals with the complete system or only with the actively maintained subset. At the very least, the cost of testing should never be a linear function of the size of the portfolio.
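To illustrate why a deterministic process scales, here is a minimal sketch in Python. The rewrite rules are placeholders and bear no relation to any actual PACBASE restructuring technology; the point is only that each rule is validated once, and that validation carries over to every program the rule is applied to, whether there are 1,000 of them or 10,000.

```python
# A deterministic restructuring pass, sketched as an ordered list of
# textual rewrite rules. These rules are purely illustrative.
RULES = [
    ("NEXT SENTENCE", "CONTINUE"),
    ("GO TO EXIT-PARA", "EXIT PARAGRAPH"),
]

def restructure(source: str) -> str:
    """Apply every rule to the program text, in order."""
    for old, new in RULES:
        source = source.replace(old, new)
    return source

# Determinism: the same input always yields the same output, so testing
# effort tracks the number of rules, not the number of programs.
program = "IF X NEXT SENTENCE"
assert restructure(program) == restructure(program)
print(restructure(program))  # -> IF X CONTINUE
```

A real restructuring engine works on parse trees rather than text, but the argument is the same: correctness is a property of the rules, not of any individual program.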

But that is if the process is automated. And it is admittedly a big if.

As usual, comments are very welcome.

Have a great day !

About Darius Blasband

Darius Blasband, 48 years old, married, 3 children. He has a Master's Degree and a PhD in Computer Science from the Université Libre de Bruxelles. Darius runs RainCode, a company specialized in compiler design and legacy modernization.
