The Department of Waterways and Public Works (Rijkswaterstaat) in the Netherlands has the legal obligation to periodically assess the strength of the Dutch flood defenses. The assessment criteria are continuously improving, so a new set of criteria applies in each round. Initially, the probability of water overtopping the dikes was the main concern. These days, a much more refined approach is used to determine the risk of flooding: it covers all the mechanisms that could make a flood defense fail and also considers the severity of the damage.
As the criteria evolve, so does the software that computes the strength of the flood defenses. Rijkswaterstaat commissioned Deltares and other companies, such as HKV, to develop the Hydra-Ring software, which contains the latest and best methods for assessing the strength of flood defenses. A practical issue, however, was that the calculations would take too long to assess all the dikes in the Netherlands. Rijkswaterstaat therefore asked VORtech to speed up the most compute-intensive parts.
VORtech's HPC expert Koos Huijssen coordinated this work. Says Koos: “This is really complex software. We didn't know the software, so our first step was to assess where the performance bottlenecks were. We have developed a lot of tooling over the years that we have been doing these kinds of analyses. Our tools are much more effective for this kind of software than ordinary profilers.”
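That first step, locating the hotspots before changing any code, can be sketched with a standard profiler. The snippet below is a minimal Python illustration (Hydra-Ring itself is not Python, and `slow_inner` and `assessment` are made-up names): a deliberately quadratic routine dominates the cumulative-time report, which is exactly the kind of signal a performance assessment looks for.

```python
import cProfile
import io
import pstats

def slow_inner(n):
    # Deliberately quadratic work: the kind of hotspot a profile exposes.
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

def assessment(n):
    # A cheap part plus an expensive part, mimicking a mixed workload.
    return slow_inner(n) + sum(range(n))

profiler = cProfile.Profile()
profiler.enable()
assessment(300)
profiler.disable()

# Sort by cumulative time so the dominant routine rises to the top.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
```

In a real assessment the report would point at a handful of routines that account for nearly all of the runtime, and those become the candidates for optimization.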
This performance assessment revealed a lot of issues. Some were easy to solve. One of them was an inefficient choice for a central data structure. Koos Huijssen: “This is not too surprising. It's hard enough as it is to get the algorithms right. Understanding the consequences for performance is too difficult for most developers. That is just the kind of added value that we bring to the table.”
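The article doesn't say which data structure was involved, but a generic illustration of the kind of gain at stake is membership testing: a Python list scans its elements one by one, while a set finds an element by hashing in roughly constant time. The same computation, backed by a different structure, can be orders of magnitude faster.

```python
import timeit

items = list(range(100_000))
as_list = items           # linear scan on "in"
as_set = set(items)       # hash lookup on "in"

needle = 99_999           # worst case for the list: a full scan

# Time the identical membership test against both structures.
t_list = timeit.timeit(lambda: needle in as_list, number=200)
t_set = timeit.timeit(lambda: needle in as_set, number=200)
```

The result of the test is identical in both cases; only the cost differs, which is why this kind of fix is "easy" once someone has spotted it.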
A couple of relatively easy changes already brought a significant speedup. For some parts, the speedup was more than a factor of 10. But Hydra-Ring consists of many separate parts that each take a long time to compute, so it is not sufficient to speed up only one or a few of them. The important parts were therefore each handled in turn.
That also involved more complex changes. In some cases, it was necessary to store intermediate results of computations to avoid recomputing them every time. But, according to Koos, in several cases the changes were on a different level altogether. “Sometimes, you can save yourself a lot of computing by using a different algorithm. That doesn't have to change the result; it's only the way it is computed that is different.”