Where Exactly Is the Boundary of Runtime Computation?
What, exactly, must a computation satisfy in order to count as a runtime computation?
We have long been able to regard a program as a mapping, composed of a sequence of ordered operators. If a computation must occur at runtime, then its operands must be obtained by initiating IO. Everything else, strictly speaking, can be completed at compile time; what remains is a question of engineering trade-offs.
Since C++98, the template system has been Turing-complete, and this opened the path for compile-time computation. The later constexpr (C++11) and consteval (C++20) are, more than anything else, engineering optimizations along this trajectory. After all, eliminating constant factors is often a deeply tempting proposition.
Obviously, the operands of operators are not necessarily used for mathematical computation. If we examine the substance of memory management, it is in fact the maintenance of allocation information through specific data structures. The practical meaning of a memory operation is simply to modify that ledger.
This is where things become interesting. We all know that if a data structure needs to support convenient merging and modification, it is usually implemented with pointers, so as to avoid the cost of frequent copying. Moreover, in modern C++, cursor-based implementations are still, in essence, pointer-like implementations. In other words, hard-coding memory management into algorithmic logic is a pseudo-requirement; it should be handed uniformly to allocator policies through template metaprogramming. If courses on algorithms and data structures still insist on teaching the three implementations of linear lists (array-based, pointer-based, and cursor-based) as distinct designs, then in my view, this is somewhat too pre-modern.
Once this premise is clear, we can re-examine the inherent problem of pointer-based implementations: it is really a memory-access performance bottleneck caused by non-contiguity. The more fundamental question is: why do we need heap allocation in the first place? Because some forms of bookkeeping genuinely require runtime information.
At this point, it should be clear what I am getting at. Hash-table-driven state machines, prefix trees for RESTful-style API routing, and so on: if the registered services do not need hot updates, then compile-time computation can be used to allocate fixed contiguous memory while preserving pointer semantics, thereby improving locality. The data can simply be loaded during initialization. Even if hot updates are required, as long as reserved space is used, or updates themselves are infrequent, this cost can still be pushed down substantially.
Furthermore, if policy-based template metaprogramming is used here to decouple allocation logic and growth logic into the allocator, the price you pay is merely a little compilation time. What you may save, however, is an enormous amount of maintenance cost for the entire team.