A common C++ “interview question” is to compute the Fibonacci sequence using templates, an exercise in template metaprogramming (TMP). The implementation is quite simple:
#include <iostream>

template <unsigned int n>
struct Calculator
{
    static const unsigned int value = Calculator<n - 1>::value + Calculator<n - 2>::value;
};

// Full specializations serve as the recursion's base cases.
template <>
struct Calculator<1>
{
    static const unsigned int value = 1;
};

template <>
struct Calculator<2>
{
    static const unsigned int value = 1;
};

// A variable template (C++14) for convenient access.
template <unsigned int n>
const unsigned int fibonacci = Calculator<n>::value;

int main(void)
{
    std::cout << fibonacci<5> << std::endl;
}
The core idea lies in C++’s unique template argument deduction and matching mechanism, which essentially provides a form of branching. The compiler will prioritize matching the “more suitable” specialization, so we specialize the template for n = 1 and n = 2 to serve as the recursion’s base cases.
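Because fibonacci<n> is an integral constant, it can be used anywhere a constant expression is required. A quick sketch, reusing the definitions above:

// fibonacci<n> is a compile-time constant, usable in constant-expression contexts:
static_assert(fibonacci<10> == 55, "computed entirely at compile-time");
int buffer[fibonacci<6>]; // an array of size 8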
A similar concept is applied in template partial specialization:
#include <iostream>

template <typename T>
struct is_pointer
{
    static const bool value = false;
};

template <typename T>
struct is_pointer<T *> // Partial specialization!
{
    static const bool value = true;
};

// decltype(nullptr) (std::nullptr_t) is not matched by T*, so it gets its
// own full specialization. (Note: the standard std::is_pointer deliberately
// reports false for nullptr_t.)
template <>
struct is_pointer<decltype(nullptr)>
{
    static const bool value = true;
};

template <typename T>
const bool is_pointer_v = is_pointer<T>::value;

int main(void)
{
    std::cout << is_pointer_v<decltype(nullptr)> << std::endl;
    std::cout << is_pointer_v<decltype(2)> << std::endl;
    std::cout << is_pointer_v<decltype(&std::cout)> << std::endl;
}
For &std::cout, the compiler matches the more suitable partial specialization is_pointer<T *>, and thus value is true.
Of course, “more suitable” is just a convenient way to put it; in reality, the C++ standard has strict rules (the partial ordering of specializations) governing this process.
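As a quick illustration of that ordering, here is a sketch reusing the is_pointer trait defined above:

static_assert(is_pointer_v<int *>, "matches the T* partial specialization with T = int");
static_assert(is_pointer_v<int **>, "also matches T*, with T = int*");
static_assert(!is_pointer_v<int>, "no specialization matches; the primary template wins");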
It is worth noting that a well-known law in software engineering states: complexity doesn’t disappear; it only gets shifted. This kind of calculator, through template instantiation, effectively shifts the cost of a runtime calculation to compile-time. Most of the time this is a worthwhile trade-off. When we analyze production code, we usually think in terms of asymptotic complexity, since optimization is chiefly concerned with growth rates (although user experience adds a caveat: in many scenarios, constant-factor optimizations are also highly valuable).
However, the advent of modern C++, starting with C++11, provides a better option for this kind of compile-time calculation: constexpr.
What we were doing is essentially implementing a function like this:
int fibonacci(int n)
{
    if (n == 1 || n == 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}
The real bottleneck in the past was that the compiler had no way of knowing that this calculation could be moved to compile-time, so we were forced to use templates to compel it to do the math. A compiler’s true power lies in the information it maintains, type information above all. The essence of the compilation process is to complete a constructive proof, and the boundary of optimization is the boundary of the information the compiler possesses. Therefore, a more natural solution is to provide the compiler with this information and let it perform the compile-time calculation. This is what constexpr is for.
It suggests to the compiler that the calculation be performed at compile-time:
#include <iostream>

// With a constexpr function, calculations can be done at compile-time!
constexpr int fibonacci(int n)
{
    // Since C++14, constexpr functions can contain much more logic,
    // making them look like regular functions!
    if (n == 1 || n == 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

int main(int argc, char *argv[])
{
    // fibonacci(10) is a constant expression, so the compiler can evaluate
    // it at compile-time (and optimizing builds typically do):
    std::cout << fibonacci(10) << std::endl; // effectively `std::cout << 55 << std::endl;`

    // In contexts that *require* a constant expression, such as an array
    // bound, compile-time evaluation is guaranteed:
    int arr[fibonacci(6)] = {}; // defines an array of size 8
    std::cout << "Array size: " << sizeof(arr) / sizeof(int) << std::endl;
}
The problem, however, lies in this “suggestion.” Just as C++ templates have a recursion depth limit, if a constexpr computation is too deep, or if the call does not appear in a context that requires a constant expression, the compiler may fall back to runtime evaluation. To eliminate the “suggestion” and enforce compile-time evaluation, you can use the consteval keyword (C++20). Both are ways of providing information to the compiler to optimize the code.
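A minimal sketch of the difference, assuming a C++20 compiler (the helper name fib is just for illustration):

#include <iostream>

// consteval marks an "immediate function": every call MUST be evaluated
// at compile-time, or the program is ill-formed.
consteval int fib(int n)
{
    return (n == 1 || n == 2) ? 1 : fib(n - 1) + fib(n - 2);
}

int main()
{
    std::cout << fib(10) << std::endl; // OK: evaluated at compile-time (55)

    int n = 10;
    // std::cout << fib(n) << std::endl; // error: n is not a constant expression
}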
As an important aside on the complexity of these Fibonacci calculations: in the template metaprogramming example, each Calculator<n> is a distinct type. The compiler, which maintains type information, caches every instantiation, thus preventing the re-computation of subproblems. Consequently, the algorithmic complexity is not O(2^n) but rather O(n), as we only need to compute each case from 1 to n once, and looking up an already-instantiated type is a constant-time operation in a typical RAM model. In the later constexpr example, modern mainstream compiler implementations effectively perform memoization to avoid redundant work when evaluating a constant expression in an optimized build. Therefore, in practice, the compile-time cost of a recursive call like fibonacci(10) is also closer to linear complexity, O(n), rather than exponential.
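If you would rather not rely on compiler behavior at all, an iterative constexpr version (C++14 and later) is linear by construction; a minimal sketch (the helper name fibonacci_iter is just for illustration):

// Iterative constexpr Fibonacci: O(n) by construction, with no reliance
// on the compiler memoizing recursive calls.
constexpr unsigned int fibonacci_iter(unsigned int n)
{
    unsigned int a = 1, b = 1;
    for (unsigned int i = 2; i < n; ++i)
    {
        unsigned int next = a + b;
        a = b;
        b = next;
    }
    return b;
}

static_assert(fibonacci_iter(10) == 55, "evaluated at compile-time");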
Of course, C++17 introduced another powerful feature for constexpr: if constexpr, which evaluates its condition at compile-time and discards the unused branch entirely.
#include <iostream>
#include <type_traits>

template <typename T>
void print_info(T value)
{
    if constexpr (std::is_pointer_v<T>)
    {
        // If T is not a pointer, this entire block, including the curly braces,
        // is discarded and never instantiated, as if it never existed.
        // That's why, even when T is int, the ill-formed expression *value
        // doesn't cause a compilation failure!
        std::cout << "It's a pointer to: " << *value << std::endl;
    }
    else
    {
        // Conversely, if T is a pointer, this block is discarded.
        std::cout << "It's a value: " << value << std::endl;
    }
}
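A short usage sketch:

int main()
{
    int n = 42;
    print_info(n);  // instantiates only the "value" branch: It's a value: 42
    print_info(&n); // instantiates only the pointer branch: It's a pointer to: 42
}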
Overall, this demonstrates the trend towards more compile-time computation in C++.
Of course, if you insist on using the notoriously hard-to-read template metaprogramming for calculations, or if you simply want to experiment with some clever tricks, constexpr still has a great role to play: creating compile-time objects. This is a huge improvement in both functionality and readability:
#include <iostream>
#include <iterator> // for std::size

struct Point
{
    double x, y;

    // Step 1 of the magic: a constexpr constructor!
    // C++11 required the constructor body to be empty, allowing only an initializer list.
    // C++14 and later relaxed this, allowing more complex logic in the body.
    constexpr Point(double x_val, double y_val) : x(x_val), y(y_val) {}

    // Step 2: Member functions you want to call at compile-time must also be constexpr!
    constexpr double length_sq() const
    {
        return x * x + y * y;
    }

    // Step 3 (implicit): the destructor must be trivial (constexpr destructors
    // only arrived in C++20). The default destructor is trivial, so we don't
    // need to write it out.
    // ~Point() = default;
};

int main()
{
    // Witness the magic! We are creating Point objects at compile-time!
    constexpr Point p1(3.0, 4.0);
    constexpr Point p2(5.0, 12.0);

    // Call a member function at compile-time to compute another constant.
    constexpr double len_sq = p1.length_sq(); // the compiler calculates this as 25.0

    // Use this compile-time result to define an array.
    int my_array[static_cast<int>(len_sq)]; // creates an array of size 25!
    std::cout << "Array size: " << std::size(my_array) << std::endl;

    // You can even perform operations on these objects at compile-time.
    constexpr Point p3(p1.x + p2.x, p1.y + p2.y); // p3 is (8.0, 16.0)
    std::cout << "p3.x = " << p3.x << std::endl; // after compilation, this might become `std::cout << "p3.x = " << 8.0 << std::endl;`
}
In conclusion, I want to point out that the history of compiler optimization is a story of two complementary forces: improving the optimization algorithms themselves on one hand, and giving the compiler more information to push back the boundary of its semantic constraints on the other. From a modern C++ perspective, we should return to the true essence of templates: abstracting over types and processes for policy-driven design. Using templates for compile-time Fibonacci calculation was more of a necessary workaround before C++11, and recursive template instantiation is a source of the infamous “template black magic” that makes code hard to read.