A Deep Dive into Compile-Time Fibonacci Calculation

A common C++ “interview question” asks us to compute the Fibonacci sequence with templates and the machinery of template metaprogramming. The implementation itself is not complicated:

#include <iostream>

template <unsigned int n>
struct Calculator
{
    static const unsigned int value = Calculator<n - 1>::value + Calculator<n - 2>::value;
};

template <>
struct Calculator<1>
{
    static const unsigned int value = 1;
};

template <>
struct Calculator<2>
{
    static const unsigned int value = 1;
};

// Variable template (C++14) as a convenient alias.
template <unsigned int n>
constexpr unsigned int fibonacci = Calculator<n>::value;

int main(void)
{
    std::cout << fibonacci<5> << std::endl;
}

The core lies in C++’s characteristic mechanism for template argument deduction and matching. In effect, that mechanism gives us a form of branching: the compiler prefers the “more suitable” match, so we specialize the cases where n is 1 and 2 as the recursion’s base cases.

The same idea appears quite naturally in template partial specialization:

#include <iostream>

template <typename T>
struct is_pointer
{
    static const bool value = false;
};

template <typename T>
struct is_pointer<T *>  // Partial specialization!
{
    static const bool value = true;
};

template <>
struct is_pointer<decltype(nullptr)>
{
    static const bool value = true;
};

// Variable template (C++14), in the style of std::is_pointer_v (C++17).
template <typename T>
constexpr bool is_pointer_v = is_pointer<T>::value;

int main(void)
{
    std::cout << is_pointer_v<decltype(nullptr)> << std::endl;
    std::cout << is_pointer_v<decltype(2)> << std::endl;
    std::cout << is_pointer_v<decltype(&std::cout)> << std::endl;
}

For &std::cout, whose type is std::ostream *, C++ matches the more suitable partial specialization T *, so value is true.

Of course, “more suitable” is only a convenient shorthand. At the level of the standard, C++ defines the deduction and matching process with strict and rather fine-grained rules.

It is worth remembering an almost cruel law of software engineering: complexity does not disappear; it moves. This calculator, through template instantiation, transfers the complexity that would have belonged to runtime computation into compile time. In most cases, that is a defensible exchange. When we examine production code, we tend to reason in terms of asymptotic complexity: optimization is primarily concerned with growth rates, although user experience adds a less forgiving supplement to the doctrine: in many situations, constant-time improvements also deserve serious attention.

Yet C++11, the point at which modern C++ begins in earnest, gives us a more direct tool for this kind of compile-time computation: constexpr.

What we were doing above was essentially a circuitous way to express the following function:

int fibonacci(int n)
{
    if (n == 1 || n == 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

The real bottleneck in the past was that the compiler did not know this computation ought to be moved into compile time, so we had to use templates to compel it to do the arithmetic. If we look at the compiler itself, its real strength lies in maintaining and exploiting type information. In a meaningful sense, compilation is the completion of a constructive proof, and the boundary of optimization is the boundary of the information available to the compiler. The more natural solution, then, is to provide that information explicitly and let the compiler perform the compile-time evaluation. That is where constexpr enters.

It suggests to the compiler that the calculation be performed at compile time:

#include <iostream>

// With a constexpr function, evaluation can happen at compile time.
constexpr int fibonacci(int n)
{
    // Since C++14, constexpr function bodies may contain richer control logic,
    // making them structurally close to ordinary functions.
    if (n == 1 || n == 2)
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2);
}

int main(int argc, char *argv[])
{
    // fibonacci(10) is a constant expression, so the compiler can evaluate it at compile time.
    std::cout << fibonacci(10) << std::endl; // The compiler may fold this into `std::cout << 55 << std::endl;`

    // It can also be used to define array sizes.
    int arr[fibonacci(6)] = {}; // Defines an array of size 8
    std::cout << "Array size: " << sizeof(arr) / sizeof(int) << std::endl;
}

The problem, however, lies precisely in this “suggestion.” Just as C++ templates have recursion-depth limits, a constexpr computation that is too deep, or a context in which the compiler cannot establish that the expression is constant, may still fall back to runtime evaluation. To remove that advisory character and require compile-time evaluation, C++20 provides consteval, which declares an immediate function: every call must be a constant expression, or the program fails to compile. Both facilities are ways of giving the compiler information and thereby expanding the space in which optimization can operate.

It is worth isolating the complexity of the Fibonacci computation itself. In the template metaprogramming example, each Calculator<n> is a distinct type. Because the compiler maintains type information, it caches those instantiations and does not recompute identical substructures. The complexity is therefore not O(2^n) but O(n): each case from 1 through n is instantiated only once, and looking up an already-instantiated specialization is effectively constant time. In the constexpr approach, mainstream implementations typically memoize constexpr calls during constant evaluation as well, so in practice the compile-time cost of a recursive constant evaluation such as fibonacci(10) is also closer to linear, O(n), than to an exponential explosion.

Of course, C++17 also gave constexpr something like a compile-time branching form: if constexpr. It performs the conditional check at compile time and discards the invalid branch outright.

#include <iostream>
#include <type_traits>

template<typename T>
void print_info(T value) 
{
    if constexpr (std::is_pointer_v<T>) 
    {
        // If T is not a pointer, this entire block, including the braces,
        // is discarded by the compiler and never enters the instantiation path.
        // Therefore, even if T is int, syntax such as *value will not cause compilation to fail.
        std::cout << "It's a pointer to: " << *value << std::endl;
    } 
    else 
    {
        // Conversely, if T is a pointer, this block is discarded.
        std::cout << "It's a value: " << value << std::endl;
    }
}

Taken together, this reflects C++’s broader tendency to move more semantics forward into compile time.

Of course, if one insists on building calculators out of notoriously unreadable template metaprogramming, or simply wants to test the edges of the language, constexpr still has a useful role to play: it can construct compile-time objects, offering a substantial improvement in both expressiveness and readability.

#include <iostream>
#include <iterator> // for std::size (C++17)

struct Point 
{
    double x, y;

    // Step 1: provide a constexpr constructor.
    // C++11 required the constructor body to be empty, allowing only an initializer list.
    // C++14 and later relaxed this constraint, allowing more complex logic in the body.
    constexpr Point(double x_val, double y_val) : x(x_val), y(y_val) {}

    // Step 2: member functions intended for compile-time calls must also be constexpr.
    constexpr double length_sq() const 
    {
        return x * x + y * y;
    }
    
    // Step 3 (implicit): its destructor must be trivial (a pre-C++20 requirement;
    // since C++20, a constexpr destructor is also allowed).
    // The default destructor is trivial, so no explicit definition is needed.
    // ~Point() = default;
};

int main() 
{
    // Create Point objects at compile time.
    constexpr Point p1(3.0, 4.0);
    constexpr Point p2(5.0, 12.0);

    // Call a member function at compile time to compute another constant.
    constexpr double len_sq = p1.length_sq(); // The compiler calculates this as 25.0

    // Use this compile-time result to define an array.
    int my_array[static_cast<int>(len_sq)]; // Creates an array of size 25
    std::cout << "Array size: " << std::size(my_array) << std::endl;

    // These objects can also participate in compile-time operations.
    constexpr Point p3(p1.x + p2.x, p1.y + p2.y); // p3 is (8.0, 16.0)
    
    std::cout << "p3.x = " << p3.x << std::endl; // After compilation, this line might become `std::cout << "p3.x = " << 8.0 << std::endl;`
}

In conclusion, the history of compiler optimization resembles a discipline of expanding what is available while reducing what is wasted: on one side, optimization algorithms improve; on the other, we provide the compiler with more information and thereby move the boundary of semantic constraint. From a modern C++ perspective, template metaprogramming should still be returned to the proper role of templates: abstracting families of processes in service of policy-driven design. Computing Fibonacci numbers with templates was mostly a pre-C++11 detour taken under constraint, and recursive template instantiation remains one of the wellsprings of the notorious opacity people call “template black magic.”