It ensures constant time because exp and ln each take a constant amount of time to execute (x^n = exp(n * ln(x))). Think about the naive implementation in the general case:
x^6 = x * x * x * x * x * x // linear in n
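As a runnable sketch, the naive version is just a loop (the name pow_naive is mine):

```python
# Naive power: multiply by x a total of n times -- O(n) multiplications.
def pow_naive(x, n):
    result = 1
    for _ in range(n):
        result *= x
    return result

print(pow_naive(2, 6))  # 64
```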
Here's a slightly (probably slightly wrong) optimized version of that which is O(log(n)):
fn pow(x, n):
    if n == 0: return 1
    elif n == 1: return x
    elif n is even:
        return pow(x * x, n / 2)
    else: // n is odd
        return x * pow(x * x, (n - 1) / 2)
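In Python the same recursion might look like this (a sketch; the name pow_fast is mine):

```python
# Exponentiation by squaring: O(log n) multiplications.
# Squaring the base halves the exponent each call.
def pow_fast(x, n):
    if n == 0:
        return 1
    if n == 1:
        return x
    if n % 2 == 0:
        return pow_fast(x * x, n // 2)
    return x * pow_fast(x * x, (n - 1) // 2)

print(pow_fast(2, 10))  # 1024
```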
The nice thing about the constant-time version is that it's relatively speedy (< 1ms) and is guaranteed to handle even large inputs gracefully[1]. Some implementations might add an additional optimization like if n == 2: x * x or n == 3: x * x * x since those are super common, but if you're really crunched, you've added if statements or function calls when you could have just inlined it.
[1] Though you'll start running into quantization error at some point from ln.
A compiler won't generally turn x * x * x * x into (x * x) * (x * x) without being asked to (e.g. -ffast-math), as floating-point math is not associative.
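A quick sketch of why the compiler can't reassociate freely; the overflow case is the starkest demonstration that grouping changes the answer:

```python
import math

a, b, c = 1e308, 1e308, 1e-308

left = (a * b) * c   # a * b overflows to inf, and inf * c stays inf
right = a * (b * c)  # b * c is roughly 1.0, so the product stays finite

print(left, right)
```

Rounding differences cause the same effect in less dramatic cases: each grouping rounds at different intermediate points, so the last bits of the result can differ.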
Right, it only works on positive integers, but my original example using exp and ln works in nearly all cases (though you probably want to round in the case of an integer input). That's actually another point in its favor: it's constant time and handles the odd cases.
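The exp/ln route, sketched in Python (only valid for x > 0; the rounding step for integer inputs is the hedge mentioned above, and the name pow_exp_ln is mine):

```python
import math

# Constant-time power via the identity x^n = exp(n * ln(x)), valid for x > 0.
def pow_exp_ln(x, n):
    return math.exp(n * math.log(x))

# For integer inputs, round to recover the exact integer result.
print(round(pow_exp_ln(2, 10)))  # 1024, modulo floating-point error
```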
* Why is constant time desirable here, if the operation can be quicker for some inputs?