A sufficiently smart compiler probably should be able to inline constant expressions, yes. That's a transform we know is safe, and I would want my compiler to have quite a lot of assurance that its constant inlining works correctly (probably strong typing and high test coverage). I think it's worth imposing a bit of structure on any code that runs in the compiler: compiler plugins at least have a test suite and a release process, and even then I wouldn't use a plugin (other than maybe a read-only analyzer) on production code. And of course I'd like my compiler code to be structured well, such that it could be used as a library - or indeed, such that it is a library, from the point of view of those compiler plugins. But having a good library for transforming source code doesn't mean I want to allow random one-liner snippets to transform my source code.
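To make concrete what I mean by that transform, here is a minimal sketch of constant folding over a toy expression type (the Expr type and constFold function are illustrative names of mine, not from any real compiler):

    -- A toy expression type; Expr and constFold are illustrative names,
    -- not from any particular compiler.
    data Expr
      = Lit Int
      | Add Expr Expr
      | Mul Expr Expr
      deriving Show

    -- Replace any subexpression whose operands reduce to literals with
    -- the computed literal; leave everything else untouched.
    constFold :: Expr -> Expr
    constFold (Add a b) = case (constFold a, constFold b) of
      (Lit x, Lit y) -> Lit (x + y)
      (a', b')       -> Add a' b'
    constFold (Mul a b) = case (constFold a, constFold b) of
      (Lit x, Lit y) -> Lit (x * y)
      (a', b')       -> Mul a' b'
    constFold e = e

    -- constFold (Add (Lit 2) (Mul (Lit 3) (Lit 4)))  ==  Lit 14

It's exactly the kind of small, self-contained pass that's easy to type-check and property-test, which is why I'm comfortable trusting it inside a compiler.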
The regex example is more about partial evaluation than constant inlining. So instead of commenting on "random one-liner snippets" and "structure", which I find boring, I'd prefer to share an example of partial evaluation techniques applied to Haskell:
The whole essay is interesting. Then, in section 4, we discover that the author relies on Template Haskell:
Using the Template Haskell infrastructure confers a number of advantages. There are the straight-forward obvious advantages that TH provides an abstract syntax tree, a parser, pretty printer and a monad providing unique name supply and error reporting. Because it is part of a compiler, the compiler also does the other ordinary semantic checks including typechecking. This is obviously useful since our partial evaluator can only be expected to work for well formed and typed Haskell programs and so we are spared from performing these checks. A less obvious advantage is that we may be able to do binding time checking in terms of a slightly modified Haskell type system and implement it using the existing compiler’s type checker rather than having to write one from scratch.
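To give a flavour of what that kind of staging looks like, here is the classic partially-evaluated power function written with Template Haskell quotes and splices. It is not taken from the paper, just a minimal sketch of specialising a function with respect to a statically known argument (the power and fifth names are mine):

    {-# LANGUAGE TemplateHaskell #-}
    module Power where

    import Language.Haskell.TH

    -- Build, at compile time, the unrolled expression
    --   \x -> x * x * ... * x
    -- for an exponent that is known statically.
    power :: Int -> Q Exp
    power 0 = [| \_ -> 1 |]
    power n = [| \x -> x * $(power (n - 1)) x |]

Because of Template Haskell's stage restriction, the splice has to live in a different module from power:

    {-# LANGUAGE TemplateHaskell #-}
    module Main where

    import Power (power)

    -- The splice runs at compile time; by the time the program runs,
    -- fifth is an ordinary, fully unrolled Int -> Int function.
    fifth :: Int -> Int
    fifth = $(power 5)

    main :: IO ()
    main = print (fifth 2)  -- 32

The same idea scales up to things like the regex example: the "static" part of the input is consumed at compile time, and what remains is a residual program specialised to it.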