Transparent performance

The practical performance of a program is not just a technical detail but a crucial component of its correctness. Programming languages that make reasoning about the asymptotic performance of a program too difficult, infeasible, or dependent on compiler specifics are therefore dangerous.

It is widely accepted today that good programming languages should have clean semantics. Semantics usually define the values programs compute, or fail to compute. However, this is insufficient: what good is a correct value if the program that computes it takes forever to terminate, or consumes too much memory?

A good programming language should also guarantee the complexity of its basic constructs. It should be possible for the programmer to usefully predict the performance of a program without depending on compiler internals. Compiler optimizations that may alter the asymptotic performance of programs should be documented as carefully as the value semantics of the language. For instance, most functional languages guarantee tail-call optimization, and this is good. Tail calls are easy for the programmer to spot, and it is easy to conceptualize what happens when a call is in tail position and when it is not (e.g., stack usage proportional to the number of nested calls).
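A small sketch of what this guarantee buys. Python is used here purely for illustration: it is a language that does *not* guarantee tail-call optimization, so even a perfectly tail-recursive function consumes stack proportional to the number of calls, while the loop it would have been compiled into under guaranteed TCO runs in constant stack space.

```python
def sum_rec(n, acc=0):
    # Tail-recursive: the recursive call is the last action performed.
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

# In a language that guarantees tail-call optimization (Scheme, OCaml, ...),
# sum_rec runs in constant stack space for any n. Python makes no such
# guarantee, so the call chain grows with n and overflows the stack:
try:
    sum_rec(1_000_000)
except RecursionError:
    print("stack overflow: no guaranteed TCO")

# The loop below is what TCO would effectively compile the function into:
def sum_loop(n, acc=0):
    while n != 0:
        acc += n
        n -= 1
    return acc

print(sum_loop(1_000_000))  # 500000500000, in constant stack space
```

The point is not that loops are better than recursion, but that a language-level guarantee lets the programmer know which form is safe without reading the compiler.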

I'm tempted to consider compilers incorrect when they perform optimizations that may change the asymptotic performance, or substantially change the actual performance, of programs, and those optimizations are not mandated by the language definition. Why? Because programmers will rely on them. However, compiler optimizations are tricky beasts that do not always apply or work the same way. A new way of compiling may yield much better performance than the old one, save in a few cases. If "better" here means "asymptotically better", and if the performance of your program happened to depend on those corner cases (how would you know, without delving into the internals of complex optimizations?), your program may go from O(n) to O(n^2) and fail.
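A real-world instance of this hazard, offered as an illustration rather than as part of the original argument: by Python's semantics, strings are immutable, so building a string with repeated `+=` copies the accumulator on each step and is O(n^2) overall. CPython happens to optimize the common case in place, making `+=` often look linear, but that optimization is implementation-specific and can silently stop applying (on another interpreter, or when an extra reference to the string exists), exactly the O(n)-to-O(n^2) cliff described above.

```python
# O(n^2) by the language semantics; often O(n) in practice thanks to a
# CPython-specific in-place optimization that the language does not mandate.
def build_concat(n):
    s = ""
    for _ in range(n):
        s += "x"
    return s

# "".join is documented to be linear in the total length, so its
# performance survives a change of implementation.
def build_join(n):
    return "".join("x" for _ in range(n))

# Same value either way; only the performance guarantee differs.
assert build_concat(10_000) == build_join(10_000)
```

This is why the Python documentation itself steers programmers toward `str.join` for building strings: it is the form whose complexity the language actually promises.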

This will then create compiler-specific subdialects of the language. The maintainers of the compiler will have to ensure that these optimizations keep applying in all future versions of the compiler. This is no better than backwards bug-compatibility or browser-specific hacks: they won't be able to change their optimization strategy unless it is formally a superset of the previous one, even if the new strategy performs better on almost all known programs.

You will then end up with programs P1 and P2 written in the same language, but such that P1 has usable performance only when compiled with flags F1, under which P2 does not. Since you usually can't mix compilation flags, you won't be able to compose P1 and P2 into a bigger program with acceptable performance.

Whole-program compilers that optimize the hell out of your programs are a good thing, but I'd avoid programming languages that require such complex, non-universal optimizations to have usable performance in practice. Programs written in such languages may well fail to perform the same under a different, or future, compiler.

In short: