I experimented with that and obtained performance improvements of up to 100% in certain cases (and the executable size is greatly reduced).
Basically, instead of being linked from multiple object files, each binary is created from a single object file:
All.cpp
Code: Select all
#include "file1.cpp"
#include "file2.cpp"
etc.
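For reference, a possible way to build it (just a sketch assuming g++; the exact compiler and flags don't matter, the point is that only one object file is produced and linked per binary):
Code: Select all
# hypothetical build commands, assuming g++
g++ -O2 -c All.cpp -o All.o    # the whole program becomes one object file
g++ All.o -o myprogram         # linking is trivial: a single object file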
The nice thing is that, to optimize further, the only thing required is to include the files in the correct order in "All.cpp".
This kind of optimization works because of how the compiler is structured: having all the information at once gives the compiler more opportunities to spot optimizations (better register usage, better inlining, less symbol overhead, better unrolling, reduced code size).
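To illustrate why (hypothetical file contents, just a sketch): when both files end up in the same translation unit, the compiler sees the body of a function defined in one file while compiling its caller in the other, so it can inline it even without link-time optimization:
Code: Select all
// file1.cpp (hypothetical): with separate compilation this body is hidden
// inside its own object file
int clamp01(int v)
{
    return v < 0 ? 0 : (v > 1 ? 1 : v);
}

// file2.cpp (hypothetical): normally the compiler would only see a declaration
// of clamp01() here; in a single unit it sees the body above and can inline it
int normalize(int v)
{
    return clamp01(v) * 255;
}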
One downside is that compiling will require MUCH MORE RAM; as long as the RAM used stays below the available RAM, compilation will actually be faster than normal (less time to build all the binaries, and faster binaries).
Another downside: the library must be designed to support this.
Example:
All headers must have include guards (most C libraries will not compile; usually only C++ code can be compiled as a single unit: certain C libraries already provide an "all.c", but those are exceptions).
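For example, a header only needs the usual guard to be safe to include this way (MyLib.h is a made-up name):
Code: Select all
// MyLib.h (hypothetical) -- the guard makes repeated inclusion harmless
#ifndef MYLIB_H
#define MYLIB_H

void mylib_do_something();

#endif // MYLIB_H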
Of course, there always exists a "better order", but finding it is practically impossible.
If you have 10 source files, there are 10*9*8*7*6*5*4*3*2*1 = 3,628,800 possible orderings, so trying them all is not feasible (and usually there are many more than 10 files). But usually even the worst ordering will still be faster than linking multiple object files.