Needing a tailored cross compiler is a problem I have encountered a couple of times in the past. Of course, there are solutions to that problem, like the great crosstool-ng or the more complex buildroot. In most cases crosstool-ng (ct-ng) can solve it. But whatever tool we use, it always has its own drawbacks. For ct-ng these are the small number of supported versions of toolchain components and a heavy dependence on the environment in which it is started. The latter is even more problematic because of the way resuming an interrupted build works in ct-ng. Obviously, if you want to build, for example, one compiler for ARM and one for MIPS, both consisting of the latest tools, then it is not a problem.
But I have another use case for compiling toolchains. I do some reverse engineering from time to time. Nowadays many products have Linux under the hood and often there is no chance to get any SDK for them. Being able to build something for the device can help a lot, either to run it there, or to link it with the tools found there and run it in an emulator. I can also imagine that outside the reverse engineering field there might be a need for a toolchain in an exact configuration that is sadly not available via ct-ng or buildroot. Anyway, in any case where ct-ng or buildroot are not applicable, there is a third way – Docker. And this is the way I chose. This is how CC Factory appeared. It is a Docker container that builds a GCC cross compiler on the first start and lands you in a container with a working compiler for the platform of your choice. And it does not require much effort to port it to the next architecture or a different tool version, unless the changes between versions were really significant.
Usage
Enough of this talking. Let’s see how to use the thing. The first step is obviously cloning the cc-factory repo with the usual:
git clone https://github.com/v3l0c1r4pt0r/cc-factory.git
Now, if you look into the repo contents, there is only a readme file. That is because the repository is organized into branches and there is no real master branch, as there usually is in a code repository. Instead, each branch represents a different set of tools and the architecture for which the compiler will be prepared. For now, there are two such branches (see below for how to list them):
mipsel-gcc4.6-linux3.4.113-uclibc1.0.26, for little endian MIPS architecture with GCC 4.6.4, uClibc 1.0.26 as the standard library and a set of headers from Linux 3.4.113
mips-gcc4.6-linux4.1.38-uclibc1.0.12, for big endian MIPS, with GCC 4.6.4, uClibc 1.0.12 and headers from Linux 4.1.38
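To check which branches are currently published, the usual git command is enough:
git branch -r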
These are the two toolchains I needed recently, but as the factory proves its usefulness to me, I will surely prepare and publish more compilers. Of course, any contributions from third parties are more than welcome. More on that a bit later.
Now, having chosen one of the branches – let it be the second one – we can check it out with:
git checkout mips-gcc4.6-linux4.1.38-uclibc1.0.12
After that the real contents of the repository appear and the readme disappears. As can be seen, the tool is mainly a Dockerfile. For ease of use, I also prepared a Makefile providing targets for each useful docker command, so one does not have to remember them.
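For reference, under the hood these targets boil down to ordinary docker invocations, more or less like the ones below. The image name, container name and mount point here are only my guesses, so check the Makefile itself for the exact commands:
docker build -t cc-factory .                                              # make build
docker run -dit --name cc-factory -v "$(pwd)/outdir:/outdir" cc-factory   # make run
docker exec -it cc-factory bash                                           # make shell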
So now, building the image is as easy as typing:
make build
Provided you have the docker service running; otherwise it will complain about that. Keep in mind that you need access to the docker service, which may mean you must be root, unless you have configured docker for a different user. Then you have to wait a few minutes, and after seeing the message that the image was built successfully, you have a working GCC for the MIPS platform. Easy, isn’t it? At this point there are two methods for using the toolchain. One option is to use it inside the container, the second is to build an SDK package and transfer it outside of the container.
Cross-GCC in docker
In that case, you are ready to go. The compiler is ready to use in every built container. All you have to do is run:
make run
make shell
And you will be dropped into the container’s bash shell. Here you can do anything you want. By default, there is one directory shared between the container and the host system – outdir in the repo root directory. Whatever you put there is visible in the container shell and vice versa.
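For a quick smoke test you can, for example, drop a hello world into the shared directory and build it from the container shell. I am assuming here that the cross compiler is already on PATH inside the container and that the share is mounted at /outdir, so adjust the paths to whatever you actually see in your container:
cd /outdir
cat > hello.c << 'EOF'
#include <stdio.h>

int main(void)
{
	printf("Hello from MIPS!\n");
	return 0;
}
EOF
mips-linux-uclibc-gcc -static -o hello hello.c
The resulting binary lands in outdir on the host, where file hello should report a MIPS executable.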
In my opinion, however, this method is only good for small experiments and for polishing the toolchain itself to fit a particular system. For anything more serious, I recommend the second way.
Generating SDK package for host system
In this method it is also required to run the container, but instead of getting into the container shell, we will run a script inside the container that provides a ready-to-go SDK for any system we like. Run:
make sdk
And inside the outdir mentioned above, you should see a tarball with the SDK. It has to be installed into the root directory of your system; otherwise it won’t work! Run:
sudo tar -xvf mips-linux-uclibc.tar.gz -C/
This will by default install the SDK into /opt/mips-linux-uclibc. More about changing this in a moment.
Now, to run the compiler, you have to point your dynamic linker at the SDK’s lib directory. So to run gcc:
LD_LIBRARY_PATH=/opt/mips-linux-uclibc/lib /opt/mips-linux-uclibc/bin/mips-linux-uclibc-gcc
And that’s it. It works!
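Putting it all together, building the same hello world from the host with the installed SDK and sanity-checking the result can look more or less like this. qemu-mips is only needed if you want to actually execute the binary on your PC, and linking with -static saves you from copying uClibc into the emulated environment:
LD_LIBRARY_PATH=/opt/mips-linux-uclibc/lib \
    /opt/mips-linux-uclibc/bin/mips-linux-uclibc-gcc -static -o hello hello.c
file hello        # should say: ELF 32-bit MSB executable, MIPS
qemu-mips ./hello # runs it in user-mode emulation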
Now a bit about the configuration options. The defaults should work out of the box; if they don’t – please report an issue.
Configuration
There are two important options here. The first is the JOBS variable, available in the Makefile. If you would like to modify the number of parallel jobs, you have to set it during the build step, like:
make build JOBS=4
Unsetting it will cause the default value to be picked! If you would like to have only one job, do:
make build JOBS=1
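A convenient value is simply the number of CPUs of the build machine:
make build JOBS=$(nproc)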
The second important thing is the path of the SDK that will be prepared. Inside the container it is not very important, but I can imagine that someone might not like the /opt directory for whatever reason. In that case, one has to edit the Dockerfile, changing:
ENV SDK_ROOT /opt/${TARGET}
to anything else they like. Docker does not care, unless you try to override system files, so better don’t try /usr 🙂
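For example, to keep the toolchains under your home directory instead (the path itself is of course arbitrary):
ENV SDK_ROOT /home/user/toolchains/${TARGET}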
Porting and tailoring to own needs
Finally, the last important thing to mention is making more serious modifications to the Dockerfile for the purpose of changing versions, enabling or disabling features, or preparing the toolchain for another architecture. In theory, all of that should be as easy as pie. But the reality is that most likely only changing the architecture will succeed at the first attempt. Any other modification will generate a random number of errors. And I say that because I have tried many times with other tools, and recently I failed to prepare GCC 3.3.2 with CC Factory. Don’t ask me why I needed that, BTW 😀
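As a starting point, the things to hunt for are simply the version numbers encoded in the branch name – for this branch GCC 4.6.4, uClibc 1.0.12 and Linux 4.1.38 – wherever they appear in the Dockerfile. Expect them in ENV lines similar to the SDK_ROOT one shown above; the names below are only an illustration, not necessarily what the file really uses:
ENV GCC_VERSION 4.6.4
ENV UCLIBC_VERSION 1.0.12
ENV LINUX_VERSION 4.1.38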
Now, in case of failure, the general approach to solving problems with toolchains applies, and in addition the general approach to solving problems in docker. Therefore, I am not going to describe that here, because this is why sites like stackoverflow.com exist.
Contributing
If you succeed with your own toolchain, one not selected from the list of available ones, I am more than interested in getting a pull request with it. There are a lot of combinations and most likely whoever looks for such a thing will need one that is not there, but maybe you will save someone a day of fixing compilation errors.
I have only one restriction. Please do not push changes that turn some random feature of e.g. uClibc on or off, unless it is turning on something really important/useful. There really are a lot of combinations and probably nobody else needs exactly what you need. So let’s not make anyone’s life harder. Thank you.
Conclusion
I used ct-ng for years to prepare my cross compilers and probably will still use it, because it is a great tool. However, Docker has its power in reproducibility, which not many other tools have. This was my first experience with it and it looks really useful and promising. So I hope it will become even more popular for the purpose of building cross compilers. All my past attempts were like torture, and ct-ng made it only a bit better, still requiring a few hours of fixing errors and retrying for a single compiler. With Docker, we can get an SDK at the first attempt. Of course, it has its own downsides. The one that affected me was the amount of space required – it is impossible for me to find enough space in /var for its images.
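For the record, if anyone hits the same problem: docker can be told to keep its images outside /var by setting data-root in /etc/docker/daemon.json to a different location and restarting the daemon, for example:
{
    "data-root": "/home/user/docker"
}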