CH-FROMHOST(1) Charliecloud CH-FROMHOST(1)

ch-fromhost - Inject files from the host into an image directory, with various magic

$ ch-fromhost [OPTION ...] [FILE_OPTION ...] IMGDIR


NOTE:

This command is experimental. Features may be incomplete and/or buggy. Please report any issues you find, so we can fix them!


Inject files from the host into the Charliecloud image directory IMGDIR.

The purpose of this command is to inject files into a container image that are necessary to run the container on a specific host; e.g., GPU libraries that are tied to a specific kernel version. It is not a general copy-to-image tool; see further discussion on use cases below. It should be run after ch-convert and before ch-run. After invocation, the image is no longer portable to other hosts.
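For example, a typical sequence might look like the following (a sketch; the image name foo and unpacked directory /var/tmp/foo are illustrative):

$ ch-convert foo /var/tmp/foo
$ ch-fromhost --nvidia /var/tmp/foo
$ ch-run /var/tmp/foo -- nvidia-smi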

Injection is not atomic; if an error occurs partway through injection, the image is left in an undefined state. Injection is currently implemented using a simple file copy, but that may change in the future.

By default, file paths that contain the strings /bin or /sbin are assumed to be executables and placed in /usr/bin within the container. File paths that contain the strings /lib or .so are assumed to be shared libraries and are placed in the first-priority directory reported by ldconfig (see --lib-path below). Other files are placed in the directory specified by --dest.
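For instance, to see which library destination would be inferred for a particular image (a sketch, using the image directory /var/tmp/baz from the examples below; output varies by image):

$ ch-fromhost --lib-path /var/tmp/baz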

If any shared libraries are injected, run ldconfig inside the container (using ch-run -w) after injection.
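For example (a sketch, again using /var/tmp/baz; the path to ldconfig may differ in your image):

$ ch-run -w /var/tmp/baz -- /sbin/ldconfig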

To specify which files to inject:

--cmd CMD: Inject files listed in the standard output of command CMD.
--file FILE: Inject files listed in the file FILE.
--path PATH: Inject the file at PATH.
--cray-mpi: Cray-enable MPICH/OpenMPI installed inside the image. See important details below.
--nvidia: Use nvidia-container-cli list (from libnvidia-container) to find executables and libraries to inject.



These can be repeated, and at least one must be specified.

--dest DST: Place files specified later in directory IMGDIR/DST, overriding the inferred destination, if any. If a file’s destination cannot be inferred and --dest has not been specified, exit with an error. This can be repeated to place files in varying destinations.



Additional arguments:

--lib-path: Print the guest destination path for shared libraries inferred as described above.
--no-ldconfig: Don’t run ldconfig even if we appear to have injected shared libraries.
--help: Print help and exit.
--verbose: List the injected files.
--version: Print version and exit.



This command does a lot of heuristic magic; while it can copy arbitrary files into an image, this usage is discouraged and prone to error. Here are some use cases and the recommended approach:

1.
I have some files on my build host that I want to include in the image. Use the COPY instruction within your Dockerfile (a sketch follows this list). Note that it’s OK to build an image that meets your specific needs but isn’t generally portable, e.g., only runs on specific micro-architectures you’re using.
2.
I have an already built image and want to install a program I compiled separately into the image. Consider whether building a new derived image with a Dockerfile is appropriate. Another good option is to bind-mount the directory containing your program at run time (also sketched after this list). A less good option is to cp(1) the program into your image, because this permanently alters the image in a non-reproducible way.
3.
I have some shared libraries that I need in the image for functionality or performance, and they aren’t available in a place where I can use COPY. This is the intended use case of ch-fromhost. You can use --cmd, --file, and/or --path to put together a custom solution. But, please consider filing an issue so we can package your functionality with a tidy option like --cray-mpi or --nvidia.
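As a rough sketch of cases 1 and 2 above (the base image, the program name mytool, and the host directory /opt/tools are hypothetical): case 1 uses a Dockerfile COPY instruction, while case 2 bind-mounts a host directory at run time with ch-run:

$ cat Dockerfile
FROM almalinux:8
COPY mytool /usr/local/bin/mytool
$ ch-run --bind=/opt/tools:/mnt/0 /var/tmp/baz -- /mnt/0/mytool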

The implementation of --cray-mpi is messy, foul-smelling, and brittle. It replaces or overrides the MPICH or OpenMPI libraries installed in the container. Users should be aware of the following.

1.
Containers must have the following software installed:
The corresponding open source MPI implementation. (MPICH and OpenMPI.)
PatchELF with our patches. Use the shrink-soname branch. (MPICH only.)
libgfortran.so.3, because Cray’s libmpi.so.12 links to it. (MPICH only.)

2.
Applications must be dynamically linked to libmpi.so.12 (not, e.g., libmpich.so.12).
How to configure MPICH to accomplish this is not yet clear to us; test/Dockerfile.mpich does it, while the Debian packages do not. (MPICH only.) A quick way to check the linkage is sketched after this list.

3.
An ABI-compatible module for the given MPI implementation must be loaded when ch-fromhost is invoked.
Load the cray-mpich-abi module. (MPICH only.)
We recommend loading a module whose version is as close as possible to the version installed in the image. This OpenMPI install needs to be built such that libmpi contains all needed plugins (as opposed to them being standalone shared libraries). See OpenMPI’s documentation for how to do this. (OpenMPI only.)

4.
Tested only for C programs compiled with GCC, and it probably won’t work otherwise. If you’d like to use another compiler or another programming language, please get in touch so we can implement the necessary support.
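A rough sketch of checking items 2 and 3 above before invoking ch-fromhost (the application name ./my_app is hypothetical; available module names vary by site):

$ module load cray-mpich-abi
$ ldd ./my_app | grep libmpi.so.12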

Please file a bug if we missed anything above or if you know how to make the code better.

Symbolic links are dereferenced, i.e., the files pointed to are injected, not the links themselves.

As a corollary, do not include symlinks to shared libraries. These will be re-created by ldconfig.
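For example, if the host’s /usr/lib64/libfoo.so.1 is an ldconfig-managed symlink (hypothetical names), list the real file instead; ldconfig inside the image will re-create the symlink:

$ readlink -f /usr/lib64/libfoo.so.1
/usr/lib64/libfoo.so.1.2.3
$ ch-fromhost --path /usr/lib64/libfoo.so.1.2.3 /var/tmp/baz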

There are two alternate approaches for injecting nVidia GPU libraries, which we considered but did not pursue:

1.
Link libnvidia-container into ch-run and call the library functions directly. However, this would mean that Charliecloud would either (a) need to be compiled differently on machines with and without nVidia GPUs or (b) have libnvidia-container available even on machines without nVidia GPUs. Neither of these is consistent with Charliecloud’s philosophies of simplicity and minimal dependencies.
2.
Use nvidia-container-cli configure to do the injecting. This would require that containers have a half-started state, where the namespaces are active and everything is mounted but pivot_root(2) has not been performed. This is not feasible because Charliecloud has no notion of a half-started container.



Further, while these alternate approaches would simplify or eliminate this script for nVidia GPUs, they would not solve the problem for other situations.

File paths may not contain colons or newlines.

ldconfig tends to print stat errors; these are typically non-fatal and occur when trying to probe common library paths. See issue #732.

Place the shared library /usr/lib64/libfoo.so at path /usr/lib/libfoo.so (assuming /usr/lib is the first directory searched by the dynamic loader in the image) and the executable /bin/bar at path /usr/bin/bar, both within the image /var/tmp/baz. Then, create appropriate symlinks to libfoo and update the ld.so cache.

$ cat qux.txt
/bin/bar
/usr/lib64/libfoo.so
$ ch-fromhost --file qux.txt /var/tmp/baz


Same as above:

$ ch-fromhost --cmd 'cat qux.txt' /var/tmp/baz


Same as above:

$ ch-fromhost --path /bin/bar --path /usr/lib64/libfoo.so /var/tmp/baz


Same as above, but place the files into /corge instead (and the shared library will not be found by ldconfig):

$ ch-fromhost --dest /corge --file qux.txt /var/tmp/baz


Same as above, and also place file /etc/quux at /etc/quux within the container:

$ ch-fromhost --file qux.txt --dest /etc --path /etc/quux /var/tmp/baz


Inject the executables and libraries recommended by nVidia into the image, and then run ldconfig:

$ ch-fromhost --nvidia /var/tmp/baz
asking ldconfig for shared library destination
/sbin/ldconfig: Can't stat /libx32: No such file or directory
/sbin/ldconfig: Can't stat /usr/libx32: No such file or directory
shared library destination: /usr/lib64//bind9-export
injecting into image: /var/tmp/baz

/usr/bin/nvidia-smi -> /usr/bin (inferred)
/usr/bin/nvidia-debugdump -> /usr/bin (inferred)
/usr/bin/nvidia-persistenced -> /usr/bin (inferred)
/usr/bin/nvidia-cuda-mps-control -> /usr/bin (inferred)
/usr/bin/nvidia-cuda-mps-server -> /usr/bin (inferred)
/usr/lib64/libnvidia-ml.so.460.32.03 -> /usr/lib64//bind9-export (inferred)
/usr/lib64/libnvidia-cfg.so.460.32.03 -> /usr/lib64//bind9-export (inferred) [...]
/usr/lib64/libGLESv2_nvidia.so.460.32.03 -> /usr/lib64//bind9-export (inferred)
/usr/lib64/libGLESv1_CM_nvidia.so.460.32.03 -> /usr/lib64//bind9-export (inferred)
running ldconfig


Inject the Cray-enabled MPI libraries into the image, and then run ldconfig:

$ ch-fromhost --cray-mpi /var/tmp/baz
asking ldconfig for shared library destination
/sbin/ldconfig: Can't stat /libx32: No such file or directory
/sbin/ldconfig: Can't stat /usr/libx32: No such file or directory
shared library destination: /usr/lib64//bind9-export
found shared library: /usr/lib64/liblustreapi.so.1
found shared library: /opt/cray/xpmem/default/lib64/libxpmem.so.0
[...]
injecting into image: /var/tmp/baz

rm -f /var/tmp/openmpi/usr/lib64//bind9-export/libopen-rte.so.40
rm -f /var/tmp/openmpi/usr/lib64/bind9-export/libopen-rte.so.40 [...]
mkdir -p /var/tmp/openmpi/var/opt/cray/alps/spool
mkdir -p /var/tmp/openmpi/etc/opt/cray/wlm_detect [...]
/usr/lib64/liblustreapi.so.1 -> /usr/lib64//bind9-export (inferred)
/opt/cray/xpmem/default/lib64/libxpmem.so.0 -> /usr/lib64//bind9-export (inferred) [...]
/etc/opt/cray/wlm_detect/active_wlm -> /etc/opt/cray/wlm_detect
running ldconfig


This command was inspired by the similar Shifter feature that allows Shifter containers to use the Cray Aries network. We particularly appreciate the help provided by Shane Canon and Doug Jacobsen during our implementation of --cray-mpi.

We appreciate the advice of Ryan Olson at nVidia on implementing --nvidia.

If Charliecloud was obtained from your Linux distribution, use your distribution’s bug reporting procedures.

Otherwise, report bugs to: https://github.com/hpc/charliecloud/issues

charliecloud(7)

Full documentation at: <https://hpc.github.io/charliecloud>

2014–2022, Triad National Security, LLC and others

2023-01-29 12:36 UTC 0.31