Notes about specific Features¶
This section describes details about specific features. For a full list of features, please refer to the website.
Ctypes Dependencies¶
ctypes is a foreign function library for Python that allows calling functions present in shared libraries. Those libraries are not imported as Python packages, because they are not picked up via Python imports: their path is passed to ctypes instead, which deals with the shared library directly. This caused the import detection machinery of PyInstaller versions before 1.4 to miss those libraries, defeating the goal of building self-contained executables:
from ctypes import *
# This will pass undetected by PyInstaller's import detection machinery,
# because it is not a direct import.
handle = CDLL("/usr/lib/library.so")
handle.function_call()
Solution in PyInstaller¶
PyInstaller contains a pragmatic implementation of ctypes dependency detection: it searches for simple, standard usages of ctypes and automatically tracks and bundles the referenced libraries. The following usages are correctly detected:
CDLL("library.so")
WinDLL("library.so")
ctypes.DLL("library.so")
cdll.library # Only valid under Windows - a limitation of ctypes, not PyInstaller's
windll.library # Only valid under Windows - a limitation of ctypes, not PyInstaller's
cdll.LoadLibrary("library.so")
windll.LoadLibrary("library.so")
In more detail, the following restrictions apply:
only libraries referenced by bare filenames (i.e. no leading paths) will be handled; handling absolute paths would be impossible without modifying the bytecode as well (remember that while running frozen, ctypes would keep looking for the library at that exact absolute location, which nobody can guarantee will exist on the host system), and handling relative paths would require recreating in the frozen executable the same hierarchy of directories leading to the library, in addition to keeping track of the current working directory;
only library paths represented by a literal string will be detected and included in the final executable: PyInstaller import detection works by inspecting raw Python bytecode, and since you can pass the library path to ctypes using a string (that can be represented by a literal in the code, but also by a variable, by the return value of an arbitrarily complex function, etc…), it’s not reasonably possible to detect all ctypes dependencies;
only libraries referenced in the same context as the ctypes invocation will be handled.
We feel that this should be enough to cover most ctypes usages, with little or no modification required in your code; the sketch below illustrates the literal-string restriction.
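As a hedged illustration (the library name is hypothetical), the first call below can be detected from the bytecode, while the second cannot:
import ctypes

# Detected: a bare filename passed as a string literal can be found in the
# bytecode, so the referenced library is bundled automatically.
lib = ctypes.CDLL("libfoo.so")

# Not detected: the name is only known at run time, so it is invisible to
# the bytecode inspection; such a library must be added manually (see the
# --add-binary / .spec-file note below).
libname = "lib" + "foo.so"
lib = ctypes.CDLL(libname)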
If PyInstaller does not detect a library, you can add it to your bundle by passing the respective information to the --add-binary option or by listing it in the .spec file. Whether your frozen application will be able to pick up the library at run-time cannot be guaranteed, as it depends on the details of the implementation.
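For instance, a minimal sketch of the .spec-file variant could look like this (the library path, destination directory and script name are placeholders):
# Excerpt from a .spec file: bundle a library that PyInstaller did not
# detect on its own. Each entry is (source path, destination directory
# inside the bundle); '.' places the file next to the executable.
a = Analysis(
    ['myapp.py'],
    binaries=[('/usr/lib/library.so', '.')],
)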
Gotchas¶
The ctypes detection system at Analysis time is based on ctypes.util.find_library(). This means that you have to make sure that, while performing Analysis and while running frozen, all the environment variables find_library() uses to search for libraries are aligned to those used when running un-frozen. Examples include using LD_LIBRARY_PATH or DYLD_LIBRARY_PATH to widen the find_library() scope.
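As a quick, hedged sanity check (the library name is hypothetical), you can verify that find_library() resolves the library in the environment where you run PyInstaller; if it returns None there, Analysis will not be able to bundle it either:
import ctypes.util

# Prints the resolved library name/path, or None if find_library() cannot
# locate it with the current environment (LD_LIBRARY_PATH, etc.).
print(ctypes.util.find_library("foo"))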
SWIG support¶
PyInstaller tries to detect binary modules created by SWIG. This detection requires:
The Python wrapper module must be imported somewhere in your application (or by any of the modules it uses).
The wrapper module must be available as source code, and its first line must contain the text "automatically generated by SWIG".
The C-module must have the same name as the wrapper module, prefixed with an underscore (_). (This is a SWIG restriction already.)
The C-module must sit just beside the wrapper module (thus a relative import would work).
Also, some restrictions apply due to the way the SWIG wrapper is implemented:
The C-module will become a global module. As a consequence, you can not use two SWIG modules with the same basename (e.g. pkg1._cmod and pkg2._cmod), as one would overwrite the other.
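To make these requirements concrete, a hypothetical layout that PyInstaller's SWIG detection can handle might look like this (all names are placeholders):
# mypkg/cmod.py   <- SWIG wrapper module; its first line contains the text
#                    "automatically generated by SWIG"
# mypkg/_cmod.so  <- compiled C-module: same basename, prefixed with "_",
#                    sitting just beside the wrapper module
#
# Importing the wrapper somewhere in the application is what triggers the
# detection and bundling of both files:
from mypkg import cmod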
Cython support¶
PyInstaller can follow import statements that refer to Cython C object modules and bundle them, as for any other module implemented in C. But, again as for any other module implemented in C, PyInstaller cannot determine whether the Cython C object module itself imports some Python module. Such missing imports will typically show up in a traceback like this (mind the .pyx extension):
Traceback (most recent call last):
[…]
File "myapp\cython_module.pyx", line 3, in init myapp.cython_module
ModuleNotFoundError: No module named 'csv'
So if you are using a Cython C object module which imports Python modules, you will have to list these as hidden imports, via the --hidden-import option (or the hiddenimports= argument in the .spec file).
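A minimal sketch of the .spec-file equivalent, using the csv module from the traceback above ('myapp.py' is a placeholder):
# Excerpt from a .spec file: list modules that are imported only from the
# Cython C object module, so that PyInstaller bundles them anyway.
a = Analysis(
    ['myapp.py'],
    hiddenimports=['csv'],
)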
macOS multi-arch support¶
With the introduction of Apple Silicon M1, there are now several architecture options available for python:
single-arch x86_64 with thin binaries: older python.org builds, Homebrew python running natively on Intel Macs or under rosetta2 on M1 Macs;
single-arch arm64 with thin binaries: Homebrew python running natively on M1 Macs;
multi-arch universal2 with fat binaries (i.e., containing both x86_64 and arm64 slices): recent universal2 python.org builds.
PyInstaller aims to support all possible combinations stemming from the above options:
single-arch application created using the corresponding single-arch python;
universal2 application created using universal2 python;
single-arch application created using universal2 python (i.e., reducing universal2 fat binaries into either x86_64 or arm64 thin binaries).
By default, PyInstaller targets the current running architecture and produces a single-arch binary (x86_64 when running on an Intel Mac or under rosetta2 on an M1 Mac, or arm64 when running natively on an M1 Mac). The reason is that even with a universal2 python environment, some packages may end up providing only single-arch binaries, making it impossible to create a functional universal2 frozen application.
The alternative options, such as creating a universal2 version of the frozen application, or creating a non-native single-arch version using a universal2 environment, must therefore be explicitly enabled. This can be done either by specifying the target architecture in the .spec file via the target_arch= argument to EXE(), or on the command line via the --target-arch switch. Valid values are x86_64, arm64, and universal2.
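As a minimal .spec-file sketch (the script and executable names are placeholders, and the exact set of EXE() arguments produced by pyi-makespec may differ between PyInstaller versions):
# Excerpt from a .spec file: build a universal2 executable.
a = Analysis(['myapp.py'])
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    name='myapp',
    target_arch='universal2',  # or 'x86_64' / 'arm64' for a thin build
)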
Architecture validation during binary collection¶
To prevent run-time issues caused by missing or mismatched architecture slices in binaries, the binary collection process performs strict architecture validation. It checks whether collected binary files contain required arch slice(s), and if not, the build process is aborted with an error message about the problematic binary.
In such cases, creating a frozen application for the selected target architecture will not be possible unless the problem of missing arch slices is manually addressed (for example, by downloading the wheel corresponding to the missing architecture and stitching the offending binary files together using the lipo utility).
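As a rough sketch of that stitching step (file and directory names are hypothetical; this merely drives the lipo command-line utility from Python):
import subprocess

# Merge the x86_64 and arm64 copies of the same extension module, taken
# from the two architecture-specific wheels, into a single fat binary.
subprocess.run(
    [
        "lipo", "-create",
        "-output", "mymodule.cpython-311-darwin.so",   # fat output
        "x86_64/mymodule.cpython-311-darwin.so",       # thin x86_64 input
        "arm64/mymodule.cpython-311-darwin.so",        # thin arm64 input
    ],
    check=True,
)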
Changed in version 4.10: In earlier PyInstaller versions, the architecture validation was performed on all collected binaries, such as python extension modules and the shared libraries referenced by those extensions. As of PyInstaller 4.10, the architecture validation is limited to only python extension modules.
The individual architecture slices in a multi-arch universal2 extension may be linked against (slices in) universal2 shared libraries, or against distinct single-arch thin shared libraries. This latter case makes it impossible to reliably validate the architecture of the collected shared libraries w.r.t. the target application architecture.
However, the extension modules do need to be fully compatible with the target application architecture. Therefore, their continued validation should hopefully suffice to detect attempts at using incompatible single-arch python packages [*].
[*] Although nothing really prevents a package from having distinct, architecture-specific extension modules…
Trimming fat binaries for single-arch targets¶
When targeting a single architecture, the build process extracts the corresponding arch slice from any collected fat binaries, including the bootloader. This results in a completely thin build even when building in a universal2 python environment.
macOS binary code signing¶
With the Apple Silicon M1 architecture, macOS introduced mandatory code signing, even if ad-hoc (i.e., without an actual code-signing identity). This means that arm64 arch slices (but possibly also x86_64 ones, especially in universal2 binaries) in collected binaries always come with a signature.
The processing of binaries done by PyInstaller (e.g., library path rewriting in binaries’ headers) invalidates their signatures. Therefore, the signatures need to be re-generated, otherwise the OS refuses to load a binary.
By default, PyInstaller ad-hoc (re)signs all collected binaries and the generated executable itself. Instead of ad-hoc signing, it is also possible to use a real code-signing identity. To do so, either specify your identity in the .spec file via the codesign_identity= argument to EXE(), or on the command line via the --codesign-identity switch.
Being able to provide a codesign identity allows the user to ensure that all collected binaries in either a onefile or onedir build are signed with their identity. This is useful because for onefile builds, signing of the embedded binaries cannot be performed in a post-processing step.
Note
When a codesign identity is specified, PyInstaller also turns on the hardened runtime by passing --options=runtime to the codesign command. This requires the codesign identity to be a valid Apple-issued code signing certificate, and will not work with self-signed certificates.
Trying to use a self-signed certificate as a codesign identity will result in shared libraries failing to load, with the following reason reported:
[libname]: code signature in ([libname]) not valid for use in process using Library Validation: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?)
Furthermore, it is possible to specify an entitlements file to be used when signing the collected binaries and the executable. This can be done in the .spec file via the entitlements_file= argument to EXE(), or on the command line via the --osx-entitlements-file switch.
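A minimal .spec-file sketch combining both settings (the identity string, entitlements path and executable name are placeholders; a generated spec file will contain more EXE() arguments):
# Excerpt from a .spec file: sign the collected binaries and the executable
# with a real code-signing identity and a custom entitlements file.
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    name='myapp',
    codesign_identity='Developer ID Application: Example Corp (TEAMID1234)',
    entitlements_file='entitlements.plist',
)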
App bundles¶
PyInstaller also automatically attempts to sign .app bundles, either using the ad-hoc identity or an actual signing identity, if the latter is provided via the --codesign-identity switch. In addition to passing the same options as when signing collected binaries (identity, hardened runtime, entitlements), deep signing is also enabled by passing the --deep option to the codesign utility.
Should the signing of the bundle fail for whatever reason, the error
message from the codesign
utility will be printed to the console,
along with a warning that manual intervention and manual signing of the
bundle are required.