I would like to modify a setup.py file such that the command "python setup.py build" compiles a C-based extension module that is statically (rather than dynamically) linked to a library.
The extension is currently dynamically linked to a number of libraries. I would like to leave everything unchanged except for statically linking just one of them. I have successfully done this by manually modifying the gcc call that distutils runs, although it required me to explicitly list the dependent libraries.
Perhaps this is too much information, but for clarity this is the final linking command that was executed during the "python setup.py build" script:
gcc -pthread -shared -L/system/lib64 -L/system/lib/ -I/system/include build/temp.linux-x86_64-2.7/src/*.o -L/system/lib -L/usr/local/lib -L/usr/lib -ligraph -o build/lib.linux-x86_64-2.7/igraph/core.so
And this is my manual modification:
gcc -pthread -shared -L/system/lib64 -L/system/lib/ -I/system/include build/temp.linux-x86_64-2.7/src/*.o -L/system/lib -L/usr/local/lib -L/usr/lib /system/lib/libigraph.a -lxml2 -lz -lgmp -lstdc++ -lm -ldl -o build/lib.linux-x86_64-2.7/igraph/core.so
Section 2.3.4 of "Distributing Python Modules" discusses specifying libraries, but only library_dirs seems appropriate for my case, and libraries specified that way are linked dynamically.
I'm using a Linux environment for development but the package will also be compiled and installed on Windows, so a portable solution is what I'm after.
Can someone tell me where to look for instructions, or how to modify the setup.py script? (Thanks in advance!)
I'm new to StackOverflow, so my apologies if I haven't correctly tagged this question, or if I have made some other error in this posting.
Six to seven years later, static linking with Python extensions is still poorly documented. As the OP points out in a comment, the usage is OS dependent.
On Linux / Unix
Static libraries are linked just like object files: pass them, with their full path and file extension, in extra_objects.
On Windows
The compiler figures out whether the linked library is static or dynamic, so the static library name simply goes into the libraries list and its directory into library_dirs.
Solution for both platforms
For the example below, I will use the same library scenario as the OP: linking igraph statically and z, xml2, and gmp dynamically. This solution is a bit hackish, but at least it does the right thing on each platform.
import sys

from setuptools import setup, Extension

static_libraries = ['igraph']
static_lib_dir = '/system/lib'
libraries = ['z', 'xml2', 'gmp']
library_dirs = ['/system/lib', '/system/lib64']

if sys.platform == 'win32':
    libraries.extend(static_libraries)
    library_dirs.append(static_lib_dir)
    extra_objects = []
else:  # POSIX
    extra_objects = ['{}/lib{}.a'.format(static_lib_dir, l)
                     for l in static_libraries]

ext = Extension('igraph.core',
                sources=source_file_list,
                libraries=libraries,
                library_dirs=library_dirs,
                include_dirs=include_dirs,
                extra_objects=extra_objects)
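For completeness, the ext object above would then be passed to setup() in the usual way. A minimal sketch, assuming source_file_list and include_dirs are defined elsewhere in the script; the package name and version are placeholders:

setup(name='igraph',
      version='0.1',
      ext_modules=[ext])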
On MacOS
I guess this also works for MacOS (via the else path), but I have not tested it.
If all else fails, there's always the little-documented extra_compile_args and extra_link_args options to the Extension builder. (See also here.)
You might need to hack in some OS-dependent code to get the right argument format for a particular platform though.
Any possibility that this might work?
g++ -Wl,-Bstatic -lfoo -Wl,-Bdynamic -lbar -Wl,--as-needed
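If you try that route from setup.py, here is a rough sketch of how those GNU-ld flags could be fed through extra_link_args, guarded by a platform check as suggested above. This is untested, and 'foo', 'bar', and the module/source names are placeholders:

import sys
from setuptools import Extension

# GNU ld only: -Bstatic/-Bdynamic toggle how the following -l options are
# resolved, so keep this off Windows/MSVC builds
if sys.platform.startswith('linux'):
    extra_link_args = ['-Wl,-Bstatic', '-lfoo', '-Wl,-Bdynamic', '-lbar']
else:
    extra_link_args = []

ext = Extension('example.core',          # placeholder module name
                sources=['example.c'],   # placeholder source list
                extra_link_args=extra_link_args)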
The scikit-build distribution provides usage examples of FindF2PY and UseF2PY, but they are incomplete: they only show a partial CMakeLists.txt, without the other required files. From the documentation alone I have not been able to produce something that builds.
Following the examples in the scikit-build documentation, I created the following files:
CMakeLists.txt:
cmake_minimum_required(VERSION 3.10.2)
project(skbuild_test)
enable_language(Fortran)
find_package(F2PY REQUIRED)
add_f2py_target(f2py_test f2py_test.f90)
add_library(f2py_test MODULE f2py_test.f90)
install(TARGETS f2py_test LIBRARY DESTINATION f2py_test)
setup.py:
import setuptools
from skbuild import setup

requires = ['numpy']

setup(
    name="skbuild-test",
    version='0.0.1',
    description='Performs line integrals through SAMI3 grids',
    author='John Haiducek',
    requires=requires,
    packages=['f2py_test']
)
f2py_test.f90:
module mod_f2py_test
  implicit none
contains

  subroutine f2py_test(a, b, c)
    real(kind=8), intent(in) :: a, b
    real(kind=8), intent(out) :: c
    c = a + b  ! give the intent(out) argument a value
  end subroutine f2py_test

end module mod_f2py_test
In addition, I created a directory f2py_test containing an empty __init__.py.
The output from python setup.py develop shows that scikit-build invokes CMake and compiles my Fortran code. However, it fails to find Python.h while compiling the f2py wrapper code:
[2/7] Building C object CMakeFiles/_f2...kages/numpy/f2py/src/fortranobject.c.o
FAILED: CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc -O3 -DNDEBUG -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk -mmacosx-version-min=10.14 -MD -MT CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o -MF CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o.d -o CMakeFiles/_f2py_runtime_library.dir/venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c.o -c ../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c
In file included from ../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.c:2:
../../../venv/lib/python3.8/site-packages/numpy/f2py/src/fortranobject.h:7:10: fatal error: 'Python.h' file not found
#include "Python.h"
^~~~~~~~~~
1 error generated.
First caveat: there may be a better way to do this; I only just figured out how to make scikit-build work by stumbling on your post and reading the documentation. Second caveat: I'm also still learning CMake.
There are a couple of things you'll need to make your example work. The big one is that the second argument of add_f2py_target() is not the source file: it is either the name of a pre-generated *.pyf file or, to let f2py generate one, an argument without the *.pyf extension. The other is adding the include directories for the various components.
I made your example work with the following CMakeLists.txt:
cmake_minimum_required(VERSION 3.10.2)
project(skbuild_test)
enable_language(Fortran)
find_package(F2PY REQUIRED)
find_package(PythonLibs REQUIRED)
find_package(Python3 REQUIRED COMPONENTS NumPy)
# The following command points to an existing .pyf file if the argument has a
# .pyf extension; otherwise f2py generates one. The argument is not a source file.
add_f2py_target(f2py_test f2py_test)
add_library(f2py_test MODULE f2py_test.f90)
include_directories(${PYTHON_INCLUDE_DIRS})
include_directories(${_Python3_NumPy_INCLUDE_DIR})
target_link_libraries(f2py_test ${PYTHON_LIBRARIES})
install(TARGETS f2py_test LIBRARY DESTINATION f2py_test)
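Assuming the build and install succeed, the wrapped subroutine should then be callable from Python roughly like this. This is an untested sketch; the exact attribute path depends on how f2py names the generated extension and on the f2py_test package layout:

# hypothetical usage; f2py typically exposes Fortran module members as
# attributes of the generated extension module
from f2py_test import f2py_test
c = f2py_test.mod_f2py_test.f2py_test(1.0, 2.0)
print(c)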
I'm trying to compile a Python wrapper to a small C++ library I've written. I've written the following setup.py script to try to use setuptools to compile the wrapper:
from setuptools import setup, Extension
import numpy as np
import os
atmcmodule = Extension(
    'atmc',
    include_dirs=[np.get_include(), '/usr/local/include'],
    libraries=['mcopt', 'c++'],  # my C++ library is at ./build/libmcopt.a
    library_dirs=[os.path.abspath('./build')],
    sources=['atmcmodule.cpp'],
    language='c++',
    extra_compile_args=['-std=c++11', '-v'],
)

setup(name='tracking',
      version='0.1',
      description='Particle tracking and MC optimizer module',
      ext_modules=[atmcmodule],
      )
However, when I run python setup.py build on OS X El Capitan, clang complains about not finding some C++ standard library headers:
In file included from atmcmodule.cpp:7:
In file included from ./mcopt.h:11:
In file included from ./arma_include.h:4:
/usr/local/include/armadillo:54:12: fatal error: 'initializer_list' file not found
#include <initializer_list>
^
1 error generated.
error: command 'gcc' failed with exit status 1
Passing the -v flag to the compiler shows that it is searching the following include paths:
#include <...> search starts here:
/Users/[username]/miniconda3/include
/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include
/usr/local/include
/Users/[username]/miniconda3/include/python3.4m
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1/backward
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.0/include
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/System/Library/Frameworks (framework directory)
End of search list.
This apparently doesn't include the path to the C++ standard library headers. If I compile a small test C++ source with the -v option, I can see that clang++ normally also searches the path /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1, and if I include this path in the include_dirs option for Extension in my setup.py script, then the extension module compiles correctly and works. However, hard-coding this path into the script doesn't seem like a good solution since this module also needs to work on Linux.
So, my question is how do I properly make setuptools include the required headers?
Update (11/22/2015)
As setuptools tries to compile the extension, it prints the first command it's running:
gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/[username]/miniconda3/include -arch x86_64 -I/Users/[username]/miniconda3/lib/python3.4/site-packages/numpy/core/include -I/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/mcopt -I/usr/local/include -I/Users/[username]/miniconda3/include/python3.4m -c /Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.cpp -o build/temp.macosx-10.5-x86_64-3.4/Users/[username]/Documents/Code/ar40-aug15/monte_carlo/atmc/atmcmodule.o -std=c++11 -fopenmp -v
If I paste this command into a terminal and run it myself, the extension compiles successfully. So I suspect either setuptools is modifying some environment variables I'm not aware of, or it's lying a little about the commands it's actually running.
Setuptools tries to compile C/C++ extension modules with the same flags used to compile the Python interpreter. After checking the flags used to compile my Python install (from Anaconda), I found it was compiling for a minimum Mac OS X version of 10.5. This seems to make it use the GCC libstdc++ instead of clang's libc++ (which supports C++11).
This can be fixed by either setting the environment variable MACOSX_DEPLOYMENT_TARGET to 10.9 (or later), or adding '-mmacosx-version-min=10.9' to extra_compile_args.
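A minimal sketch of the second option, assuming the flag is only wanted on macOS; the Extension arguments mirror the ones from the question:

import sys
from setuptools import Extension

extra_compile_args = ['-std=c++11']
if sys.platform == 'darwin':
    # target a macOS version whose SDK uses libc++ rather than libstdc++
    extra_compile_args.append('-mmacosx-version-min=10.9')

atmcmodule = Extension('atmc',
                       sources=['atmcmodule.cpp'],
                       language='c++',
                       extra_compile_args=extra_compile_args)

Alternatively, export MACOSX_DEPLOYMENT_TARGET=10.9 in the environment before running python setup.py build.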
I'm starting to study the Python/C API, and I wrote this first piece of code to test some functions:
file: test.c
#include "Python.h"
int main() {
PyObject* none = Py_BuildValue("");
}
I compile with command:
gcc -I/usr/include/python2.7 test.c
I get the error: undefined reference to `Py_BuildValue'
Then I run:
gcc -I/usr/include/python2.7 --shared -fPIC hashmem.c
This compiles without errors, but when I run the compiled file I get a
Segmentation fault (core dumped)
How do I set the gcc parameters?
I've ubuntu 12.04, python 2.7.3, gcc 4.6.3 and I installed python-dev.
Thanks.
In the comments, @Pablo provided the solution:
gcc -I/usr/include/python2.7 test.c -lpython2.7
I forgot to link the python library with the "-l" parameter.
-llibrary
-l library
Search the library named library when linking. (The second alternative, with the library as a separate argument, is only for POSIX compliance and is not recommended.)
It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, 'foo.o -lz bar.o' searches library 'z' after file foo.o but before bar.o. If bar.o refers to functions in 'z', those functions may not be loaded.
The linker searches a standard list of directories for the library, which is actually a file named liblibrary.a. The linker then uses this file as if it had been specified precisely by name.
The directories searched include several standard system directories plus any that you specify with -L.
Normally the files found this way are library files (archive files whose members are object files). The linker handles an archive file by scanning through it for members which define symbols that have so far been referenced but not defined. But if the file that is found is an ordinary object file, it is linked in the usual fashion. The only difference between using an -l option and specifying a file name is that -l surrounds library with 'lib' and '.a' and searches several directories.
Parameter description source
So I have a few Python C extensions that I previously built for, and used with, 32-bit Python running on Win7. I have now switched to 64-bit Python, and I am having issues building the C extensions with MinGW-w64.
I made the changes to distutils as per this post, but I am getting some weird errors suggesting something is wrong:
$ python setup.py build
running build
running build_ext
building 'MyLib' extension
c:\MinGW64\bin\x86_64-w64-mingw32-gcc.exe -mdll -O -Wall -Ic:\Python27\lib\site-packages\numpy\core\include -Ic:\Python27\include -Ic:\Python27\PC -c MyLib.c -o build\temp.win-amd64-2.7\Release\mylib.o
MyLib.c: In function 'initMyLib':
MyLib.c:631:5: warning: implicit declaration of function 'Py_InitModule4_64' [-Wimplicit-function-declaration]
writing build\temp.win-amd64-2.7\Release\MyLib.def
c:\MinGW64\bin\x86_64-w64-mingw32-gcc.exe -shared -s build\temp.win-amd64-2.7\Release\mylib.o build\temp.win-amd64-2.7\Release\MyLib.def -Lc:\Python27\libs -Lc:\Python27\PCbuild\amd64 -lpython27 -o build\lib.win-amd64-2.7\MyLib.pyd
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x13d): undefined reference to `__imp_PyExc_ValueError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1275): undefined reference to `__imp_PyExc_ValueError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1eef): undefined reference to `__imp_PyExc_ImportError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1f38): undefined reference to `__imp_PyExc_AttributeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1f4d): undefined reference to `__imp_PyCObject_Type'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1f61): undefined reference to `__imp_PyExc_RuntimeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1fc7): undefined reference to `__imp_PyExc_RuntimeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x1ffe): undefined reference to `__imp_PyExc_RuntimeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x2042): undefined reference to `__imp_PyExc_RuntimeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x206c): undefined reference to `__imp_PyExc_RuntimeError'
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x208a): more undefined references to `__imp_PyExc_RuntimeError' follow
build\temp.win-amd64-2.7\Release\mylib.o:MyLib.c:(.text+0x20a7): undefined reference to `__imp_PyExc_ImportError'
collect2.exe: error: ld returned 1 exit status
error: command 'x86_64-w64-mingw32-gcc' failed with exit status 1
I have googled around quite a bit, but it's not easy to find a definite answer. Could someone shed some light on this? What further changes do I need to make to successfully build C extensions for 64-bit Python on Win7?
EDIT:
After some helpful pointers in cgohlke's comments below, I managed to generate libpython27.a. However, after following the advice in this post (second to last), I still had the __imp_Py_InitModule4_64 error. After some serious Google-fu I stumbled over this post, which told me to rename the Py_InitModule4 line to Py_InitModule4_64. After that everything worked swimmingly.
This worked for me with Python 3.3:
create a static import library from the python dll
the python dll is usually in C:/Windows/System32; in an MSYS shell:
gendef.exe python33.dll
dlltool.exe --dllname python33.dll --def python33.def --output-lib libpython33.a
mv libpython33.a C:/Python33/libs
use swig to generate wrappers
e.g., swig -c++ -python myExtension.i
the wrapper MUST be compiled with MS_WIN64 defined, or Python will crash when you import the class
g++ -c myExtension.cpp -I/other/includes
g++ -DMS_WIN64 -c myExtension_wrap.cxx -IC:/Python33/include
shared library
g++ -shared -o _myExtension.pyd myExtension.o myExtension_wrap.o -lPython33 -lOtherSharedLibs -LC:/Python33/libs -LC:/path/to/other/shared/libs
make sure all shared libs (gdal, OtherSharedLibs) are in your PATH (Windows does not use LD_LIBRARY_PATH or PYTHONPATH)
in Python, just: import myExtension
voila!
I realize this is an old question, but it is still the top search result. Today, in 2019, I was able to do this:
https://github.com/PetterS/quickjs/commit/67bc2428b8c0716538b4583f4f2b0a2a5a49106c
In short:
Make sure a 64-bit version of mingw-w64 is in the PATH.
Monkey-patch distutils:
import distutils.cygwinccompiler
distutils.cygwinccompiler.get_msvcr = lambda: []
There are some differences in how the shell handles escaping.
Pass extra_link_args = ["-Wl,-Bstatic", "-lpthread"] in order to link statically and avoid extra runtime dependencies (see the combined sketch after this list).
pipenv run python setup.py build -c mingw32 now works.
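Putting the monkey-patch and the link flags from the list above together, a setup.py for this approach might look roughly like the following sketch (not a copy of the linked commit; the extension and source names are placeholders):

import distutils.cygwinccompiler
from setuptools import Extension, setup

# mingw-w64 does not need the MSVC runtime library that distutils would
# otherwise try to add to the link line
distutils.cygwinccompiler.get_msvcr = lambda: []

ext = Extension('_example',              # placeholder extension name
                sources=['example.c'],   # placeholder source list
                extra_link_args=['-Wl,-Bstatic', '-lpthread'])

setup(name='example', version='0.1', ext_modules=[ext])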
Here is example code for the VC++ Build Tools:
https://github.com/starnight/python-c-extension/tree/master/00-HelloWorld
You could try:
python setup.py -c mingw32
However, this did not work for me.
My Solution is:
install Anaconda 64bit python 3.6
install mingw64
add mingw64/bin to PATH
compile a DLL from the C file:
gcc -c libmypy.c -IC:\Users\{user_name}\Anaconda3\pkgs\python-3.6.4-h6538335_1\include
gcc -shared -o libmypy.dll libmypy.o -LC:\Users\{user_name}\Anaconda3\pkgs\python-3.6.4-h6538335_1\libs -lPython36
load the DLL file in a .py script:
from ctypes import *
m = cdll.LoadLibrary(r"C:\{path_to_dll}\libmypy.dll")
print(m.hello())
I created a monkey-patch for setuptools to let you to build_ext with mingw64 on Windows easily. See https://github.com/imba-tjd/mingw64ccompiler
I used this thread to wade through learning how to make a C extension, and since most of what I learned is in it, I thought I'd put the final discovery here too, so that someone else can find it if they are looking.
I wasn't trying to compile anything big, just the example in Hetland's Beginning Python. Here is what I did (the example C program is called palindrome.c). I'm using Anaconda with Python 3.7 and the TDM-GCC build of MinGW64. I put all of the tools used into my Path, all of the paths needed into PYTHONPATH, and the ..\Anaconda3 directory into PYTHON_HOME. I still ended up using explicit paths on some things.
I created the libpython37.a library with gendef.exe and dlltool.exe as Mark said above, and put it in ..\Anaconda3\libs.
I followed the prescription in Hetland:
gcc -c palindrome.c
gcc -I$PYTHON_HOME -I$PYTHON_HOME/Include -c palindrome_wrap.c
The second command failed because the compiler couldn't find Python.h; the following worked instead:
gcc -I[somedirectories]\Anaconda3\Include -c palindrome_wrap.c
I then ran, as many have said (including Hetland, 3rd ed.):
gcc -shared palindrome.o palindrome_wrap.o [somedirectories]/Anaconda3/libs/libpython37.a -o _palindrome.dll
This did not work, even with the LoadLibrary call cswu used (which I found elsewhere, too).
So I gendef'd _palindrome.dll and could not find the function "is_palindrome" in its exports. I went through some of the SWIG documentation and declared the function both in the %{ %} section and below it, both extern; that finally got the function extern'd in palindrome_wrap.c as it should have been. But there was still no export, so I went back into palindrome.c and redeclared the function as:
__declspec(dllexport) extern int __stdcall is_palindrome(char* text)
and redeclared it in palindrome.i in both places as above with this signature.
Partial success! The function was listed in the Export section when I gendef'd _palindrome.dll, and I could make cswu's call using LoadLibrary. But I still could not do what Hetland says and run
import _palindrome
in Python.
Going back to all the sources again, I could not figure this out. I finally started reading the SWIG documentation from the beginning, leaving no stone unturned; searching the manual does not turn up the relevant passage.
At the end of Introduction Sec. 2.7 Incorporating Into a Build System, under the sample Make process, it says:
"The above example will generate native build files such as makefiles, nmake files and Visual Studio projects which will invoke SWIG and compile the generated C++ files into _example.so (UNIX) or _example.pyd (Windows). For other target languages on Windows a dll, instead of a .pyd file, is usually generated."
And that's the answer to the last problem:
The compile step for the dll should read:
gcc -shared palindrome.o palindrome_wrap.o [somedirectories]/Anaconda3/libs/libpython37.a -o _palindrome.pyd
(I didn't go back and change out my __declspec declarations, so I don't know whether they were necessary; they were still there too.)
I got a file, _palindrome.pyd, which works if it is on the PYTHONPATH (mine was local), and one can then do
import _palindrome
from _palindrome import is_palindrome
and use the exported, properly wrapped and packaged C function, compiled with TDM-GCC, in Python as promised. gcc, which here is MinGW64 from a different installation, knows how to produce the .pyd file. I diffed the dll and the pyd since they were the same byte length; they differ at hundreds of points.
Hope this helps someone else.
I am working on an application written in C. One part of the application should embed Python, and that is where my current problem lies. I am trying to link my source against the Python library, but it does not work.
As I use MinGW I have created the python26.a file from python26.lib with dlltool and put the *.a file in C:/Program Files (x86)/python/2.6/libs.
I compile the file with this command:
gcc -shared -o mod_python.dll mod_python.o "-LC:\Program Files (x86)\python\2.6\libs" -lpython26 -Wl,--out-implib,libmod_python.a -Wl,--output-def,mod_python.def
and I get those errors:
Creating library file: libmod_python.a
mod_python.o: In function `module_init':
mod_python.c:34: undefined reference to `__imp__Py_Initialize'
mod_python.c:35: undefined reference to `__imp__PyEval_InitThreads'
... and so on ...
My Python "root" folder is C:\Program Files (x86)\python\2.6
The Devsystem is a Windows Server 2008
GCC Information: Reading specs from C:/Program Files (x86)/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.5 (mingw-vista special r3)
What am I doing wrong? How do I get it compiled and linked? :-)
Cheers, gregor
Edit:
I forgot to write information about my Python installation: It's the official python.org installation 2.6.1
... and how I created the python.a file:
dlltool -z python.def --export-all-symbols -v c:\windows\system32\python26.dll
dlltool --dllname c:\Windows\system32\python26.dll --def python.def -v --output-lib python26.a
Well, on Windows the Python distribution already comes with a libpython26.a in the libs subdirectory, so there is no need to generate .a files using dlltool.
I did try a little example with a single C file toto.c:
gcc -shared -o ./toto.dll ./toto.c -I/Python26/include/ -L/Python26/libs -lpython26
And it works like a charm. Hope it will help :-)
Python (at least my distribution) comes with a "python-config" program that automatically creates the correct compiler and linker options for various situations. However, I have never used it on Windows. Perhaps this tool can help you though?
IIRC, dlltool does not always work. Having Python 2.6 plus WOW64 makes things even less likely to work. For NumPy, here is how I did it: basically, I use objdump.exe to dump the export table from the DLL, which I parse to generate the .def file. You should check whether your missing symbols are in the .def; otherwise it won't work.
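For illustration, a rough sketch of that objdump-based approach (a hypothetical, untested helper; the regular expression assumes objdump -p output that contains an [Ordinal/Name Pointer] table):

# sketch: dump the DLL headers, pull the exported symbol names out of the
# [Ordinal/Name Pointer] table, and write a .def file that dlltool can use
import re
import subprocess

def write_def(dll_path, def_path):
    out = subprocess.check_output(['objdump', '-p', dll_path]).decode()
    # name-pointer entries look roughly like "  [  42] Py_BuildValue"
    symbols = re.findall(r'^\s*\[\s*\d+\]\s+(\w+)\s*$', out, flags=re.MULTILINE)
    with open(def_path, 'w') as f:
        f.write('EXPORTS\n')
        f.writelines(name + '\n' for name in symbols)

# write_def('python26.dll', 'python26.def')
# afterwards: dlltool --dllname python26.dll --def python26.def --output-lib libpython26.a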