How to deploy a Python program to a computer that's offline? - python

I have a small python app I've developed for file validation. I need to deploy it to a server that has no outside internet access. It utilizes a number of libraries (pandas, glob, os, datetime, pyodbc and numpy).
I was able to install Python 3.8.5 on the server, and I attempted to use PyInstaller to wrap everything, including the libraries, into a single exe that could run on the server, but it did not work. I am trying that again using the --onefile flag. Error message below:
Traceback (most recent call last):
File "BDWtoStuckyValidation.py", line 2, in <module>
import pandas as pd
File "C:\Program Files\Python38\lib\site-packages\pandas\__init__.py", line 16, in <module>
raise ImportError(
ImportError: Unable to import required dependencies:
dateutil: No module named 'dateutil'
The server is completely offline - I cannot simply use pip to install the missing libraries. Additionally, my work computer is VERY locked down, so I also cannot simply download WHL files or similar and manually install them on the server. Anything that has to be transferred to the server needs to be downloaded on my personal laptop, transferred to my work laptop via Bluetooth, and then to the server via the network.
I did run pip download for my needed modules on a machine with internet access, bundled the whl files into a tar.gz file, and transferred that to the server in question, but it still wouldn't install when attempting to run pip install on the tar.gz.
Update: I unzipped the tar.gz file and attempted to install some WHL files manually; it got stuck on pandas, attempting to download something or connect to pypi.org, which it obviously can't.
Any help would be appreciated.

The process you described should work, but you need to ensure that you get all of the indirect dependencies (the things that your direct dependencies depend on). A good way to do that is with a tool like pip-compile (part of pip-tools), where you declare your direct requirements in a requirements.in file and let it figure out the complete list of dependencies and versions (written to requirements.txt, generally). Once you have that list, you can download all the appropriate wheel files and proceed just as you described.
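For example, a minimal sketch of that workflow (the package names in requirements.in come from the question; the pinned versions pip-compile writes out will depend on your environment):

# requirements.in
pandas
numpy
pyodbc

pip install pip-tools
pip-compile requirements.in   # writes requirements.txt with every transitive dependency pinned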
The reason that it would not work with your .tar.gz file is that pip expects a .tar.gz file to be a source distribution for a single package, rather than a tarball containing wheel files for many different packages.
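A rough sketch of the full offline round trip (the wheelhouse directory name is arbitrary): run the download step on the machine with internet access, copy the directory across, then point pip at it on the server so it never touches PyPI:

pip download -r requirements.txt -d wheelhouse
pip install --no-index --find-links=wheelhouse -r requirements.txt

If the downloading machine and the server differ in operating system or Python version, pip download's --platform, --python-version and --only-binary=:all: options can be used to fetch wheels that match the target rather than the local machine.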

Related

How can I import external python libraries in python shell AWS Glue job

I have been trying to import external Python libraries in an AWS Glue Python Shell job.
I have uploaded the whl file for pyodbc to S3.
I referenced the S3 path in "Python library path" in the additional properties of the Glue job.
I also tried setting the job parameter --extra-py-files with the S3 path of the whl file as its value.
Whenever I write the line "from pyodbc import pyodbc as db" or just "import pyodbc", it always returns "ModuleNotFoundError: No module named 'pyodbc'".
Logs are shown as below:
Processing ./glue-python-libs-cq4p0rs8/pyodbc-4.0.32-cp310-cp310-win_amd64.whl
Installing collected packages: pyodbc
Successfully installed pyodbc-4.0.32
WARNING: The directory '/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
File "/tmp/glue-python-scripts-g_mt5xzp/Glue-ETL-Dev.py", line 2, in
ModuleNotFoundError: No module named 'pyodbc'
I am downloading the wheel files from here: https://pypi.org/project/pyodbc/#files
No matter how many versions of whl files I reference in the Glue job, it always throws the same error.
Can anyone enlighten me as to where it's going wrong?
I have tried to follow these guides in the official AWS documentation [1], [2], but I was facing some issues when importing some libraries, such as psycopg2. Finally, I managed to import the desired libraries by following the steps of this tutorial from the AWS blog [3]. The blog is in Spanish, but maybe you can manage to translate it.
Basically, what they do is create a setup.py script in which they define the required libraries. Afterwards, they generate a .whl file containing those libraries and upload it to an S3 bucket, from which the Glue Python Shell script gets the required libraries (a minimal sketch is shown after the links below).
[1] https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-libraries.html#aws-glue-programming-python-libraries-job
[2] https://docs.aws.amazon.com/glue/latest/dg/add-job-python.html#create-python-extra-library
[3] https://aws.amazon.com/es/blogs/aws-spanish/usando-python-shell-y-pandas-en-aws-glue-para-procesar-conjuntos-de-datos-pequenos-y-medianos/
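A minimal sketch of the setup.py described above, assuming that workflow (the package name glue_deps and the listed libraries are placeholders; build it with python setup.py bdist_wheel and upload the resulting dist/*.whl to S3):

from setuptools import setup, find_packages

setup(
    name="glue_deps",                        # placeholder name for the dependency bundle
    version="0.1",
    packages=find_packages(),
    install_requires=["pyodbc", "pandas"],   # libraries the Glue job needs
)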

Installing py2exe python module offline

I've been trying to install the py2exe module (for Python 3) offline (it's a stand-alone network) without installing any dependencies first (I couldn't find a list of dependencies anywhere).
When I try to install it, it fails with a missing-module error:
no module named py2exe_distutils
The file is in a .tar.gz format.
I run pip install on the file in cmd, from the directory the file is in.
I would appreciate any help regarding this.
I think you need this:
https://github.com/py2exe/py2exe/releases/tag/v0.10.4.0
This release contains the offline setup files.
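For an offline install, a sketch of pointing pip at the downloaded wheel so it never tries to reach PyPI (the filename shown is an example for Python 3.8 on 64-bit Windows; use whichever wheel from that release matches your interpreter):

pip install --no-index py2exe-0.10.4.0-cp38-cp38-win_amd64.whl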

Can't import manually installed library in Pycharm

I am writing a Python script which will run in AWS as a Lambda function. Since it needs to connect to a Postgres database, the library psycopg2 is required. It seems the standard psycopg2 does not work with AWS Lambda, so I downloaded it from this git repo.
I am using virtualenv for all the dependencies, so I copied the psycopg2-3.6 folder from the downloaded package to [myproject]/env/Lib/site-packages. In my main script the library is imported:
import psycopg2
However when I run it in PyCharm, I got error:
File "C:\Users\dxx0111\WorkSpace\iq-iot-lambda\app.py", line 2, in <module>
import psycopg2
File "C:\Users\dxx0111\WorkSpace\iq-iot-lambda\env\lib\site-packages\psycopg2\__init__.py", line 50, in <module>
from psycopg2._psycopg import ( # noqa
ModuleNotFoundError: No module named 'psycopg2._psycopg'
Based on the error message, it looks like it was able to locate the psycopg2 directory under the virtual environment's package folder. It just couldn't find psycopg2._psycopg. What am I missing here?
As it turns out, the psycopg2 library downloaded from that link only works on Amazon Linux, because that is the platform it was compiled for; it doesn't work on my Windows machine. In order to make it work locally, I had to pip install psycopg2 into my virtual env. When I deploy to AWS Lambda, though, I build the zip with the downloaded library. So there is a different psycopg2 build for each platform, and now it works in both places.
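A rough sketch of that two-track setup (the awslambda-psycopg2 and build directory names are placeholders; psycopg2-3.6 is the folder named in the question):

pip install psycopg2                                   # local runs in PyCharm use the normal Windows build
cp -r awslambda-psycopg2/psycopg2-3.6 build/psycopg2   # the deployment zip gets the Amazon Linux build
cd build && zip -r ../lambda.zip .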

Python failing to install module "spacepy"

I'm currently trying to install the Python package spacepy due to its ability to read CDF files, along with a few other useful functions. However, any time I try to install this module I receive a myriad of errors - whether I try to install it via Anaconda, command prompt, or by downloading the package manually and running setup.py from the package directory. Currently, I've spent hours trying to chase down these errors, but as I'm not a programmer it's been slow going.
I've managed to "install" it, however the module throws an error when trying to load it:
Traceback (most recent call last):
File "<ipython-input-1-4bcf91e29885>", line 1, in <module>
import spacepy
File "C:\Anaconda\lib\site-packages\spacepy\__init__.py", line 329, in <module>
_read_config(rcfile)
File "C:\Anaconda\lib\site-packages\spacepy\__init__.py", line 297, in _read_config
_write_defaults(rcfile, defaults)
File "C:\Anaconda\lib\site-packages\spacepy\__init__.py", line 236, in _write_defaults
key=k, value=defaults[k], ver=__version__))
IOError: [Errno 0] Error
...and so I don't believe it's been installed properly, and one or more of the errors from the initial build is causing issues.
This package has a number of dependencies, most being other Python modules. The only one that the installer would be unable to handle itself is the Fortran compiler (which I have installed myself using MinGW); however, this shouldn't prevent the package from installing.
Here is the complete log of errors that I receive when trying to force-reinstall it via the command prompt:
python -m pip install --upgrade --force-reinstall spacepy
So it turns out that, among a few smaller errors with the dependencies here and there (which could be fixed just by following the errors thrown), the major issue was the version of numpy. Spacepy was designed for numpy v1.6 and doesn't seem to work with newer versions of numpy (like the current v1.12).
Rolling back my version of numpy, as well as moving over to a Linux virtual environment (which allowed complete control of modules and dependencies), eventually got spacepy onto my system. Now I've just got to become more familiar with Linux!
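A sketch of the rollback, assuming pip can find or build a numpy 1.6 release for your platform (on conda, conda install numpy=1.6 is the rough equivalent):

python -m pip install "numpy==1.6.*"                        # pin numpy to the version spacepy targets
python -m pip install --no-deps --force-reinstall spacepy   # reinstall spacepy without letting pip upgrade numpy again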

distribute a usable python program to a python which doesn't have Distribute installed?

How do I distribute a usable Python program to a machine whose Python doesn't have Distribute installed?
I've created a Windows Binary installer using python setup.py bdist_wininst. The setup.py contains a console script entry point, so that a Windows .exe file is created and placed in %python%\Scripts on the destination machine, as per Automatic Script Creation.
However, running the installed script on a machine with a fresh Python install yields:
D:\Py3.2.5> scripts\foo.exe
Traceback (most recent call last):
File "D:\Py3.2.5\scripts\foo-script.py", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
No module named pkg_resources tells me this error is because Distribute is not installed.
How do I get my installer to include Distribute so I don't have to tell our users "before you install our program you have to go install this other program"?
Ahh, finally found something:
Your users might not have setuptools installed on their machines, or
even if they do, it might not be the right version. Fixing this is
easy; just download distribute_setup.py, and put it in the same
directory as your setup.py script. (Be sure to add it to your revision
control system, too.) Then add these two lines to the very top of your
setup script, before the script imports anything from setuptools:
import distribute_setup
distribute_setup.use_setuptools()
That's it. The distribute_setup module will automatically download a
matching version of setuptools from PyPI, if it isn't present on the
target system. Whenever you install an updated version of setuptools,
you should also update your projects' distribute_setup.py files, so
that a matching version gets installed on the target machine(s).
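A minimal sketch of a setup.py laid out that way (the project metadata is a placeholder, with the foo console script matching the question):

import distribute_setup
distribute_setup.use_setuptools()

from setuptools import setup

setup(
    name="foo",          # placeholder project name
    version="1.0",
    py_modules=["foo"],
    entry_points={"console_scripts": ["foo = foo:main"]},
)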
