I am using Sphinx with sphinxcontrib-spelling to do spell checking for my documentation.
I get different results when running the spell check locally (Windows 10) and running it on my CI/CD GitLab machine (Ubuntu).
Both run the same versions of Python and the libraries.
My pipeline is as follows:
- sudo apt-get install software-properties-common -y
- sudo add-apt-repository ppa:deadsnakes/ppa -y
- sudo apt-get update -y
- sudo apt-get install python3.8 python3.8-dev python3.8-venv python3-pip python3-venv python-enchant -y
- python3.8 -m venv ../myenv
- source ../myenv/bin/activate
- python3.8 -m pip install --upgrade pip
- python3.8 -m pip install -U sphinx sphinx-sitemap sphinx-last-updated-by-git sphinxcontrib-bibtex sphinxcontrib-spelling
- python3.8 -m pip install -U --force-reinstall sphinx-aimms-theme
- python3.8 -m sphinx -W --keep-going -b spelling . _build/spelling
My conf.py file has a language configuration to avoid any differences between machines:
spelling_lang='en_US'
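For completeness, the spelling-related part of conf.py looks roughly like this (a sketch; the word-list filename is illustrative, and spelling_word_list_filename is the sphinxcontrib-spelling option behind the custom ignore list mentioned below):

# conf.py -- spelling-related settings (sketch, filename illustrative)
extensions = [
    "sphinxcontrib.spelling",
    # ... other extensions ...
]

# Pin the dictionary language so both machines check against en_US
spelling_lang = "en_US"

# Custom list of words to ignore, one word per line
spelling_word_list_filename = "spelling_wordlist.txt"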
The check on the Linux machine seems to be more thorough than the local one, since my current branch has 0 misspelled words locally and 32 on the CI/CD (including some obvious mistakes that are not being detected locally).
I tried running both on Python 3.10 and updating all libraries, but I get the same results. I do use a custom word list to ignore certain words, but none of the 32 misspelled words are on this list.
Is there some limit when checking locally? Is it ignoring certain files? I am at a loss as to why there is a difference.
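One thing worth ruling out (an assumption on my part, not something confirmed above): sphinxcontrib-spelling delegates the actual checking to PyEnchant, and on Windows the PyEnchant wheels bundle their own provider and dictionaries, while on Linux the python-enchant package uses whatever providers and dictionaries the system enchant library finds. The two machines may therefore be consulting different en_US dictionaries even with identical library versions. A quick diagnostic to run on both machines:

# Compare the Enchant backend and the en_US provider on both machines
import enchant

print(enchant.get_enchant_version())            # version of the underlying enchant C library
broker = enchant.Broker()
print([p.name for p in broker.describe()])      # available providers (hunspell, aspell, ...)
print(broker.request_dict("en_US").provider)    # provider actually serving en_US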
My branch isn't available yet, but the same behaviour can be seen on the master branch of the original repository: https://github.com/aimms/documentation
I tested with commit 7c2103843ead3bdaa728ae56eab968c3f147f520
CI/CD Linux build:
WARNING: Found 1292 misspelled words
build finished with problems, 1 warning.
Local Windows build:
WARNING: Found 1608 misspelled words
build finished with problems, 40 warnings.
I want to run exactly Python 3.8.13 in my container, but so far, using the line below, I get a very large Docker image:
RUN yum update -y && yum install -y python3.8 python38-pip && yum clean all
The command "yum install python3.8" installs 3.8.13 and this is fine, but as mentioned, the end result (with other required elements) is a bit above 2 GB once built. I would like to make the image smaller and I am wondering if I can use a slim or alpine version of Python 3.8.13.
I was trying with the following commands:
yum install -y python3.8.13-slim
yum install -y python3.8.13-slim-buster
yum install -y python3.8-slim
None of these succeeded; yum does not recognize them as valid packages.
Is there a workaround for this?
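As far as I know, slim and alpine are tags of the official python images on Docker Hub, not yum packages, so yum cannot install them; they have to be chosen as the base image instead. If switching the base image is an option, a minimal sketch (the extra package is a placeholder for whatever the build actually needs):

# Official slim image, pinned to exactly Python 3.8.13
FROM python:3.8.13-slim

# Debian-based, so use apt-get rather than yum for extra packages
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*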
Up until recently I've been using the openssl library within the python:3.6.6-jessie Docker image, and things worked as intended.
I'm using very basic Dockerfile configuration to install all necessary dependencies:
FROM python:3.6.6-jessie
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
RUN apt-get -qq update
RUN apt-get install -y openssl
RUN apt-get upgrade -y openssl
COPY requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
I access and initialize the library itself with these lines:

from ctypes import cdll

openssl = cdll.LoadLibrary("libssl.so")
openssl.SSL_library_init()
Things were working great with this approach.
This week I was upgrading Python and the libraries, and as a result I switched to a newer Docker image:
FROM python:3.7.5
...
This immediately caused openssl to stop working, with this exception:
AttributeError: /usr/lib/x86_64-linux-gnu/libssl.so.1.1: undefined symbol: SSL_library_init
From this error I understand that libssl no longer provides the SSL_library_init method (or so it seems), which is rather weird, because the initializer name in the OpenSSL documentation is the same.
I also tried to resolve this using the -stretch and -buster distributions, but the issue remains.
What is the correct approach to running SSL_library_init in these newer distributions? Maybe some additional Dockerfile configuration is required?
I think you need to install libssl1.0-dev
RUN apt-get install -y libssl1.0-dev
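Alternatively, if staying on the newer libssl is preferred: in OpenSSL 1.1.0 the SSL_library_init symbol was removed from the library (initialization now happens automatically, and the old name became a macro for OPENSSL_init_ssl, which is why the documentation still shows it). A version-tolerant sketch, assuming the standard OpenSSL 1.1 ABI:

from ctypes import cdll, c_int, c_uint64, c_void_p

openssl = cdll.LoadLibrary("libssl.so")
try:
    # OpenSSL 1.0.x and earlier export SSL_library_init directly
    openssl.SSL_library_init()
except AttributeError:
    # OpenSSL 1.1.0+: explicit init is optional; opts=0, settings=NULL
    init = openssl.OPENSSL_init_ssl
    init.argtypes = [c_uint64, c_void_p]
    init.restype = c_int
    init(0, None)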
I am testing functionality which uses Tika-OCR in Python. According to the documentation, Tika also requires Java 8. The test cases work locally, as my machine has Java 8 and Python 3.6 installed, but when I run the unit tests on GitLab, I get an error saying "Unable to run Java, is it installed?" How do I use both Python and Java images in the YAML file?
I tried to use two images in my YAML file, one for Java and one for Python, but it only loads the last one in the sequence. Below is my .gitlab-ci.yml file.
image: java:8
image: python:3.6

test:
  script:
    - export DATABASE_URL=mysql://RC_DOC_APP:rcdoc1030@orrc-db-aurora-cluster.cluster-cxwsh0fkj4mo.us-east-1.rds.amazonaws.com/RC_DOC
    - apt-get update -qy
    - pip install --upgrade pip
    - apt-get install -y python-dev python-pip
    - pip install -U setuptools wheel
    - pip install -r requirements.txt
    - python -m nltk.downloader stopwords
    - python -m unittest test.test_classification
Here, only python:3.6 is loaded and not java:8, since it is the last one in the sequence. The requirements file pulls in tika-ocr. My test case is run by the last line, and that is where it gives the error.
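A GitLab CI job can only use a single image, so one common workaround (a sketch, not tested against this project; on the Debian-based python:3.6 image, default-jre pulls in an OpenJDK runtime) is to start from the Python image and install Java inside the job:

image: python:3.6

test:
  script:
    - apt-get update -qy
    - apt-get install -y default-jre    # Java runtime needed by Tika
    - pip install --upgrade pip
    - pip install -r requirements.txt
    - python -m unittest test.test_classification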
I am trying to use Travis for an open-source project that uses OpenCV with Python 3.
before_install:
  - virtualenv venv
  - sudo apt-get update

install:
  - pip install --upgrade pip
  - pip install -r requirements.txt
  # Installing OpenCV
  - sudo apt-get install python-dev python-numpy
  - git clone https://github.com/Itseez/opencv.git
  - cd opencv
  - mkdir build
  - cd build
  - cmake ..
  - make -j4
  - sudo make -j4 install
  - mvn install:install-file -Dfile=/usr/local/share/OpenCV/java/opencv-300.jar -DgroupId=opencv -DartifactId=opencv -Dversion=3.0.0 -Dpackaging=jar
  - cd ../..
Two problems:
The install script fails while compiling.
It takes ages to execute, and I would like a much simpler (and faster) solution. Can't I just apt-get install or pip install something that would do the job just as well?
Thanks to @Catree, pip install opencv-python was the solution.
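For reference, the simplified configuration could then look something like this (a sketch; it assumes the prebuilt opencv-python wheel from PyPI is an acceptable replacement for the source build):

install:
  - pip install --upgrade pip
  - pip install opencv-python    # prebuilt OpenCV wheel, no compilation
  - pip install -r requirements.txt

script:
  - python -c "import cv2; print(cv2.__version__)"    # sanity check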
Installing grpcio-reflection with pip takes a really long time.
It is strange because the pip package is only 8 KB on PyPI, yet downloading takes more than a minute, while other packages that are megabytes in size download really fast.
UPDATE:
It was not downloading; there is a lot of compilation going on. It seems that the feature is still in alpha, so the package is not precompiled like the standard grpcio.
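A quick way to confirm that pip is compiling rather than downloading is its verbose output, which makes the build steps visible:

pip install -v grpcio-reflection    # compiler invocations in the output indicate a source build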
UPDATE2: Repro steps
I have just opened an issue here: https://github.com/grpc/grpc/issues/12992 and I am copying the repro steps here for completeness.
It seems that the grpcio-reflection package's installation time depends on which other packages are in the same command line.
This can easily be reproduced with these two different Docker containers:
Dockerfile.fast - Container creation time ~1m 23s
# Download base Ubuntu image
FROM ubuntu:16.04

RUN apt-get update && \
    apt-get -y install ca-certificates curl

# Prepare pip
RUN apt-get -y install python-pip
RUN pip install -U pip

RUN pip install grpcio grpcio-tools
RUN pip install grpcio-reflection # Two lines is FAST
Dockerfile.slow - Container creation time 5m 20s
# Download base Ubuntu image
FROM ubuntu:16.04

RUN apt-get update && \
    apt-get -y install ca-certificates curl

# Prepare pip
RUN apt-get -y install python-pip
RUN pip install -U pip

RUN pip install grpcio grpcio-tools grpcio-reflection # Single line is SLOW
Timing the two container builds:
time docker build --rm --no-cache -f Dockerfile.fast -t repro_reflbug_fast:latest .
......
real 1m22.295s
user 0m0.060s
sys 0m0.040s
time docker build --rm --no-cache -f Dockerfile.slow -t repro_reflbug_slow:latest .
.....
real 6m28.290s
user 0m0.052s
sys 0m0.052s
.....
I haven't had time to investigate yet, but the second case blocks for a long time while the first one doesn't.
It turns out that this issue was accepted as valid in the corresponding GitHub repo. It is now being discussed here:
https://github.com/grpc/grpc/issues/12992