Travis CI only uploads empty files from the build script - python

I'm currently trying to set up a documentation project on Travis CI. The build script uses the mkdocs library to convert Markdown files into HTML. I've now spent many hours trying to automate the deploy process with Travis CI: it should generate the files directly on Travis CI and then upload them to an FTP server.
What I've tried
So I committed this .travis.yml file to my GitHub repo.
language: python
python:
  - "2.7"
env:
  global:
    #FTP_USERNAME
    - secure: "N9knL6LsuiZ....."
    #FTP_PASSWORD
    - secure: "NrRpwCeay7Y0s....."
install:
  - pip install mkdocs
  - mkdocs --version
script:
  - mkdocs build
after_success:
  - find documentation -type f -exec curl -u "${FTP_USERNAME}:${FTP_PASSWORD}" --verbose --progress-bar --ftp-create-dirs --max-time 30 -T {} ftp://my.ftp-server.com/{} \;
The mkdocs build step writes the generated files into the root folder "documentation". This command actually works even when the target directory on the FTP server does not exist yet, since --ftp-create-dirs creates missing directories.
What does not work
I have tried the same code locally (I just ran the after_success command) and there it uploads the files correctly, with their content. But when Travis CI starts uploading the files to my FTP server, the transfer begins and then hangs until the timeout is hit. When I check the files on the server, only empty files have been created.
Can someone help me figure out why this problem occurs?

I have now found something interesting on the Travis CI blog. It describes that plain FTP is no longer supported in the standard Travis CI environment: because deployments go out through different NATs, the FTP server cannot trust the incoming requests and blocks the content transfers.
Solution
Therefore you have to use SFTP or a VPN connection to your FTP server in order to deploy files from Travis CI. There are also plenty of other CI/CD solutions; I personally use GitHub Actions now, which works very well and even has an FTP upload action for this.
This is most likely why the files end up empty: the upload starts, but the FTP server blocks the data transfer, so no content is ever written.
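If your server also accepts SSH logins, a minimal sketch of an SFTP variant of the after_success command from the question could look like this (this assumes the curl build on the Travis image has SFTP support and that the same credentials work over SSH, which you would need to verify):

find documentation -type f -exec curl -u "${FTP_USERNAME}:${FTP_PASSWORD}" --ftp-create-dirs --max-time 30 -T {} "sftp://my.ftp-server.com/{}" \;

Apart from the sftp:// scheme the flags are unchanged; --ftp-create-dirs also creates missing remote directories over SFTP.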

Related

Heroku not working for Node.js file with python scripts

I made a website in Node.js and used child_process to run Python scripts and use their results in my website. But when I host my application on Heroku, Heroku is unable to run the Python scripts.
I have noticed that Heroku was able to run Python scripts that only used built-in packages like sys and json, but it was failing for scripts that used external packages like requests and beautifulsoup.
Attempt-1
I made a requirements.txt file inside the project folder and pushed my code to Heroku again, but it still did not work. I noticed that Heroku only uses the requirements.txt file for Python-based web applications, not for Node.js applications.
Attempt-2
I made a Python virtual env inside the project folder and installed the required Python packages into the venv. I then removed the venv from .gitignore and pushed the whole venv folder to Heroku. It still didn't work.
Please let me know if anyone has come across a way to handle this.
You can use a Procfile to specify a command that installs the Python dependencies and then starts the server.
Procfile:
web: pip install -r requirements.txt && npm start
Place the Procfile in your project's root directory and deploy to Heroku.
Solution:
Have a requirements.txt file in your project folder.
After you create a Heroku app using "heroku create", Heroku will identify your app as Python-based and will not download any of the Node dependencies.
Now, open your app's settings in the Heroku dashboard. There is an option named "Add Buildpack"; click on it and add Node.js as one of the buildpacks.
Go back to your project and push it to Heroku. This time Heroku will identify your app as both Python- and Node.js-based and will download all the required dependencies.
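If you prefer the CLI over the dashboard, the same change can be made with the heroku command; a short sketch, where my-app is a placeholder for your app name:

# Show the buildpack(s) Heroku detected automatically
heroku buildpacks --app my-app
# Add the Node.js buildpack alongside the Python one
heroku buildpacks:add heroku/nodejs --app my-app
# Redeploy so both buildpacks run
git push heroku master

After the push, the build log should show both the Python and the Node.js install steps.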

environment variables applied during elastic beanstalk deploy

My basic question: How would I set an environment variable that will be in effect during the Elastic Beanstalk deploy process?
I am not talking about setting environment variables during deployment that will be accessible by my application after it is deployed; I want to set environment variables that modify a specific behavior of Elastic Beanstalk's build scripts.
To be clear - I generally think this is a bad idea, but it might be OK in this case so I am trying this out as an experiment. Here is some background about why I am looking into this, and why I think it might be OK:
I am in the process of transferring a server from AWS in the US to AWS in China, and am finding that server deploys fail between 50% ~ 100% of the time, depending on the day. This is a major pain during development, but I am primarily concerned about how I am going to make this work in production.
This is an Amazon Linux server running Python 2.7, and logs indicate that the failures are mainly Read Timeout Errors, with a few Connection Reset by Peers thrown in once in a while, all generated by pip install while attempting to download packages from pypi. To verify this I have ssh'd into my instances to manually install a few packages, and on a small sample size see similar failure rates. Note that this is pretty common when trying to access content on the other side of China's GFW.
So, I wrote a script that uses pip to download the packages to my local machine and then aws to sync them to an S3 bucket located in the same region as my server, as sketched below. This would eliminate the need to cross the GFW while deploying.
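A rough sketch of that local helper script, assuming a requirements.txt in the project root and a hypothetical bucket name:

#!/usr/bin/env bash
# Download sdists/wheels for every requirement into a local folder,
# then mirror that folder to an S3 bucket in the same region as the EB instances.
package_dir=./packages
mkdir -p "$package_dir"
pip download -r requirements.txt -d "$package_dir"
aws s3 sync "$package_dir" s3://my-eb-packages/packages

(pip download needs pip 8 or newer; older versions used pip install --download instead.)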
My original plan was to add an .ebextension that uses aws cp to copy the packages from S3 into the pip cache, but (unless I missed something) this somewhat surprisingly doesn't appear to be straightforward.
So, as plan B I am redirecting the packages into a local directory on the instance. This is working well, but I can't get pip install to pull packages from the local directory rather than downloading the packages from pypi.
Following the pip documentation, I expected that pointing the PIP_FIND_LINKS environment variable to my package directory would have pip "naturally" pull packages from my directory, rather than pypi. Which would make the change transparent to the EB build scripts, and why I thought that this might be a reasonable solution.
So far I have tried:
1) a command which exports PIP_FIND_LINKS=/path/to/package, with no luck. I assumed that this was due to the deploy step being called from a different session, so I then tried:
2) a command which (in addition to the previous export) appends export PIP_FIND_LINKS=/path/to/package to ~/.profile, in an attempt to have this apply to any new sessions.
I have tried issuing the commands as both ec2-user and root, and neither works.
Rather than keep poking a stick at this, I was hoping that someone with a bit more experience with the nuances of EB, pip, etc might be able to provide some guidance.
After some thought I decided that a pip config file should be a more reliable solution than environment variables.
This turned out to be easy to implement with .ebextensions. I first create the download script, then create the config file directly in the virtualenv folder:
files:
  /home/ec2-user/download_packages.sh:
    mode: "000500"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      package_dir=/path/to/packages
      mkdir -p $package_dir
      aws s3 sync s3://bucket/packages $package_dir
  /opt/python/run/venv/pip.conf:
    mode: "000755"
    owner: root
    group: root
    content: |
      [install]
      find-links = file:///path/to/packages
      no-index=false
Finally, a command is used to call the script that we just created:
commands:
  03_download_packages:
    command: bash /home/ec2-user/download_packages.sh
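With the pip.conf in place you can also sanity-check the setup by hand on an instance; the equivalent explicit flags would be (using the same placeholder path as above):

# --no-index makes pip fail fast if something is missing from the synced folder,
# which makes gaps in the S3 mirror easy to spot.
pip install --no-index --find-links=file:///path/to/packages -r requirements.txt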
One potential issue is that pip bypasses the local package directory and downloads packages that are stored in our private git repo, so there is still potential for timeout errors, but these represent just a small fraction of the packages that need to be installed so it should be workable.
Still unsure if this will be a long-term solution, but it is very simple and (after just one day of testing...) failure rates have fallen from 50% ~ 100% to 0%.

Azure deployment not installing Python packages listed in requirements.txt

This is my first experience deploying a Flask web app to Azure.
I followed this tutorial.
The default demo app they have works fine for me.
Afterwards, I pushed my Flask app via git. The log shows the deployment was successful. However, when I browse the hosted app via the link provided in "Application Properties", I get a 500 error as follows:
The page cannot be displayed because an internal server error has occurred.
Most likely causes:
- IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred.
- IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
- IIS was not able to process configuration for the Web site or application.
- The authenticated user does not have permission to use this DLL.
- The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
The only off-base thing I can see by browsing the wwwroot via KUDU is that none of the packages I have installed in my local virtual environment are installed on Azure despite the existence of the "requirements.txt" file in wwwroot.
My understanding is that Azure would pip install any non existent package that it finds in the requirements.txt upon GIT successful push. But it doesn't seem to be happening for me.
Am I doing something wrong, and are the missing packages just a symptom, or could they be the cause of the issue?
Notes:
My Flask app works fine locally (linux) and on a 3rd party VPS
I redeployed several times starting from scratch, to no avail (I use the local Git method)
I cloned the Azure Flask demo app locally, changed just the app folder and pushed back to Azure, yet no success.
Azure is set to Python 2.7 same as my virtual env locally
As suggested in the tutorial linked above, I deleted the "env" folder and redeployed to trick Azure into reinstalling the virtual env. It did, but with its own default packages, not the ones in my requirements.txt.
My requirements.txt has the following:
bcrypt==3.1.0
cffi==1.7.0
click==6.6
Flask==0.11.1
Flask-Bcrypt==0.7.1
Flask-Login==0.3.2
Flask-SQLAlchemy==2.1
Flask-WTF==0.12
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
pycparser==2.14
PyMySQL==0.7.7
python-http-client==1.2.3
six==1.10.0
smtpapi==0.3.1
SQLAlchemy==1.0.14
Werkzeug==0.11.10
WTForms==2.1
Azure Web Apps runs a deploy.cmd script as the deployment task, which controls which commands or tasks are run during the deployment.
You can use the Azure CLI command azure site deploymentscript --python to generate the deployment task script for Python applications.
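A short sketch of how that fits into a local clone of the app (the command follows the classic "azure" CLI quoted above; treat the generated file names as an assumption about what the tool produces):

# Generate .deployment and deploy.cmd for a Python app in the repo root,
# then commit them so Kudu runs this custom script on the next push.
azure site deploymentscript --python
git add .deployment deploy.cmd
git commit -m "Add custom Kudu deployment script"
git push azure master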
In this generated deploy.cmd script you can find the following section:
IF NOT EXIST "%DEPLOYMENT_TARGET%\requirements.txt" goto postPython
IF EXIST "%DEPLOYMENT_TARGET%\.skipPythonDeployment" goto postPython
echo Detected requirements.txt. You can skip Python specific steps with a .skipPythonDeployment file.
So a .skipPythonDeployment file skips all of the following steps in the deployment task, including creating the virtual environment.
You can try to remove .skipPythonDeployment from your application, and try again.
Additionally, please refer to https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script for more info.

How to install pelican on a server to create mywebsite?

I want to install Pelican to create my online website. However, to install it on the OVH server where my website is hosted, I would need to run command-line tools, but I can only host files and folders in /www/ on my server. How can I install Pelican in this case?
There's no need to install Pelican on the server; it's meant to be installed on your local computer. Once installed locally, you generate your site and then transmit the output to your server. Many folks (myself included) use rsync to upload the generated HTML/CSS/JS to the server from which the site is actually served (OVH, in your case).
Once Pelican has been installed locally, you can run pelican-quickstart to create a skeleton project, answering the relevant questions for SSH/rsync. If you install Fabric as well, you can then edit the provided fabfile.py and run fab publish to automatically generate your site and transmit it to /var/www/yoursite/.
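If you prefer to run the steps by hand instead of through Fabric, a minimal local workflow could look roughly like this (the host name and remote path are placeholders; on OVH the web root is the /www/ folder mentioned in the question):

# Install Pelican locally and scaffold a new site
pip install pelican markdown
pelican-quickstart            # answer the SSH/rsync questions for your host
# Generate the static site from the content/ folder into output/
pelican content -o output -s pelicanconf.py
# Upload only the generated files to the server's web root
rsync -avz --delete output/ user@your-ovh-host:/www/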

django gunicorn and nginx proxy giving 504 error

I went through all the related questions and could not find the answer. I went through the docs as well and tried everything I could; it's my first time, hence I am having a hard time.
I have a simple django polls app with proper settings and static files, working locally.
As mentioned in the title, I am trying to use Django on a newly bought VPS with nginx and Gunicorn; I am using virtualenv as well.
Here is my folder structure on the server:
logs pid projhome scripts
Inside projhome I have the following directories:
bin djangopolls include lib local
As already mentioned, parallel to the projhome folder I have a scripts folder with the following content:
source /home/django/projhq/bin/activate
kill `cat /home/username/pid/gunicorn.pid`
gunicorn_django -c /home/username/projhome/djangopolls/gunicorn_cfg.py
Now, to start the server I need to go to the scripts folder and run the start script. I do that without any error, but when I check the IP I get a 504 error.
Where am I going wrong?
You might first want to cd into the directory where the settings.py file is placed and then run Gunicorn, so you can update your start script to first cd into the Django project directory, as in the sketch below.
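A sketch of the updated start script, keeping the paths from the question (gunicorn_django is what the original script calls; on current Django/Gunicorn versions you would run gunicorn yourproject.wsgi instead):

#!/usr/bin/env bash
source /home/django/projhq/bin/activate
# Stop any previously running worker
kill `cat /home/username/pid/gunicorn.pid`
# Run from the directory that contains settings.py so the project can be imported
cd /home/username/projhome/djangopolls
gunicorn_django -c /home/username/projhome/djangopolls/gunicorn_cfg.py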
