How do you run InstaBot.py on OpenShift?

To run InstaBot locally, you just clone the repo, install the packages in requirements.txt, put your login credentials in example.py, and run python example.py. I do not know how this translates to OpenShift.
Let's say you push your code to your own GitHub repo with the login credentials kept in environment variables (in a git-ignored file). You can set environment variables on the OpenShift dashboard, but where is the part where you specify python example.py?

For OpenShift, if example.py is a self-contained Python web application, then you would need to rename it to app.py, or else add a .s2i/environment file to your repo and in it add:
APP_FILE=example.py
The script should then ensure it is listening on all interfaces, i.e., 0.0.0.0 and not just localhost. It also needs to use port 8080.
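For illustration, a minimal example.py meeting both constraints might look like this (a sketch assuming a Flask app with Flask listed in requirements.txt; InstaBot itself is not a web application, so it would need to be adapted):

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from OpenShift"

if __name__ == "__main__":
    # The S2I Python builder expects the app to bind to all interfaces on port 8080
    app.run(host="0.0.0.0", port=8080)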
With that done, you can then use the Python S2I builder process in OpenShift to deploy it. The packages listed in requirements.txt will be installed for you automatically.
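From the command line, one way to kick off that S2I build is shown below (a sketch; the builder image tag and repository URL are placeholders):

oc new-app python:3.6~https://github.com/<your-user>/<your-repo>.git

You can then supply your credential environment variables to the resulting deployment with oc set env.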
If you are not familiar with OpenShift, you might consider reading:
https://www.openshift.com/deploying-to-openshift/
It is a free download.
For details on the Python S2I builder and what environment variables you can set to customise it, see:
https://github.com/sclorg/s2i-python-container/tree/master/3.6

Related

Installing Python in a Docker container

I'm new to coding and have been fiddling around with Docker containers and services.
I have installed a temporary VS Code server on my Raspberry Pi and deployed it on my local LAN so I can access it from various machines.
Now I've been trying to create a Flask app and run it from the container, and I'm trying to figure out how to publish and run the Flask web server, since I can't figure out which IP I should host it on (the default I always used was host=127.0.0.1, port=8080, but that would bring me to the local machine I'm visiting from).
So, while troubleshooting to understand what to do with exposed ports etc., I stopped the container and changed the docker-compose file. (I have a path set for the config's permanent storage, so my VS Code settings are actually saved and persistent between deployments.)
But I'm having the problem that every time I stop and redeploy the container I lose my Python 3 installation, and I have to rerun apt update, apt upgrade, apt install python3-pip and reinstall every Python package I need for the project.
Where am I going wrong?
Silly question, but where does Python get installed, and why isn't it persistent, given that I have my config path set?
I read that Python gets installed in /usr/local/lib; should I also map those directories to the persistent storage folder? How should I do that?
Thanks
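One way to make those packages survive a redeploy is to bake them into the image itself rather than installing them in the running container. A minimal hypothetical Dockerfile sketch (the base image is a placeholder for whatever image the compose file currently runs):

FROM codercom/code-server:latest
USER root
# Bake the interpreter and pip into the image so they survive container recreation
RUN apt-get update && apt-get install -y python3 python3-pip && rm -rf /var/lib/apt/lists/*
# Project packages, assuming a requirements.txt sits next to the Dockerfile
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt

The docker-compose service would then point at this Dockerfile with build: . instead of image:.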

Heroku not working for Node.js app with Python scripts

I made a website in Node.js, and I have used child_process to run Python scripts and use their results in my website. But when I host my application on Heroku, Heroku is unable to run the Python scripts.
I have noticed that Heroku was able to run Python scripts which only used built-in Python packages like sys and json, but it was failing for scripts which were using external packages like requests and beautifulsoup.
Attempt-1
I made a requirements.txt file inside the project folder and pushed my code to Heroku again. But it was still not working. I noticed that Heroku uses the requirements.txt file only for Python-based web applications, not for Node.js applications.
Attempt-2
I made a Python virtual env inside the project folder and installed the required Python packages into it. I then removed the venv from .gitignore and pushed the whole venv folder to Heroku. It still didn't work.
Please let me know if anyone has come across a way to handle this.
You can use a Procfile to specify a command that installs the Python dependencies and then starts the server.
Procfile:
web: pip install -r requirements.txt && npm start
Place the Procfile in your project's root directory and deploy to Heroku.
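Note that npm start assumes your package.json defines a start script; a minimal hypothetical example:

{
  "name": "my-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  }
}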
Solution:
Have a requirements.txt file in your project folder.
After you create a Heroku app using "heroku create", Heroku will identify your app as Python-based and will not install any of the Node dependencies.
Now, go to your Heroku dashboard and open your app's settings. There is an option named "Add Buildpack" there. Click on it and add Node.js as one of the buildpacks.
Go back to your project and push it to Heroku. This time Heroku will identify your app as both Python- and Node.js-based and will install all the required dependencies.
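The same buildpack setup can be scripted with the Heroku CLI (a sketch, run from the project directory):

heroku buildpacks:clear
heroku buildpacks:add heroku/python
heroku buildpacks:add heroku/nodejs

The last buildpack added determines the app's default process type, which is why heroku/nodejs goes last here.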

Use own packages (Azure Artifacts) in an Azure Function?

I want to use a package published to Azure Artifacts in an Azure Function.
Locally it was simple: just update pip.ini, and installation from requirements.txt works great. I can launch my Azure Function locally and everything works.
But how can I do it when I deploy? Maybe I need to put a pip.ini somewhere in my main folder?
Thanks
I finally found the solution:
Go to your Azure Function and open the console.
There, run the following commands:
mkdir pipconfig
cd pipconfig
Now write your pip.ini with:
echo "[global]" > pip.ini
echo "extra-index-url=https://XXXXX" >> pip.ini
with the URL pointing to your Artifacts feed.
Now that you have created your pip.ini in your Azure Function, go to your environment variables and create:
PIP_CONFIG_FILE with the value /home/pipconfig/pip.ini
Then restart your function: you can publish as always and import your private artifact.
Hope it will help other people.
From your Function in the Azure Portal, navigate to its Configuration blade. Then, under the 'Application settings' tab, click 'New application setting'. Provide the below as the key:
PIP_EXTRA_INDEX_URL
with the value set to the extra index URL you want pip to use.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-python#custom-dependencies
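If you prefer scripting over the Portal, the equivalent Azure CLI call would be something like the following (app and resource group names are placeholders):

az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings PIP_EXTRA_INDEX_URL=<your-feed-url>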
Any pip flag can be set as an environment variable. For example,
--trusted-host
can be set as
PIP_TRUSTED_HOST
Just prefix PIP_, then add the flag name in capitals with - changed to _.
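A few more examples of that mapping (standard pip behaviour):

--index-url -> PIP_INDEX_URL
--extra-index-url -> PIP_EXTRA_INDEX_URL
--no-cache-dir -> PIP_NO_CACHE_DIR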
Since you have generated a requirements.txt file that includes all of the package info for your function project, you just need to deploy the project (with requirements.txt) to Azure. The packages will be installed according to requirements.txt automatically. For more information about deploying a Python function to Azure, you can refer to this tutorial.
Update:
As you mentioned in your comments, your package is not a public package. You can try the command below:
func azure functionapp publish <APP_NAME> --build local
This command builds your project locally and then deploys it to Azure. (But I'm not sure whether it will work in this case, because it also reads from the requirements.txt file.)
If --build local doesn't work, you need to use Docker; here is a tutorial with further information about the required steps.

Azure deployment not installing Python packages listed in requirements.txt

This is my first experience deploying a Flask web app to Azure.
I followed this tutorial.
The default demo app they have works fine for me.
Afterwards, I pushed my Flask app via Git. The log shows the deployment was successful. However, when I browse the hosted app via the link provided in "Application Properties", I get a 500 error as follows:
The page cannot be displayed because an internal server error has occurred.
Most likely causes:
- IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred.
- IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
- IIS was not able to process configuration for the Web site or application.
- The authenticated user does not have permission to use this DLL.
- The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
The only off-base thing I can see by browsing wwwroot via Kudu is that none of the packages I have installed in my local virtual environment are installed on Azure, despite the presence of the requirements.txt file in wwwroot.
My understanding is that Azure would pip install any missing package it finds in requirements.txt upon a successful Git push. But that doesn't seem to be happening for me.
Am I doing something wrong, and the missing packages are just a symptom, or could they be the cause of the issue?
Notes:
My Flask app works fine locally (Linux) and on a third-party VPS
I redeployed several times, starting from scratch, to no avail (I use the local Git method)
I cloned the Azure Flask demo app locally, changed just the app folder, and pushed back to Azure, yet no success
Azure is set to Python 2.7, the same as my local virtual env
As suggested in the tutorial linked above, I deleted the "env" folder and redeployed to trick Azure into reinstalling the virtual env. It did, but with its own default packages, not the ones in my requirements.txt.
My requirements.txt has the following:
bcrypt==3.1.0
cffi==1.7.0
click==6.6
Flask==0.11.1
Flask-Bcrypt==0.7.1
Flask-Login==0.3.2
Flask-SQLAlchemy==2.1
Flask-WTF==0.12
itsdangerous==0.24
Jinja2==2.8
MarkupSafe==0.23
pycparser==2.14
PyMySQL==0.7.7
python-http-client==1.2.3
six==1.10.0
smtpapi==0.3.1
SQLAlchemy==1.0.14
Werkzeug==0.11.10
WTForms==2.1
Azure Web Apps runs a deploy.cmd script as the deployment task, which controls the commands and tasks that run during deployment.
You can use the Azure CLI command azure site deploymentscript --python to generate the deployment script for Python applications.
You can find the following lines in this deploy.cmd script:
IF NOT EXIST "%DEPLOYMENT_TARGET%\requirements.txt" goto postPython
IF EXIST "%DEPLOYMENT_TARGET%\.skipPythonDeployment" goto postPython
echo Detected requirements.txt. You can skip Python specific steps with a .skipPythonDeployment file.
So the presence of a .skipPythonDeployment file will skip all the subsequent steps in the deployment task, including creating the virtual environment and installing packages.
You can try removing .skipPythonDeployment from your application and deploying again.
Additionally, please refer to https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script for more info.
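If you end up customising deploy.cmd, note that Kudu also expects a .deployment file at the repository root pointing at your script; a minimal sketch:

[config]
command = deploy.cmd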

Openshift: how to install python modules from private repository?

I would like to be able to install Python packages on OpenShift, but those packages live in my private repositories on Bitbucket.
How can I create an SSH key for OpenShift, and how do I make OpenShift use it when installing packages (after adding the corresponding public key to Bitbucket as a Deploy Key)?
What I've tried:
I used ssh-keygen to create a key in ~/.openshift_ssh/. It was created, but I'm not sure it is being used.
I also tried adding the public key at <jenkins_dir>/app-root/data/.ssh/jenkins_id_rsa.pub, but the result is always the same. In the Jenkins console output of the build job:
Doing git clone from ssh://git@bitbucket.org/jpimentel/zed.git to /tmp/easy_install-FpEKam/zed.git
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Is there anything that can be done?
So, at this time OpenShift does not offer a simple mechanism for this. I would urge developers to look at https://gondor.io/support/dependencies/ for an effective solution to the problem.
That said, I was finally able to find an acceptable (at least for me) workaround that works on both scalable and non-scalable apps, with the following procedure:
create a deploy/ directory in the repository
put a copy of your private deploy key in said directory
create a bash script deploy/wrapper.sh that will run ssh with the provided key:
#!/bin/sh
ssh -o StrictHostKeyChecking=no -i $OPENSHIFT_REPO_DIR/deploy/id_deploy "$@"
note the option passed to disable host key checking; cloning will fail without it.
install dependencies in the build hook (.openshift/action_hooks/build). In my case I added something like
echo "Cloning private repo..."
source $VIRTUAL_ENV/bin/activate
GIT_SSH=$OPENSHIFT_REPO_DIR/deploy/wrapper.sh pip install git+ssh://git@bitbucket.org/team/reponame.git#egg=reponame
commit everything and push it to OpenShift.
profit!
If you want to deploy your own custom Python modules, the recommended way is to create a libs directory in the application source code root and push the modules to your application's Git repository. OpenShift will automatically pick up your modules.
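A hypothetical layout for that approach (directory and module names are placeholders):

myapp/
  libs/
    mymodule/
      __init__.py
  wsgi.py
  setup.py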
