I'm new to Airflow and I'm trying to install it locally, following the instructions at the link below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
I'm running this code (as mentioned on the link):
# Airflow needs a home. `~/airflow` is the default, but you can put it
# somewhere else if you prefer (optional)
export AIRFLOW_HOME=~/airflow
# Install Airflow using the constraints file
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
# For example: 3.6
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
# For example: https://raw.githubusercontent.com/apache/airflow/constraints-2.2.5/constraints-3.6.txt
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# The Standalone command will initialise the database, make a user,
# and start all components for you.
airflow standalone
# Visit localhost:8080 in the browser and use the admin account details
# shown on the terminal to login.
# Enable the example_bash_operator dag in the home page
and getting this error:
File "C:\Users\F43555~1\AppData\Local\Temp/ipykernel_12908/3415008398.py", line 3
export AIRFLOW_HOME=~/airflow
^
SyntaxError: invalid syntax
Does anyone know how to deal with it?
I'm running on Windows 10, VS Code (Jupyter notebook).
Thanks!
Airflow is only supported on Linux, and it looks like you're trying to run this on a Windows machine.
If you want to install Airflow on Windows you'll need to use something like Windows Subsystem for Linux (WSL) or Docker. There are some examples around that show you how to do this on WSL (and plenty using Docker); here is one of them using WSL.
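Note that the SyntaxError itself comes from pasting shell commands into a Python (Jupyter) cell: export is a bash builtin, not Python. As a minimal sketch (assuming you eventually run inside a Linux environment such as WSL and want to set the variable from Python rather than from a bash shell), the rough Python equivalent of that export line would be:
import os
# Rough equivalent of `export AIRFLOW_HOME=~/airflow`; this only affects the
# current Python process and anything it launches, not your interactive shell.
os.environ["AIRFLOW_HOME"] = os.path.expanduser("~/airflow")
print(os.environ["AIRFLOW_HOME"])
The pip install and airflow standalone lines still need to be run in a shell (or prefixed with ! in a notebook cell), not as Python statements.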
I have the most simple script called update.sh
#!/bin/sh
cd /home/pi/circulation_of_circuits
git pull
When I call this from the terminal with ./update.sh I get a Already up-to-date or it updates the files like expected.
I also have a Python script; inside that script is:
subprocess.call(['./update.sh'])
When that calls the same script I get:
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
(I use SSH).
----------------- update --------------------
Someone else had a look for me:
OK so some progress. When I boot your image I can't run git pull in
your repo directory and the bash script also fails. It seems to be
because the bitbucket repository is private and needs authentication
for pull (the one I was using was public so that's why I had no
issues). Presumably git remembers this after you type it in the first
time, bash somehow tricks git into thinking it's you typing the
command subsequently but running it from python isn't the same.
I'm not a git expert but there must be some way of setting this up so
python can provide the authentication.
Sounds like you need to give your ssh command a private key it can access, perhaps:
ssh -i /backup/home/user/.ssh/id_dsa user@unixserver1.nixcraft.com
-i tells it where to look for the key.
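If the problem is that git cannot find the right key when launched from Python, one hedged sketch (the key path here is just an example, and GIT_SSH_COMMAND requires Git 2.3 or later) is to point git at the key explicitly from the calling script:
import os
import subprocess

env = dict(os.environ)
# Hypothetical key path; replace with the key you actually use for the repo.
env["GIT_SSH_COMMAND"] = "ssh -i /home/pi/.ssh/id_rsa -o IdentitiesOnly=yes"
subprocess.call(['./update.sh'], env=env)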
This problem is caused by the git repo authentication failing. You say you are using SSH, and git is complaining about publickey auth failing. Normally you can use git commands on a private repo without inputting a password. All this would imply that git is using ssh, but in the latter case it cannot find the correct private key.
Since the problem only manifests itself when run through another script, it is very likely caused by something messing with the environment variables. Subprocess.call should pass the environment as is, so there are a couple of usual suspects:
sudo.
if you are using sudo, it will pass a mostly empty environment to the process
the python script itself
if the python script changes its env, those changes will get propagated to the subprocess too.
sh -l or su -
these commands set up a login shell, which means their environment gets reset to defaults.
Any of these reasons could hide the environment variables ssh-agent (or some other key management tool) might need to work.
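A quick way to check this from the calling script (assuming ssh-agent is the key manager in use) is to print the agent socket before launching the shell script; if it is missing here, the subprocess cannot reach the agent either:
import os
import subprocess

# If SSH_AUTH_SOCK is None, ssh inside update.sh has no agent to ask for keys.
print("SSH_AUTH_SOCK =", os.environ.get("SSH_AUTH_SOCK"))
subprocess.call(['./update.sh'])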
Steps to diagnose and fix:
Isolate the problem.
Create a minimal Python script that does nothing other than run subprocess.call(['./update.sh']) (see the minimal sketch after this list). Run both update.sh and the new script.
Diagnose the problem and fix accordingly:
a) If update.sh works, and the new script doesn't, you are probably experiencing some weird corner case of system misconfiguration. Try upgrading your system and python; if the problem persists, it probably requires additional debugging on the affected system itself.
b) If both update.sh and the new script work, then the problem lies within the outer python script calling the shell script. Look for occurrences of sudo, su -, sh -l, env and os.environ, one of those is the most likely culprit.
c) If neither the update.sh nor the new script work, your problem is likely to be with ssh client configuration; a typical cause would be that you are using a non-default identity, did not configure it in ~/.ssh/config but used ssh-add instead, and after that, ssh-agent's cache expired. In this case, run ssh-add identityfile for the identity you used to authenticate to that git repo, and try again.
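For step 1, the minimal test script could be as simple as this (nothing here is specific to your setup except the script name):
#!/usr/bin/env python
# Minimal isolation test: call the shell script exactly as the outer script does.
import subprocess
subprocess.call(['./update.sh'])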
I believe this answer will help you: https://serverfault.com/questions/497217/automate-git-pull-stuck-with-keychain?answertab=votes#tab-top
I didn't use ssh-agent and it worked: Change your script to the one that follows and try.
#!/bin/bash
cd /home/pi/circulation_of_circuits
ssh-add /home/yourHomefolderName/.ssh/id_rsa
ssh-add -l
git pull
This assumes that you have configured correctly your ssh key.
It seems like your version control system needs authentication for the pull, so you can handle the prompt from Python with pexpect:
import pexpect
# Spawn the script and answer the authentication prompt when it appears.
child = pexpect.spawn('./update.sh')
child.expect('Password:')
child.sendline('SuperSecretPassword')
# Wait for the script to finish before returning.
child.expect(pexpect.EOF)
Try using the sh package instead of a subprocess call: https://pypi.python.org/pypi/sh
I tried this snippet and it worked for me.
#!/usr/local/bin/python
import sh
sh.cd("/Users/siyer/workspace/scripts")
print sh.git("pull")
Output:
Already up-to-date.
import subprocess
subprocess.call("sh update.sh", shell=True)
With Git 1.7.9 or later, you can just use one of the following credential helpers:
With a timeout
git config --global credential.helper cache
... which tells Git to keep your password cached in memory for (by default) 15 minutes. You can set a longer timeout with:
git config --global credential.helper "cache --timeout=3600"
(That example was suggested in the GitHub help page for Linux.) You can also store your credentials permanently if so desired.
Saving indefinitely
You can use the git-credential-store via
git config credential.helper store
GitHub's help also suggests that if you're on Mac OS X and used Homebrew to install Git, you can use the native Mac OS X keystore with:
git config --global credential.helper osxkeychain
For Windows, there is a helper called Git Credential Manager for Windows or wincred in msysgit.
git config --global credential.helper wincred # obsolete
With Git for Windows 2.7.3+ (March 2016):
git config --global credential.helper manager
For Linux, you can use gnome-keyring (or another keyring implementation such as KWallet).
Finally, after executing one of the suggested commands once manually, you can execute your script without any changes to it.
I can reproduce your fault. It has nothing to do with permissions; it depends on how ssh is installed on your system. To verify it's the same cause, I need the diff output.
Save the following to a file log_shell_env.sh:
#!/bin/bash
log="shell_env"$1
echo "create shell_env"$1
echo "shell_env" > $log
echo "whoami="$(whoami) >> $log
echo "which git="$(which git) >> $log
echo "git status="$(git status 2>&1) >> $log
echo "git pull="$(git pull 2>&1) >> $log
echo "ssh -vT git#github.com="$(ssh -T git#github.com 2>&1) >> $log
echo "ssh -V="$(ssh -V 2>&1) >> $log
echo "ls -al ~/.ssh="$(ls -a ~/.ssh) >> $log
echo "which ssh-askpass="$(which ssh-askpass) >> $log
echo "ps -e | grep [s]sh-agent="$(ps -e | grep [s]sh-agent ) >> $log
echo "ssh-add -l="$(ssh-add -l) >> $log
echo "set=" >> $log
set >> $log
Set execute permission and run it twice:
1. From the console without a parameter
2. From your Python script with the parameter '.python'
Please, really run it from your actual Python script!
For instance:
import subprocess

try:
    output = subprocess.check_output(['./log_shell_env.sh', '.python'], stderr=subprocess.STDOUT)
    print(output.decode('utf-8'))
except subprocess.CalledProcessError as cpe:
    print('[ERROR] check_output: %s' % cpe)
Do a diff shell_env shell_env.python > shell_env.diff
The resulting shell_env.diff should show no more than the following diffs:
15,16c15,16
< BASH_ARGC=()
< BASH_ARGV=()
---
> BASH_ARGC=([0]="1")
> BASH_ARGV=([0]=".python")
48c48
< PPID=2209
---
> PPID=2220
72c72
< log=shell_env
---
> log=shell_env.python
Come back and comment if you get more diffs; update your question with the diff output.
Use the following Python code. This imports the os module and makes a system call with sudo permissions.
#!/bin/python
import os
os.system("sudo ./update.sh")
I have a simple Python script which I want to start as a daemon service in the background of a Docker container:
/sbin/start-stop-daemon --start --user root --make-pidfile --pidfile /var/lock/subsys/my-application.pid --exec 'python /opt/app/uc/monitor/bin/my-application.py'
When I execute this command in a shell I get:
/sbin/start-stop-daemon: unable to stat //python /opt/app/uc/monitor/bin/my-application.py (No such file or directory)
However, when I execute just the command below in a shell, it works:
python /opt/app/uc/monitor/bin/my-application.py
I'm sure the python is installed and all the links have been setup.
Thanks for the help
That error message implies that start-stop-daemon is looking for a file to open (the stat operation is a check before it opens the file) and treating your 'python ... ' argument as if it was a file.
See this example which confirms this. You may need to read the man page for start-stop-daemon, for your Ubuntu version, to check what a valid command would be for your setup.
The simplest solution is probably to create a shell script (say /opt/app/uc/monitor/bin/run-my-application.sh) and put this into it:
#!/bin/bash
python /opt/app/uc/monitor/bin/my-application.py
Be sure to do chmod +x on this file. If python is not found, use which python to find the path to python and use that in the script.
Now try:
/sbin/start-stop-daemon --start --user root --make-pidfile --pidfile /var/lock/subsys/my-application.pid --exec '/opt/app/uc/monitor/bin/run-my-application.sh'
I have just installed python 2.7 using macports as:
sudo port install py27-numpy py27-scipy py27-matplotlib py27-ipython +notebook py27-pandas py27-sympy py27-nose
During the process it found some issues, mainly broken files related to py25-hashlib, which I managed to fix. Now it seems everything is OK. I tested a few programs and they run as expected. Currently, I have two versions of Python: 2.5 (the default, from when I worked at my former institution) and 2.7 (just installed):
which python
/usr/stsci/pyssg/Python-2.5.1/bin/python
which python2.7
/opt/local/bin/python2.7
The next move would be to set the new Python version 2.7 as the default:
sudo port select --set python python27
sudo port select --set ipython ipython27
My question is: is there a way to go back to 2.5 in case something goes wrong?
I know that, a priori, nothing should go wrong. But I have a few data reduction and analysis routines that work perfectly with the 2.5 version, and I want to make sure I don't mess up before setting the default.
If you want to revert, you can modify your .bash_profile or other login shell initialization so that $PATH does not include "/Library/Frameworks/Python.framework/Versions/2.5/bin" and/or does not have /usr/local/bin appear before /usr/bin.
If you want to permanently remove the python.org-installed version, paste the following lines, up to and including the chmod, into a POSIX-compatible shell:
tmpfile=/tmp/generate_file_list
cat <<"NOEXPAND" > "${tmpfile}"
#!/bin/sh
version="${1:-"2.5"}"
file -h /usr/local/bin/* | grep \
"symbolic link to ../../../Library/Frameworks/Python.framework/"\
"Versions/${version}" | cut -d : -f 1
echo "/Library/Frameworks/Python.framework/Versions/${version}"
echo "/Applications/Python ${version}"
set -- Applications Documentation Framework ProfileChanges \
SystemFixes UnixTools
for package do
echo "/Library/Receipts/Python${package}-${version}.pkg"
done
NOEXPAND
chmod ug+x ${tmpfile}
...excerpted from a troubleshooting question on the Python website
Currently, I'm running a simple python script to connect to a database:
import pyodbc
cnxn = pyodbc.connect('DRIVER={Teradata};DBCNAME=(MYDB);UID=(MYUSER); PWD=(MYPASS);QUIETMODE=YES')
With the server and credentials substituted in obviously. However, when running this script, I get the following error:
pyodbc.Error: ('200', '[200] [unixODBC][eaaa[DCTrdt rvr o nuhifraint o n (0) (SQLDriverConnectW)')
The only help I've been able to find is here. I installed the Teradata ODBC drivers, but I just don't understand why I can't connect. Does anyone have any ideas on this?
You have to use something like the following to connect:
TDCONN = pyodbc.connect('DSN=yourDSNname;',ansi=True, autocommit=True)
You can replace DSN=yourDSNname; with what you have already.
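As a usage sketch (the DSN name and the query are placeholders; this assumes a working DSN entry in your odbc.ini):
import pyodbc

# Connect through an ODBC DSN; ansi=True sidesteps some Unicode issues with
# the Teradata driver, and autocommit=True skips explicit transaction handling.
conn = pyodbc.connect('DSN=yourDSNname;', ansi=True, autocommit=True)
cursor = conn.cursor()
cursor.execute('SELECT 1')  # trivial query just to verify the connection
print(cursor.fetchone())
conn.close()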
I hit this "non english" error message problem. I think it was due to the wrong versions of libodbc.so and libodbcinst.so being used. Changing the links in /usr/lib/... to point to the teradata installed versions worked for me. Commands using the default install directories in Ubuntu 12.04, 64bit were:
cd /usr/lib/x86_64-linux-gnu
ls -lha | grep odbc   (You should see the files below, which are to be replaced.)
sudo mv libodbc.so.1.0.0 Xlibodbc.so.1.0.0
sudo ln -s /opt/teradata/client/14.10/odbc_64/lib/libodbc.so libodbc.so.1.0.0
sudo mv libodbcinst.so.1.0.0 Xlibodbcinst.so.1.0.0
sudo ln -s /opt/teradata/client/14.10/odbc_64/lib/libodbcinst.so libodbcinst.so.1.0.0
I had also previously installed (through apt-get) odbcinst, so I redirected both files. But possibly only the first is needed.