I'm new to Airflow and I'm trying to install it locally, following the instructions at the link below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
I'm running this code (as mentioned on the link):
# Airflow needs a home. `~/airflow` is the default, but you can put it
# somewhere else if you prefer (optional)
export AIRFLOW_HOME=~/airflow
# Install Airflow using the constraints file
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
# For example: 3.6
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
# For example: https://raw.githubusercontent.com/apache/airflow/constraints-2.2.5/constraints-3.6.txt
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# The Standalone command will initialise the database, make a user,
# and start all components for you.
airflow standalone
# Visit localhost:8080 in the browser and use the admin account details
# shown on the terminal to login.
# Enable the example_bash_operator dag in the home page
and getting this error:
File "C:\Users\F43555~1\AppData\Local\Temp/ipykernel_12908/3415008398.py", line 3
export AIRFLOW_HOME=~/airflow
^
SyntaxError: invalid syntax
Does anyone know how to deal with it?
I'm running on Windows 10, in VS Code (Jupyter notebook).
Thanks!
Airflow is only supported on Linux, and it looks like you're trying to run this on a Windows machine. The SyntaxError itself comes from pasting shell commands (export ...) into a Python/Jupyter cell; they are meant to be run in a shell.
If you want to install Airflow on Windows you'll need to use something like Windows Subsystem for Linux (WSL) or Docker. There are some examples around which show you how to do this on WSL (and plenty using Docker) - here is one of them with WSL.
Related
I had an issue with setting up crontab for my Django application for a week, and I have almost figured it out. (The issue is linked with "Unable to make a function call using Django-cron".)
My crontab -e syntax is
* * * * * /Users/ashwin/dashboard/proj_application/exec.sh >> /Users/ashwin/dashboard/proj_application/data.log 2>&1
And in my exec.sh, I have
#!/bin/bash
cd "$(dirname "$0")";
CWD="$(pwd)"
echo $CWD
python -c 'import proj_application.cron as cron; cron.test()'
And in cron.py,
from django.core.mail import send_mail
from smtplib import SMTP
from email.mime.text import MIMEText
import datetime

def test():
    message = "<p>This is test mail scheduled to send every minute</p>"
    my_email = MIMEText(message, "html")
    my_email["From"] = "xxx@domain.com"
    my_email["To"] = "yyy@domain.com"
    my_email["Subject"] = "Title"
    sender = "person1@domain.com"
    receivers = ["person2@domain.com"]
    with SMTP("localhost") as smtp:
        smtp.login(sender, "yyy#1234")
        smtp.sendmail(sender, receivers, my_email.as_string())
Actual Problem:
The crontab is now able to call the exec.sh file and I am able to print $CWD via echo, but when the call reaches cron.py, it is unable to recognize django.core.mail and throws the following error:
from django.core.mail import send_mail
ImportError: No module named django.core.mail
I think I need to set up a virtual environment or a Python path variable somewhere, but since I am new to crontab, I am not sure how to do that.
Any assistance is appreciated. Thanks in advance.
The way I interpret this is that you are normally running your script in a virtual environment, and it is failing now that you added it to a cron job. If this is incorrect, just skip to the last paragraph.
To run a cron job through a virtual environment, you need to use the virtual environment's python as the python you want. E.g. to run the cron.py file:
* * * * * /path/to/venv/bin/python3 /path/to/cron.py >> /Users/ashwin/dashboard/proj_application/data.log 2>&1
This is the way I would recommend doing it, as it doesn't seem like that shell script is entirely necessary (anything in there can easily be done at the top of the Python script). But if it is necessary, you can do something similar by calling the virtual environment's python instead of the default python, like this:
/path/to/venv/bin/python3 -c 'import proj_application.cron as cron; cron.test()'
However, if this doesn't work, it may not be an issue with the cron job, but rather with the Django setup of the mail client (e.g. making sure it is included in the installed apps, etc.).
Since it is a little unclear where your issue is (mail client setup, cron setup, venv configuration, etc.), if you are sure the problem is with running scripts that you usually run in a virtual environment, then these steps should help; otherwise I would make sure your mail client is configured correctly. Make sure your scripts are correct by running them in the console (not through a cron job), and then you can come back and update the post with more detailed info about where the problem lies.
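If the goal really is to send mail through Django (send_mail) rather than raw smtplib, note that a standalone script run from cron also has to configure Django before django.core.mail can be imported. Here is a minimal sketch, assuming the settings module is named proj_application.settings (the settings module name and addresses are placeholders, not from the original post), run with the venv's python from the project root:
# cron_standalone.py - minimal sketch; assumes DJANGO_SETTINGS_MODULE is
# "proj_application.settings" (adjust to your project) and that this file is
# executed with the virtualenv's python from the project root.
import os
import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj_application.settings")
django.setup()  # must run before importing django.core.mail

from django.core.mail import send_mail

def test():
    send_mail(
        subject="Title",
        message="This is a test mail scheduled to send every minute",
        from_email="person1@domain.com",        # placeholder address
        recipient_list=["person2@domain.com"],  # placeholder address
        fail_silently=False,
    )

if __name__ == "__main__":
    test()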
Yes, you are correct, you may need to use a virtual environment (although it's optional, it is best practice).
To create a virtual environment (this expects Python to be installed already):
python -m venv venv
To activate it (the path may differ based on OS; mine is Windows):
source venv/Scripts/activate
To install dependencies in the virtual environment:
pip install Django
Now you're all set to use the virtual env's Django.
which python should map to the venv path.
Now you have two ways to run the Python script:
#1: use the venv python's absolute path directly
absolute/path/of/venv/bin/python3 -c 'import proj_application.cron as cron; cron.test()'
#2: activate the virtual environment and use the bash script as before.
#!/bin/bash
cd "$(dirname "$0")";
CWD="$(pwd)"
echo $CWD
source venv/Scripts/activate # this path is for Windows; on Linux/macOS use venv/bin/activate
python -c 'import proj_application.cron as cron; cron.test()'
I have installed a Windows agent and I need to be able to run Python scripts. I know I need to install Python, but I have no idea how.
I added the Python files from a standard installation to:
$AGENT_TOOLSDIRECTORY/
Python/
3.8.2/
x64/
{tool files}
x64.complete
I restarted the agent, but what now? How do I add it to Capabilities?
What am I missing?
EDIT:
I need to run this YAML task
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true
- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'
- task: BatchScript@1
  displayName: 'Run script make.bat'
  inputs:
    filename: make.bat
    arguments: html
I have set up a self-hosted agent on a Windows 10 laptop (for which I have admin access), and I'm running Azure DevOps Express 2020.
I found, downloaded and installed an agent according to the instructions at Download and configure the agent. I used vsts-agent-win-x64-2.170.1.zip and set this up to run as a service (I guess anyone running it manually needs to double-check that it's running at show time). I also ran the install command as admin in PowerShell!
To install a Python version I needed to download the appropriate installer from the FTP site at Python.org, e.g. for 3.7.9 I used python-3.7.9-amd64.exe.
I then run this from the command line (CMD run as administrator) without UI with:
python-3.7.9-amd64.exe /quiet InstallAllUsers=0 TargetDir=$AGENT_TOOLSDIRECTORY\Python\3.7.9\x64 Include_launcher=0
(other install options are available in the Python docs)
Once this is complete (it runs in the background, so it takes longer than the initial command suggests), you need to create an empty {platform}.complete file (as described here); in my case this is x64.complete.
This then worked! I did restart the server for this first version, but I've added other python versions since without needing to. My pipeline task was simply:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python $(python.version)'
  inputs:
    versionSpec: '$(python.version)'
(with a variable python.version set up as a list of versions: 3.7.9, 3.8.8)
One key element for me was the file structure: where the documentation says {tool files}, this means the python.exe file and other common dirs such as Lib and Scripts. I initially installed these in a sub-dir, which didn't work. So it should look like this:
$AGENT_TOOLSDIRECTORY/
Python/
3.7.9/
x64/
Doc/
Lib/
Scripts/
python.exe
...etc...
x64.complete
To be honest I'm mostly relieved that this worked without too much trouble. I gave up trying to get Artifacts to work on-prem. In my limited experience all of this is much easier, and better, on the cloud version. Haven't yet persuaded my employer to take that leap however...
For this issue, in order to use the Python version installed on your on-premises machine, you either need to point to the python.exe physical path in a CMD task, or add the python.exe path to the environment variable Path manually in a PowerShell task. For example:
To use local python in powershell task:
$env:Path += ";c:\{local path to}\Python\Python38\; c:\{local path to}\Python\Python38\Scripts\"
python -V
To use python in CMD task:
c:\{local path to}\Python\Python38\python.exe -V
c:\{local path to}\Python\Python38\Scripts\pip.exe install
So, to run a Python script with a private agent, just make sure Python is installed locally, then point to the python.exe path. You can refer to this case for details.
I added these 4 tasks before being able to execute Python in my pipeline with a vs2017-win2016 agent:
Use Python 3.x
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'
Use Pip Authenticate
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
Use Commandline task
steps:
- script: |
    python -m pip install --upgrade pip setuptools wheel
  failOnStderr: true
  displayName: 'install pip for setup of python framework'
Use Commandline task
steps:
- script: 'pip install -r _python-test-harness/requirements.txt'
  failOnStderr: true
  displayName: 'install python framework project''s specific requirements'
Hope that helps
How do I connect to an Oracle database server from Python on a Unix server?
I can't install any packages like cx_Oracle, pyodbc, etc.
Please consider that even pip is not available to install.
It's my UNIX PROD server, so I have a lot of restrictions.
I tried to run the SQL script from the sqlplus command and it's working.
OK, so there is sqlplus and it works; this means that the Oracle drivers are there.
Try to proceed as follows:
1) create a Python virtualenv in your $HOME. In Python 3:
python -m venv $HOME/my_venv
2) activate it
source $HOME/my_venv/bin/activate[.csh] # use activate.csh for csh, plain activate for bash
3) install pip using the python binary from your new virtualenv; it is well described here: https://pip.pypa.io/en/stable/installing/
TL;DR:
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py (this should install pip into your virtualenv as $HOME/my_venv/bin/pip[3])
4) install cx_Oracle:
pip install cx_Oracle
Now you should be able to import it in your Python code and connect to an Oracle DB.
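As a quick smoke test (not from the original post; host, port, service name and credentials below are placeholders), something like this should then work from the virtualenv's python:
# Minimal cx_Oracle connection sketch - all connection details are placeholders.
import cx_Oracle

dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="ORCLPDB1")
connection = cx_Oracle.connect(user="scott", password="tiger", dsn=dsn)
try:
    cursor = connection.cursor()
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())
finally:
    connection.close()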
I tried to connect to the Oracle database via SQL*Plus, and I am calling the script in the following way:
import os

os.environ['ORACLE_HOME'] = '<ORACLE PATH>'
os.chdir('<DIR NAME>')
VARIABLE = os.popen('./script_to_Call_sql_script.sh select.sql').read()
My shell script: script_to_Call_sql_script.sh
#!/bin/bash
envFile=ENV_FILE_NAME
envFilePath=<LOCATION_OF_ENV>${envFile}
ORACLE_HOME=<ORACLE PATH>
if [[ $# -eq 0 ]]
then
    echo "USAGE: please provide the positional parameter"
    echo "$(basename $0) <SQL SCRIPT NAME>"
    exit 1
fi
ECR=`$ORACLE_HOME/bin/sqlplus -s /@<server_name><<EOF
set pages 0
set head off
set feed off
@$1
exit
EOF`
echo $ECR
The above helped me get my work done on the production server.
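If you want a bit more control than os.popen (exit status, separate stderr), the same wrapper can be invoked with subprocess; here is a sketch under the same assumptions as above (the script name and select.sql come from the post, the directory is a placeholder):
import subprocess

# Same idea as the os.popen call above, but with return-code and stderr handling.
result = subprocess.run(
    ["./script_to_Call_sql_script.sh", "select.sql"],
    cwd="<DIR NAME>",  # placeholder directory, as in the original snippet
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print("sqlplus wrapper failed:", result.stderr)
else:
    print(result.stdout)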
Recently, when I was installing OpenStack on 3 VMs on CentOS 7 using an answer file, I had the following error:
10.7.35.174_osclient.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 10.7.35.174_osclient.pp
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y list python-iso8601' returned 1: Error: No matching Packages to list
You will find full trace in log /var/tmp/packstack/20160318-124834-91QzZC/manifests/10.7.35.174_osclient.pp.log
Please check log file /var/tmp/packstack/20160318-124834-91QzZC/openstack-setup.log for more information
Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 10.7.35.174. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.7.35.174/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
I have already manually installed that module, but the same problem occurs anyway.
That command only runs like this:
/usr/bin/yum -d 0 -e 0 -y list python2-iso8601
Is there any way to parse it to python?
Do you have any ideas how to solve it?
Found that the Kilo version works fine.
I am trying to set up Salt Stack for local development, but in masterless mode.
I have copied my states (top.sls, mystate.sls) to /srv/salt.
I have followed the instructions on the local development page and the salt masterless quickstart page, but when I run
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local salt.highstate -l debug
All I get is
[DEBUG ] Could not LazyLoad salt.highstate
'salt.highstate' is not available.
I'm running Salt in a Vagrant ubuntu/trusty64 VirtualBox virtual machine on a Mac.
It seems like other modules load (I see them in the debug listing) but for some reason highstate (highstate.py?) is not being loaded.
What am I doing wrong? Is there something additional I have to do for masterless development?
I got help on the #salt IRC channel from whytewolf - the problem was that the command should be state.highstate (not salt.highstate):
$ sudo /home/vagrant/.virtualenvs/myenv/bin/salt-call -c /home/vagrant/.virtualenvs/myenv/etc/salt --local state.highstate -l debug
Problem solved!