BashOperator doesn't run a bash file that requires sudo privileges - python

I have a script called CC that collects data and pushes it into a data warehouse. I created a DAG for it:
Task_I = BashOperator(
    task_id="CC",
    run_as_user="koa",
    bash_command="sudo /home/koa/CC"
)
and I've added permission to run it without typing a password by modifying /etc/sudoers:
koa ALL = (ALL) NOPASSWD: /home/koa/CC
However, the task fails in Airflow and the log states that a password is needed:
{bash_operator.py:146} INFO - Running command: sudo /home/koa/CC
{bash_operator.py:153} INFO - Output:
{bash_operator.py:157} INFO - sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
{bash_operator.py:159} INFO - Command exited with return code 1
It would be great if you guys could help me; I'm new to Airflow and have been struggling with this for the last few hours.
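As an editorial aside, one hedged thing to try (a sketch only; whether it resolves the issue depends on how sudo is being invoked for the task) is sudo's non-interactive flag, so a sudoers mismatch fails immediately in the log instead of prompting for a password:

from airflow.operators.bash_operator import BashOperator

# Sketch: sudo -n never prompts; it exits with an error instead, which makes
# a non-matching sudoers rule visible directly in the Airflow task log.
Task_I = BashOperator(
    task_id="CC",
    run_as_user="koa",
    bash_command="sudo -n /home/koa/CC"
)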

Related

How to run multiple Fedora commands in Python

So I'm trying to have Python run multiple commands to install programs and enable SSH in order to set up my Linux computer. I would type all this in manually, but I'll be doing this on more devices, so I figured why not put it in a Python script; so far that's easier said than done. I did a boatload of research on this and I can't find anything like it.
So here's what I got so far.
import subprocess

SSH = "systemctl enable sshd"
payload = "nmap"  # it'll be one of a few I'll be installing
subprocess.call(["sudo", "yum", "install", "-y", payload])
subprocess.call(["sudo", SSH])
The first part of this works perfectly: it asks for my password, then it updates and installs nmap. But for some reason the command "systemctl enable sshd" always seems to throw it off. I know the command works because I can type it out by itself and it runs just fine, but for some reason it won't work through this script. I've used subprocess.run as well. What am I missing here?
Here's the error that I get:
sudo: systemctl start sshd: command not found
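As a side note, the error appears because the whole string "systemctl enable sshd" is handed to sudo as a single program name; splitting it into separate list elements avoids that particular failure. A minimal sketch:

import subprocess

# Each argument is its own list element, so sudo sees "systemctl" as the
# command and "enable"/"sshd" as its arguments.
subprocess.call(["sudo", "systemctl", "enable", "sshd"])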
What you want is Ansible.
Ansible uses SSH to connect to a list of machines and perform configuration tasks. Tasks are described in YAML, which is readable and scales. You can have playbooks and ad hoc commands. For example, an ad hoc command to install a package would be
ansible all -b -i inventory.file -m yum -a "name=nmap state=present"
In a playbook, installing and enabling openssh-server looks like this:
---
- hosts: all                                 # Single host or group of hosts from the inventory file
  become: yes                                # Become sudo
  tasks:                                     # List of tasks
    - name: Install ssh-server               # Free-text description
      yum:                                   # Module name
        name: openssh-server                 # Name of the package
        state: present                       # state: absent would uninstall the package
    - name: Start and enable service         # Free-text description of the task
      service:                               # Module name
        name: sshd                           # Name of the service
        state: started                       # started or stopped
        enabled: yes                         # Start the service on boot
    - name: Edit config file sshd_config     # Description of the task
      lineinfile:                            # Module name
        path: /etc/ssh/sshd_config           # Which file to edit
        regexp: ^(# *)?PasswordAuthentication   # Which line to edit
        line: PasswordAuthentication no      # What to change it to
Ansible has great documentation at https://docs.ansible.com/; in a few days you will be up to speed.
Best regards.
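If you want to drive that playbook from the existing Python script rather than from the shell, here is a minimal sketch (the playbook and inventory file names below are just assumptions):

import subprocess

# Assumes ansible-playbook is installed on the control machine and that
# inventory.file / setup.yml are the files described above.
subprocess.run(
    ["ansible-playbook", "-i", "inventory.file", "setup.yml"],
    check=True,  # raise CalledProcessError if the playbook fails
)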

How to check the status of docker-compose up -d command

When we run the docker-compose up -d command to start containers from a docker-compose.yml file, it starts building images or pulling them from the registry, and we can see each and every step of this command on the terminal.
I am trying to run this command from a Python script. The command starts successfully, but after that I have no idea how far the process has progressed. Is there any way I can monitor the status of the docker-compose up -d command so that the script can let the user know how much of the process has completed, or whether the docker-compose command has failed for some reason?
Thanks
CODE:
from pexpect import pxssh

session = pxssh.pxssh()
if not session.login(ip_address, <USERNAME>, <PASSWORD>):
    print("SSH session failed on login")
    print(str(session))
else:
    print("SSH session login successful")
    session.sendline("sudo docker-compose up -d")
    session.prompt()
    resp = session.before
    print(resp)
You can view docker compose logs in the following ways:
Use docker compose up -d to start all services in detached mode (-d); you won't see any logs in detached mode.
Use docker compose logs -f -t to attach yourself to the logs of all running services, where -f means you follow the log output and the -t option gives you nice timestamps (Docs).
credit
EDIT: Docker Compose is now available as part of the core Docker CLI. docker-compose is still supported for now but most documentation I have seen now refers to docker compose as standard. See https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command for more.
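If you need to check this from the Python script itself, here is a minimal sketch using subprocess (it assumes the script runs on the compose host, in the project directory, with docker-compose on PATH):

import subprocess

# Start the services detached and capture any error output.
result = subprocess.run(
    ["docker-compose", "up", "-d"],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print("docker-compose up failed:")
    print(result.stderr)
else:
    # List the container IDs of the services that are now up.
    ps = subprocess.run(
        ["docker-compose", "ps", "-q"],
        capture_output=True,
        text=True,
    )
    print("Running containers:", ps.stdout.split())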
I think you could use the command docker-compose top and check the result; it should not be empty when the containers are running.
If the containers are stopped, exited, or only created, it returns empty output.
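A hedged Python version of that check might look like this (again assuming docker-compose is on PATH and the script runs in the project directory):

import subprocess

# Non-empty `docker-compose top` output is treated as "something is running".
out = subprocess.run(
    ["docker-compose", "top"],
    capture_output=True,
    text=True,
).stdout
if out.strip():
    print("At least one container is running")
else:
    print("No containers are running")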
What I do to debug small issues is to run:
docker-compose up {service_name}
This way I get to see the output for an individual service. If the service has a dependency you can always start multiple services like so:
docker-compose up {service_name1} {service_name2}
Additionally I use:
docker-compose logs -f -t {service_name1}
To see the logs of an already running service or alternatively:
docker logs -t -f {container_name}
Notice that the command above needs the container name and not the service name
This way you can make sure service by service that everything works as expected and then you can launch them all in detached mode as suggested in the other answers
If you need a programmatic way with bash, this is a quick implementation:
sleep 2 seconds
check that the container was up several seconds ago => means you've just successfully deployed it
docker ps will look like:
a6f088b1567e lc_fe_isr-app "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:10001->3000/tcp lc_fe_isr-app-1
#!/bin/bash
#
# Check if a single container was started successfully
#
CONTAINER_NAME="lc_fe_isr-app-1"
sleep 2
docker ps | grep $CONTAINER_NAME
UP_SECONDS_AGO=`docker ps | grep $CONTAINER_NAME | grep ' seconds'`
echo $UP_SECONDS_AGO
if [ -n "$UP_SECONDS_AGO" ]
then
echo "Deploy successfully"
else
echo "Deploy FAILED"
exit 1
fi

Notify user while Python is running in the background

So I have some code that I'm trying to run, and when it finishes it should display a message to the user saying what changes have happened. This works fine if I run it in a terminal, but when I add it to a cronjob nothing happens and no error is shown.
It seems to me that it is losing the display session, but I can't figure out how to solve it. Here is the show-message code:
import subprocess

def sendmessage(message):
    subprocess.Popen(['notify-send', message])
    return
I also tried this version:
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify

def sendmessage(message):
    Notify.init("Changes")
    n = Notify.Notification.new(message)
    n.show()
    return
Which gives the following error (again only when run in the background):
cannot open display:
Activated service 'org.freedesktop.Notifications' failed: Process org.freedesktop.Notifications exited with status
I would be very thankful for any help or alternative approach.
I had a similar issue running a system daemon, where I wanted the user to be notified when a network upload bandwidth limit was exceeded.
The program was designed to run under systemd but at a push also under upstart.
You may get some mileage out of my configuration files:
systemd - bandwidth.service file
[Unit]
Description=Bandwidth traffic monitor
Documentation=man:bandwidth(7)
After=graphical.target

[Service]
Type=simple
Environment="DISPLAY=:0" "XAUTHORITY=/home/#USER#/.Xauthority"
PIDFile=/var/run/bandwidth.pid
ExecStart=/usr/bin/dbus-launch /usr/sbin/bandwidth_logd
ExecReload=/bin/kill -HUP $MAINPID
User=root

[Install]
WantedBy=graphical.target
upstart - bandwidth.conf file
#
# These are the scripts that run when a network appears.
# Use this on an upstart system in /etc/init
# test for systemd or upstart system with
# ps -p1 | grep systemd && echo systemd || echo upstart
# better
# ps -p1 | grep systemd >/dev/null && echo systemd || echo upstart
# Using upstart the script will need to daemonise which is a bugger
# so test for it and import Daemon_server.py
description "Bandwidth upstart events"
start on net-device-up # Start a daemon or run a script
stop on net-device-down # (Optional) Stop a daemon, scripts already self-terminate.
# Automatically restart process if crashed
respawn
# Essentially lets upstart know the process will detach itself to the background
expect fork
#Set environment variables
env DISPLAY=":0"
export DISPLAY
env XAUTHORITY="/home/#USER#/.Xauthority"
export XAUTHORITY
script
# You can put shell script in here, including if/then and tests.
# replace #USER# with your name and ensure that you have .Xauthority in $HOME
/usr/sbin/bandwidth_logd
end script
You will note that in both configuration files the environment becomes key and #USER# is replaced by a real user name with a valid .Xauthority file in their $HOME directory.
In the Python code I use the following to emit the message (import notify2).
import notify2

def warning(msg):
    result = True
    try:
        notify2.init("Bandwidth")
        mess = notify2.Notification("Bandwidth", msg, '/usr/share/bandwidth/bandwidth.png')
        mess.set_urgency(2)
        mess.set_timeout(0)
        mess.show()
    except Exception:
        result = False
    return result
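For the original cron case, a minimal sketch along the same lines (it assumes the desktop session runs on display :0 and that the invoking user's .Xauthority file sits in their home directory) would be to export those variables before sending the notification:

import os
import subprocess

# Cron jobs start without a graphical environment, so point the process at
# the (assumed) running X session before calling notify-send.
# notify-send also needs to reach the session D-Bus; the answer above handles
# that with dbus-launch.
os.environ.setdefault("DISPLAY", ":0")
os.environ.setdefault("XAUTHORITY", os.path.expanduser("~/.Xauthority"))

def sendmessage(message):
    subprocess.Popen(['notify-send', message])
    return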

Airflow quickstart not working

Hi I've just started using Airflow, but I cannot manage to make the task in the quickstart run: airflow run example_bash_operator runme_0 2015-01-01.
I've just created a conda environment with python 2.7.6 and installed airflow through pip which installed airflow==1.8.0. Then I ran the commands listed here https://airflow.incubator.apache.org/start.html.
When I try to run the first task instance, by looking at the UI nothing seems to happen. Here's the output of the command:
(airflow) ✔ se7entyse7en in ~/Projects/airflow  $ airflow run example_bash_operator runme_0 2015-01-01
[2017-07-28 12:06:22,992] {__init__.py:57} INFO - Using executor SequentialExecutor
Sending to executor.
[2017-07-28 12:06:23,950] {__init__.py:57} INFO - Using executor SequentialExecutor
Logging into: /Users/se7entyse7en/airflow/logs/example_bash_operator/runme_0/2015-01-01T00:00:00
On the other hand the backfill works fine: airflow backfill example_bash_operator -s 2015-01-01 -e 2015-01-02.
What am I missing?
I've just found that if a single task is run, it is listed under Browse > Task Instances as part of any DAG.
The run command is used to run a single task instance.
But it will only be able to run if you have cleared any previous runs.
To clear the run:
go to the Airflow UI (Graph View),
click on the particular task and click Clear.
Now you will be able to run the task with the command that you initially had.
To view the logs for this task you can run:
vi /Users/se7entyse7en/airflow/logs/example_bash_operator/runme_0/2015-01-01T00:00:00
I had a task like:
t2 = BashOperator(
    task_id='sleep',
    depends_on_past=False,
    bash_command='sleep 35',
    dag=dag)
I was able to see the changes in the state of the task as it was getting executed.
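For context, a minimal sketch of the DAG around such a task (the DAG id, start date, and import path below are assumptions for an Airflow 1.8-style setup):

from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# Hypothetical DAG; only the BashOperator definition comes from the answer above.
dag = DAG('example_sleep', start_date=datetime(2015, 1, 1), schedule_interval=None)

t2 = BashOperator(
    task_id='sleep',
    depends_on_past=False,
    bash_command='sleep 35',
    dag=dag)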

Can't connect to MongoDB on AWS EC2 using Python

I have installed Mongodb 3.0 using this tutorial -
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It has installed fine. I have also given the 'ec2-user' permissions on all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
The thing is that the mongodb server always fails to start with the command
sudo service mongod start
it just says failed, nothing else.
However, if I run the command
mongod --dbpath /var/lib/mongo
it starts the mongodb server correctly (though I have specified the same dbpath in the .conf file as well).
What is it I am doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults - port 27017, database path of /data/db, etc. - that is why you got the error about not being able to find that folder. The "Ubuntu default" is only used when you point it at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, then there will be now - you have run the process, with your normal config (pointing at your usual dbpath and log) as the root user. That means that there are going to now be a number of files in that normal MongoDB folder with the user:group of root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
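Once the service is up, a quick way to verify connectivity from Python is a ping through pymongo (a sketch; it assumes pymongo is installed and mongod is listening on the default port on the same host):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Short server-selection timeout so a down server fails fast.
client = MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=2000)
try:
    client.admin.command("ping")
    print("MongoDB is up")
except ServerSelectionTimeoutError as exc:
    print("Cannot reach MongoDB:", exc)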
