How to run multiple Fedora commands in Python

So I'm trying to have Python run multiple commands to install programs and enable SSH, to set up my Linux computer. I could type all this in by hand, but I'll be doing this on more devices, so I figured why not put it in a Python script. So far it's easier said than done. I did a boatload of research on this and couldn't find anything like it.
So here's what I've got so far:
import subprocess

SSH = "systemctl enable sshd"
payload = "nmap"  # it'll be one of a few I'll be installing

subprocess.call(["sudo", "yum", "install", "-y", payload])
subprocess.call(["sudo", SSH])
The first part of this works perfectly. It asks for my password, then updates and installs nmap. But for some reason the command "systemctl enable sshd" always throws it off. I know the command works, because if I just type it out by itself it runs fine, but for some reason it won't work through this script. I've tried subprocess.run as well. What am I missing here?
Here's the error that I get:
sudo: systemctl start sshd: command not found
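For the record, the direct cause of that error: subprocess.call(["sudo", SSH]) hands the whole string to sudo as a single argument, so sudo looks for a program with that exact multi-word name, hence "command not found". A minimal sketch of the immediate fix is to split the string into separate list elements (subprocess.run shown, since the question tried that too):

import subprocess
import shlex

SSH = "systemctl enable sshd"
payload = "nmap"

# Each argument must be its own list element.
subprocess.run(["sudo", "yum", "install", "-y", payload], check=True)
# shlex.split turns "systemctl enable sshd" into ["systemctl", "enable", "sshd"].
subprocess.run(["sudo"] + shlex.split(SSH), check=True)

check=True makes the script raise CalledProcessError if either command exits non-zero, so failures don't go unnoticed.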

What you want is Ansible.
Ansible uses SSH to connect to a list of machines and perform configuration tasks. Tasks are described in YAML, which is readable and scales well. You can use playbooks or ad hoc commands. For example, an ad hoc command to install a package would be:
ansible all -i inventory.file -m yum -a "name=payload state=present"
In a playbook, installing and enabling openssh-server would look like this:
---
- hosts: all                        # Single host or group of hosts from the inventory file
  become: yes                       # Become sudo
  tasks:                            # List of tasks
    - name: Install ssh-server      # Free-text description
      yum:                          # Module name
        name: openssh-server        # Name of the package
        state: present              # state: absent would uninstall the package
    - name: Start and enable service     # Free-text description of the task
      service:                      # Module name
        name: sshd                  # Name of the service
        state: started              # started or stopped
        enabled: yes                # Start the service on boot
    - name: Edit config file sshd_config # Description of the task
      lineinfile:                   # Module name
        path: /etc/ssh/sshd_config  # Which file to edit (note: /etc/ssh, not /etc/sshd)
        regexp: ^(# *)?PasswordAuthentication   # Which line to edit
        line: PasswordAuthentication no         # What to replace it with
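You can then apply it with ansible-playbook -i inventory.file playbook.yml (playbook.yml being whatever file name you saved it under).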
Ansible has great documentation (https://docs.ansible.com/); in a few days you will be up to speed.
Best regards.

Related

Issues trying to install AirFlow locally

I'm new to Airflow and I'm trying to install it locally, following the instructions at the link below:
https://airflow.apache.org/docs/apache-airflow/stable/start/local.html
I'm running this code (as given at the link):
# Airflow needs a home. `~/airflow` is the default, but you can put it
# somewhere else if you prefer (optional)
export AIRFLOW_HOME=~/airflow
# Install Airflow using the constraints file
AIRFLOW_VERSION=2.2.5
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
# For example: 3.6
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-${AIRFLOW_VERSION}/constraints-${PYTHON_VERSION}.txt"
# For example: https://raw.githubusercontent.com/apache/airflow/constraints-2.2.5/constraints-3.6.txt
pip install "apache-airflow==${AIRFLOW_VERSION}" --constraint "${CONSTRAINT_URL}"
# The Standalone command will initialise the database, make a user,
# and start all components for you.
airflow standalone
# Visit localhost:8080 in the browser and use the admin account details
# shown on the terminal to login.
# Enable the example_bash_operator dag in the home page
and getting this error:
File "C:\Users\F43555~1\AppData\Local\Temp/ipykernel_12908/3415008398.py", line 3
export AIRFLOW_HOME=~/airflow
^
SyntaxError: invalid syntax
Does anyone know how to deal with this?
I'm running on Windows 10, in VS Code (Jupyter notebook).
Thanks!
Airflow is only supported on Linux, and it looks like you're trying to run this on a Windows machine.
If you want to install Airflow on Windows you'll need to use something like Windows Subsystem for Linux (WSL) or Docker. There are some examples around which show you how to do this on WSL (and plenty using Docker); here is one of them using WSL.
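(If you go the WSL route: on recent Windows 10 builds, running wsl --install from an elevated PowerShell prompt sets up WSL with a default Linux distribution, and the shell snippet above should then run inside it unchanged.)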

boto3 gives error when trying to stop an AWS EC2 instance using Ansible

I am trying to create an Ansible playbook that installs docker & docker-compose on the host server, stops and starts the AWS EC2 instance, and then restarts Docker.
Everything goes well until I try to stop the instance; then this happens:
TASK [docker_setup : Gather facts] ******************************************************************************************************************************************
[DEPRECATION WARNING]: The 'ec2_instance_facts' module has been renamed to 'ec2_instance_info'. This feature will be removed in version 2.13. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [172.31.25.50]: FAILED! => {"changed": false, "msg": "boto3 required for this module"}
The steps to stop the instance look like this in the playbook:
- name: Install boto3 and botocore with pip3 module for Gather facts
  pip:
    name:
      - boto3
      - botocore
    executable: pip-3.7

- name: Gather facts
  action: ec2_instance_facts

- name: Stop myserver instance
  local_action:
    module: ec2
    region: "{{ region }}"
    instance_ids: "{{ ansible_ec2_instance_id }}"
    state: stopped
The reason I installed boto3 is that it was complaining about boto3 not being installed, but even with it installed the error remains. I also read around the Internet that I should add ansible_python_interpreter=/usr/bin/python in the hosts file next to each host, and so I did, but it didn't work. It looks like this:
[webservers]
172.31.25.50 ansible_python_interpreter=/usr/bin/python
Any ideas? Thank you!
What I did to solve this:
Instead of setting ansible_python_interpreter in the hosts file, I learned that you can actually set it for a specific task via vars:
- name: Stop instance(s)
  vars:
    ansible_python_interpreter: /usr/bin/python3
  ec2_instance:
    aws_access_key: xxxxx
    aws_secret_key: xxxxx
    region: "{{ region }}"
    instance_ids: "{{ ansible_ec2_instance_id }}"
    state: stopped
I also used python3 instead of python for the ansible_python_interpreter. When I put ansible_python_interpreter: /usr/bin/python3 in the hosts file, as I had been doing, it caused another error, because the whole play then ran under that interpreter; set via task vars, it applies only where you want it.
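(Note that in newer Ansible releases the EC2 modules ship in the amazon.aws collection, so the fully qualified module name is amazon.aws.ec2_instance; boto3 and botocore still need to be installed for the Python interpreter that executes the module.)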

Install Python to Self-hosted Windows build agent

I have installed a Windows agent and I need to be able to run Python scripts. I know I need to install Python, but I have no idea how.
I added the Python files from a standard installation to:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.8.2/
            x64/
                {tool files}
            x64.complete
I restarted the agent, but what now? How do I get it into Capabilities? What am I missing?
EDIT:
I need to run this YAML task:
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true

- script: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
  displayName: 'Install dependencies'

- task: BatchScript@1
  displayName: 'Run script make.bat'
  inputs:
    filename: make.bat
    arguments: html
I have set up a self-hosted agent on a Windows 10 laptop, (for which I have admin access), and I'm running Azure DevOps Express 2020.
I found, downloaded and installed an agent according to the instructions at Download and configure the agent. I used vsts-agent-win-x64-2.170.1.zip and set it up to run as a service (I guess anyone running it manually needs to double-check that it's running at show time). I also ran the install command as admin in PowerShell!
To install a Python version, I downloaded the appropriate installer from the FTP site at Python.org; e.g. for 3.7.9 I used python-3.7.9-amd64.exe.
I then ran this from the command line (CMD run as administrator), without UI:
python-3.7.9-amd64.exe /quiet InstallAllUsers=0 TargetDir=$AGENT_TOOLSDIRECTORY\Python\3.7.9\x64 Include_launcher=0
(other options for the install are available in the Python docs)
Once this is complete (it runs in the background, so it will take longer than the initial command suggests), you need to create an empty {platform}.complete file (as described here); in my case this is x64.complete.
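(From CMD, type NUL > x64.complete creates the empty marker file in the version directory.)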
This then worked! I did restart the server for this first version, but I've added other python versions since without needing to. My pipeline task was simply:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python $(python.version)'
  inputs:
    versionSpec: '$(python.version)'
(with a variable python.version set up as a list of versions: 3.7.9, 3.8.8)
One key element for me was the file structure. Where the documentation says {tool files}, it means the python.exe file and the other common dirs such as Lib and Scripts. I initially installed these in a sub-dir, which didn't work. So it should look like this:
$AGENT_TOOLSDIRECTORY/
    Python/
        3.7.9/
            x64/
                Doc/
                Lib/
                Scripts/
                python.exe
                ...etc...
            x64.complete
To be honest, I'm mostly relieved that this worked without too much trouble. I gave up trying to get Artifacts to work on-prem. In my limited experience all of this is much easier, and better, on the cloud version. I haven't yet persuaded my employer to take that leap, however...
For this issue, in order to use the Python version installed on your on-premises machine, you either need to point to the physical path of python.exe in a CMD task, or add the python.exe path to the Path environment variable manually in a PowerShell task. For example:
To use local Python in a PowerShell task:
$env:Path += ";c:\{local path to}\Python\Python38\; c:\{local path to}\Python\Python38\Scripts\"
python -V
To use Python in a CMD task:
c:\{local path to}\Python\Python38\python.exe -V
c:\{local path to}\Python\Python38\Scripts\pip.exe install
So, to run Python scripts with a private agent, just make sure Python is installed locally, then point to the python.exe path. You can refer to this case for details.
I added these 4 tasks before being able to execute Python in my pipeline with a vs2017-win2016 agent:
Use Python 3.x
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.x'
Use Pip Authenticate
steps:
- task: PipAuthenticate@1
  displayName: 'Pip Authenticate'
Use Command line task
steps:
- script: |
    python -m pip install --upgrade pip setuptools wheel
  failOnStderr: true
  displayName: 'install pip for setup of python framework'
Use Command line task
steps:
- script: 'pip install -r _python-test-harness/requirements.txt'
  failOnStderr: true
  displayName: 'install python framework project''s specific requirements'
Hope that helps

Local GitLab runner freezes while Shared GitLab.com runner succeeds

EDIT: As Rekovni pointed out, using a GitLab runner with Docker on a Windows machine is problematic. Installing the runner in a Linux-based virtual machine solved the problem.
I am developing a Python program using a conda environment. It is hosted on GitLab.com and I am using GitLab-CI to generate the documentation.
I configured the following .gitlab-ci.yml file for it:
image: continuumio/miniconda3:latest

before_script:
  # Update conda and create the environment, which is then activated.
  - conda update -vvv -y -c conda-forge conda
  - conda env create -f helpers/NAME.yml
  - source activate NAME
  # Correct installation.
  - conda install -q -y gsl=2.2.1

pages:
  script:
    # Install make.
    - apt-get update
    - apt-get install -q -y build-essential
    # Install Sphinx-related packages.
    - conda install -q -y sphinx sphinx_rtd_theme
    # Create documentation.
    - cd REPO/doc
    - sphinx-apidoc -o source/ ../REPO --force --separate
    - make html
    # Transfer documentation to public pages folder.
    - mv build/html/ ../../public/
  artifacts:
    paths:
      - public
  # only:
  #   - master
Running this script with a shared GitLab runner supplied by GitLab.com works, and the documentation is generated and placed in the public folder.
For future unit tests (which take much longer), I want to provide a local runner on a Windows 10 machine on my network. For this, I installed gitlab-runner.exe and Docker Desktop. I successfully registered the runner with the project on GitLab.com.
The runner is using the following config.toml configuration file:
concurrent = 1
check_interval = 0
log_level = "info"

[session_server]
  session_timeout = 1800

[[runners]]
  name = "NAME"
  url = "https://gitlab.com"
  token = "TOKEN"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
The problem now is that the local runner freezes during the execution of the above script, without producing any error messages, and I am at a loss as to how to debug it. What I have is:
1. the log of the script that is shown on the job page on GitLab.com; and
2. the console output of gitlab-runner.exe on the local machine.
Regarding 1., I see
Running with gitlab-runner 11.10.0 (3001a600)
...
Checking out COMMIT_HASH as BRANCH_NAME...
...
$ conda update -vvv -y -c conda-forge conda
DEBUG conda.gateways.logging:set_verbosity(148): verbosity set to 3
...
...
...
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html.c~
where it abruptly stops without reaching the - conda env create -f helpers/NAME.yml line.
Regarding 2., I see
C:\GitLab-Runner>gitlab-runner.exe --debug run
Runtime platform arch=amd64 os=windows pid=14116 revision=3001a600 version=11.10.0
Starting multi-runner from C:\GitLab-Runner\config.toml ... builds=0
Checking runtime mode GOOS=windows uid=-1
Configuration loaded builds=0
...
Feeding runners to channel builds=0
Checking for jobs... nothing runner=TOKEN
Feeding runners to channel builds=0
Checking for jobs... received job=203033130 repo_url=REPO_URL.git runner=TOKEN
...
Attaching to container HASH ... job=203033130 project=6249897 runner=TOKEN
Starting container HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for attach to finish HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for container HASH ... job=203033130 project=6249897 runner=TOKEN
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-10348 job-status=running runner=TOKEN sent-log=1801-10347 status=202 Accepted
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-19445 job-status=running runner=TOKEN sent-log=10348-19444 status=202 Accepted
...
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-933150 job-status=running runner=TOKEN sent-log=241860-933149 status=202 Accepted
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
where it seems that the switch from Appending trace to coordinator to Submitting job to coordinator happens around the time when it gets stuck.
After this, 1. is not updated with any further information and 2. is stuck in a Submitting job to coordinator loop.
Does anyone know:
What the reason for the failure with a local runner could be (when the same script works with a shared runner)?
What I could do to debug this problem?
Thanks and all the best,
Thomas
GitLab CI doesn't currently offer a solution for using its runner with Docker in a Windows environment; however, there is an epic at the moment which is tracking progress on this.
In one of the issues of the epic, a contributor has managed to get a working version of a gitlab-runner which uses Docker for Windows; more details can be found here.
A more common (and potentially easier) way of using Docker in a Windows environment would be to install the gitlab-runner as a Shell runner and call the Docker commands manually to run your tests.
Alternatively, if you just want to keep using the same CI script, you could install a Linux VM on your Windows 10 machine and have that host the Docker runner!

Can't connect MongoDb on AWS EC2 using python

I have installed MongoDB 3.0 using this tutorial:
https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/
It installed fine. I have also given the 'ec2-user' permissions to all the data and log folders, i.e. /var/lib/mongo and /var/log/mongodb, and have set up the conf file as well.
Now the thing is that the MongoDB server always fails to start with the command
sudo service mongod start
It just says failed, nothing else.
Whereas if I run
mongod --dbpath /var/lib/mongo
it starts the MongoDB server correctly (though I have specified the same dbpath in the .conf file as well).
What am I doing wrong here?
When you run sudo mongod it does not load a config file at all; it literally starts with the compiled-in defaults: port 27017, a database path of /data/db, and so on. That is why you got the error about not being able to find that folder. The distribution default is only used when you point mongod at the config file (if you start it using the service command, this is done for you behind the scenes).
Next you ran it like this:
sudo mongod -f /etc/mongodb.conf
If there weren't problems before, there will be now: you have run the process, with your normal config (pointing at your usual dbpath and log), as the root user. That means that a number of files in that normal MongoDB folder will now have the user:group of root:root.
This will cause errors when you try to start it as a normal service again, because the mongodb user (which the service will attempt to run as) will not have permission to access those root:root files, and most notably, it will probably not be able to write to the log file to give you any information.
Therefore, to run it as a normal service, we need to fix those permissions. First, make sure MongoDB is not currently running as root, then:
cd /var/log/mongodb
sudo chown -R mongodb:mongodb .
cd /var/lib/mongodb
sudo chown -R mongodb:mongodb .
That should fix it up (assuming the user:group is mongodb:mongodb), though it's probably best to verify with an ls -al or similar to be sure. Once this is done you should be able to get the service to start successfully again.
If you’re starting mongod as a service using:
sudo service mongod start
Make sure the directories defined for logpath, dbpath, and pidfilepath in your mongod.conf exist and are owned by mongod:mongod.
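(On Amazon Linux, where the packages create a mongod user rather than mongodb, that typically means something like sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb, mirroring the chown fix above.)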
