I have a Python script that takes two paths, one as an input folder and the other as an output folder, via sys.argv. For example:
python script.py from to
If no path is provided, say python script.py, it takes the default folders, which are from and to.
I have created a Docker image, and I am mounting my local folder this way:
docker run -v "$(pwd):/folder" myimage
As in this case I am not providing the folder-name arguments, it takes them by default and puts them in the folder directory of the container. This is working.
But if I want to pass custom paths, how can I do that?
EDIT:
Let's say here is the code:
import sys

argl = len(sys.argv)
if argl == 1:
    dir_from = 'from'
    dir_to = 'to'
elif argl == 3:
    dir_from = sys.argv[1]
    dir_to = sys.argv[2]
So if I pass python script.py, the first if condition applies, and if I pass arguments like python script.py abc/from abc/to, the elif condition applies.
The command docker run -v "$(pwd):/folder" myimage picks the first condition, but how do I pass custom paths to it? For example, something like this:
docker run -v "abc/from abc/to:/folder" myimage
Here's how to pass in default values for your from and to path parameters at the time that you launch your container.
Define the two env vars as part of launching your container:
export FROM_PATH=<get the from path default from wherever is appropriate>
export TO_PATH=<get the to path default from wherever is appropriate>
Launch your container:
docker run -e FROM_PATH -e TO_PATH ... python script.py
You could specify the values for the env vars in the run command itself via something like -e FROM_PATH=/a/b/c. If you don't provide values, the values are assumed to be already defined in local env vars with the same name, as I did above.
Then inside your container code:
import os
import sys

argl = len(sys.argv)
if argl == 1:
    dir_from = os.getenv('FROM_PATH')
    dir_to = os.getenv('TO_PATH')
elif argl == 3:
    dir_from = sys.argv[1]
    dir_to = sys.argv[2]
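For example, combining this with the volume mount from the question (the /folder mount point, the image name myimage, and the script name come from the question; the abc subpaths are placeholders I made up):
export FROM_PATH=/folder/abc/from
export TO_PATH=/folder/abc/to
docker run -v "$(pwd):/folder" -e FROM_PATH -e TO_PATH myimage python script.py
Since the container sees the host's current directory under /folder, the env vars point at the mounted copies of the custom folders.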
Related
This is a follow-up question to Use tkinter based PySimpleGUI as root user via pkexec.
I have a Python GUI application. It should be able to run as a regular user and as root. For the latter I know I have to set $DISPLAY and $XAUTHORITY to get a GUI application working under root. I use pkexec to start the application as root.
I assume the problem is how I use os.execvp() to call pkexec with all its arguments, but I don't know how to fix it. In the linked previous question and answer it works when calling pkexec directly via bash.
For this example the full path of the script should be /home/user/x.py.
#!/usr/bin/env python3
# FILENAME need to be x.py !!!
import os
import sys
import getpass

import PySimpleGUI as sg


def main_as_root():
    # See: https://stackoverflow.com/q/74840452
    cmd = ['pkexec',
           'env',
           f'DISPLAY={os.environ["DISPLAY"]}',
           f'XAUTHORITY={os.environ["XAUTHORITY"]}',
           f'{sys.executable} /home/user/x.py']
    # output here is
    # ['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 ./x.py']
    print(cmd)
    # replace the process
    os.execvp(cmd[0], cmd)


def main():
    main_window = sg.Window(title=f'Run as "{getpass.getuser()}".',
                            layout=[[]], margins=(100, 50))
    main_window.read()


if __name__ == '__main__':
    if len(sys.argv) == 2 and sys.argv[1] == 'root':
        main_as_root()  # no return because of os.execvp()
    # else
    main()
Calling the script as /home/user/x.py root means that the script calls itself again via pkexec. I got this output (translated from German):
['pkexec', 'env', 'DISPLAY=:0.0', 'XAUTHORITY=/home/user/.Xauthority', '/usr/bin/python3 /home/user/x.py']
/usr/bin/env: '/usr/bin/python3 /home/user/x.py': No such file or directory
/usr/bin/env: use -[v]S to pass options in '#!' lines
For me it looks like the python3 part of the command is interpreted by env and not by pkexec. Something is not working as expected when the cmd is run via os.execvp().
But when I do this in the shell, it works fine:
pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY python3 /home/user/x.py
Based on @TheLizzard's comment:
The approach itself is fine and has no problem. The issue is only the last element in the command array cmd: it should be split into two elements.
cmd = ['pkexec',
       'env',
       f'DISPLAY={os.environ["DISPLAY"]}',
       f'XAUTHORITY={os.environ["XAUTHORITY"]}',
       f'{sys.executable}',
       '/home/user/x.py']
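To see why the split matters, here is a minimal demo (my own illustration, assuming a Unix system with env and python3 on the PATH): every element of the argument list becomes exactly one argv entry, so a space inside an element is not a separator.
import subprocess

# One element per argument: env finds `python3` and passes `-c` and the
# program text as separate arguments, so this prints "ok".
subprocess.run(['env', 'python3', '-c', 'print("ok")'])

# One element containing spaces: env looks for a program literally named
# `python3 -c print("ok")` and fails with "No such file or directory".
subprocess.run(['env', 'python3 -c print("ok")'])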
I've seen this post, which covers "Passing file as argument to Docker container", and have followed that. But I'd like to be able to write out to a different file rather than overwriting the file that was passed in.
An example docker image:
FROM python:3.8-slim
RUN useradd --create-home --shell /bin/bash app_user
WORKDIR /home/app_user
USER app_user
# I guess I have to assume that this image is built wherever this module is...
COPY script.py .
where script.py contains (sorry if it's a bit long for what it is):
import sys
import pathlib
import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="meh",
        formatter_class=argparse.RawTextHelpFormatter,
    )
    parser.add_argument(
        "-c",
        "--convert_file",
        type=str,
        help="Convert this file.",
    )
    parser.add_argument(
        "-o",
        "--output_file",
        type=str,
        help="Output converted file here.",
    )
    args = parser.parse_args()
    if args.convert_file:
        if not args.output_file:
            output_file = '/something.txt'
        else:
            output_file = args.output_file
        file_path = pathlib.Path(args.convert_file)
        text = pathlib.Path(file_path).read_text().upper()
        pathlib.Path(output_file).write_text(text)
    raise SystemExit()
And this can be run as:
docker run -v <abs path local system>/something.txt:/home/app_user/something.txt \
-t app:latest \
python -m script -c /home/app_user/something.txt -o /home/app_user/something.txt
where something.txt contains:
this is a test, should start lowercase and finish uppercase.
before running, and after running it contains:
THIS IS A TEST, SHOULD START LOWERCASE AND FINISH UPPERCASE.
So, it works to a degree, but instead of overwriting something.txt I'd like to be able to create something_upper.txt (or whatever).
So - how can I take a file as input to a docker container, and write out to a different file on the local system?
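One approach that may work here (my suggestion, not something from the post): mount the containing directory instead of the single file, so any new file the script creates under that directory also shows up on the local system. The /home/app_user/data mount point below is a name I made up:
docker run -v <abs path local system>:/home/app_user/data \
    -t app:latest \
    python -m script -c /home/app_user/data/something.txt \
    -o /home/app_user/data/something_upper.txt
Because the whole directory is bind-mounted, something_upper.txt is written straight through to the local system.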
As of now I do
kubectl --context <cluster context> get pod -A
to get the pods in a specific cluster.
Is there a Python way to set the Kubernetes context for a virtual env, so we can use multiple contexts at the same time?
Example:
Terminal 1:
(cluster context1) user#machine #
Terminal 2:
(cluster context2) user#machine #
This should be the equivalent of:
Terminal 1:
user#machine # kubectl --context <cluster context1> get pod -A
Terminal 2:
user#machine # kubectl --context <cluster context2> get pod -A
This probably isn't a rational solution, but anyway... At some point I used different kubectl versions for different clusters, and I came up with a venv-like solution for switching between them. I wrote text files like this:
export KUBECONFIG="/path/to/kubeconfig"
export PATH="/path/including/the/right/kubectl"
And I activated them in the same fashion as a venv: source the_file. If you can split your contexts into separate files, you can add export KUBECONFIG="/path/to/kubeconfig" to your venv/bin/activate, and it will use the desired config whenever you activate the venv.
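For example, appended at the end of venv/bin/activate (the kubeconfig path is a placeholder):
# appended to venv/bin/activate
export KUBECONFIG="$HOME/.kube/config-context-1"
After re-activating the venv, kubectl and any Kubernetes client started from that shell read their context from that file.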
I would try initializing one client object per cluster, as suggested in the official client repo:
from pick import pick  # install pick using `pip install pick`
from kubernetes import client, config
from kubernetes.client import configuration


def main():
    contexts, active_context = config.list_kube_config_contexts()
    if not contexts:
        print("Cannot find any context in kube-config file.")
        return
    contexts = [context['name'] for context in contexts]
    active_index = contexts.index(active_context['name'])
    cluster1, first_index = pick(contexts, title="Pick the first context",
                                 default_index=active_index)
    cluster2, _ = pick(contexts, title="Pick the second context",
                       default_index=first_index)

    client1 = client.CoreV1Api(
        api_client=config.new_client_from_config(context=cluster1))
    client2 = client.CoreV1Api(
        api_client=config.new_client_from_config(context=cluster2))

    print("\nList of pods on %s:" % cluster1)
    for i in client1.list_pod_for_all_namespaces().items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))

    print("\n\nList of pods on %s:" % cluster2)
    for i in client2.list_pod_for_all_namespaces().items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))


if __name__ == '__main__':
    main()
Read more here
You can also use Python with pick to pick the context to load:
from pick import pick  # `pip install pick`
from kubernetes import client, config
from kubernetes.client import configuration


def main():
    contexts, active_context = config.list_kube_config_contexts()
    if not contexts:
        print("Cannot find any context in kube-config file.")
        return
    contexts = [context['name'] for context in contexts]
    active_index = contexts.index(active_context['name'])
    option, _ = pick(contexts, title="Pick the context to load",
                     default_index=active_index)
    # Configs can be set in Configuration class directly or using helper
    # utility
    config.load_kube_config(context=option)
    print("Active host is %s" % configuration.Configuration().host)


if __name__ == '__main__':
    main()
You can also try using environment variables in different terminals to store the different Kubernetes context details.
First, create separate config files for the cluster contexts you would like to switch between:
Terminal 1:
user#machine $ kubectl config view --minify --flatten --context=context-1 > $HOME/.kube/config-context-1
Terminal 2:
user#machine $ kubectl config view --minify --flatten --context=context-2 > $HOME/.kube/config-context-2
Create different virtual environments for the different clusters and activate them:
Terminal 1:
user#machine $ python3 -m venv context-1
user#machine $ . ./context-1/bin/activate
Terminal 2:
user#machine $ python3 -m venv context-2
user#machine $ . ./context-2/bin/activate
Export the new config files in the respective environments:
Terminal 1:
(context-1) user#machine $ export KUBECONFIG="$HOME/.kube/config-context-1"
Terminal 2:
(context-2) user#machine $ export KUBECONFIG="$HOME/.kube/config-context-2"
If you check your pods now, each terminal will be using a different context.
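If you also use the Python client inside those venvs, it picks up the same setting, since config.load_kube_config() honours the KUBECONFIG environment variable by default. A minimal sketch (assuming the kubernetes package is installed in each venv):
from kubernetes import client, config

# Reads the file pointed to by $KUBECONFIG (falling back to ~/.kube/config),
# so each terminal/venv talks to its own cluster.
config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name)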
I've been trying to troubleshoot this for days now and would appreciate some help.
Basically, I wrote the following Python script:
import os, sys

# =__=__=__=__=__=__=__ START MAIN =__=__=__=__=__=__=__
if __name__ == '__main__':
    # initialize variables
    all_files = []
    # directory to download data siphon files to
    dDir = '/path/to/download/directory/'
    # my S3 bucket
    s3bucket = "com.mybucket/"
    foldername = "test"
    # get a list of available feeds
    feeds = <huge JSON object with URLs to feeds>
    for item in range(feeds['count']):
        # ...check if the directory exists, and if not, create the directory...
        if not os.path.exists(folderName):
            os.makedirs(folderName)
        ... ... ...
        # Loop through all the splits
        for s in dsSplits:
            ... ... ...
            location = requestFeedLocation(name, timestamp)
            ... ... ...
            downloadFeed(location[0], folderName, nameNotGZ)
    # THIS IS WHERE I AM HAVING PROBLEMS!!!!!!!!!!!
    cmd = 's3cmd sync 'dDir+folderName+'/ s3://'+s3bucket+'/'
    os.system(cmd)
Everything in my code works: when I run it straight from the command line, everything runs as expected. However, when it is executed via cron, the following does NOT execute (everything else does):
# THIS IS WHERE I AM HAVING PROBLEMS!!!!!!!!!!!
cmd = 's3cmd sync 'dDir+folderName+'/ s3://'+s3bucket+'/'
os.system(cmd)
To answer a few questions: I am running the cron job as root, s3cmd is configured for the root user, the OS is Ubuntu 12.04, the Python version is 2.7, and all of the necessary directories have read/write permissions.
What am I missing?
First, check the variable name: you define foldername, but the command uses folderName with a capital N.
Second, the command string needs plus (+) signs for concatenation, like 's3cmd sync '+dDir+foldername+...
So the command should look like this:
cmd = 's3cmd sync '+dDir+foldername+'/ s3://'+s3bucket+'/'
os.system(cmd)
I also found some more s3cmd sync help here: http://tecadmin.net/s3cmd-file-sync-with-s3bucket/
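If the corrected command still fails only under cron, a common culprit is cron's minimal PATH. A hedged variant (my suggestion, not part of the answer above; the /usr/local/bin/s3cmd path is an assumption, verify it with which s3cmd), reusing the question's dDir, foldername and s3bucket variables:
import subprocess

# Passing a list avoids shell-quoting surprises; the absolute path matters
# under cron, whose PATH often lacks the directory holding s3cmd.
subprocess.call(['/usr/local/bin/s3cmd', 'sync',
                 dDir + foldername + '/',
                 's3://' + s3bucket + '/'])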
OpenShift provides these default environment variables:
# $_ENV['OPENSHIFT_INTERNAL_IP'] - IP Address assigned to the application
# $_ENV['OPENSHIFT_GEAR_NAME'] - Application name
# $_ENV['OPENSHIFT_GEAR_DIR'] - Application dir
# $_ENV['OPENSHIFT_DATA_DIR'] - For persistent storage (between pushes)
# $_ENV['OPENSHIFT_TMP_DIR'] - Temp storage (unmodified files deleted after 10 days)
How do I reference them in a Python script?
An example script that creates a log file in the log directory and a log in the data directory:
from time import strftime

now = strftime("%Y-%m-%d %H:%M:%S")
fn = "${OPENSHIFT_LOG_DIR}/test.log"
fn2 = "${OPENSHIFT_DATA_DIR}/test.log"
#fn = "test.txt"
input = "appended text " + now + " \n"
with open(fn, "ab") as f:
    f.write(input)
with open(fn2, "ab") as f:
    f.write(input)
Can these scripts be used with cron?
EDIT: the bash file:
#! /bin/bash
#date >> ${OPENSHIFT_LOG_DIR}/new.log
source $OPENSHIFT_HOMEDIR/python-2.6/virtenv/bin/activate
python file.py
date >> ${OPENSHIFT_DATA_DIR}/new2data.log
import os
os.getenv("OPENSHIFT_INTERNAL_IP")
should work.
So with your example, modify it to:
import os
OPENSHIFT_LOG_DIR = os.getenv("OPENSHIFT_LOG_DIR")
fn = os.path.join(OPENSHIFT_LOG_DIR, "test.log")
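Applied to both files from the question, the script becomes (a sketch reusing the question's names):
import os
from time import strftime

now = strftime("%Y-%m-%d %H:%M:%S")
fn = os.path.join(os.getenv("OPENSHIFT_LOG_DIR"), "test.log")
fn2 = os.path.join(os.getenv("OPENSHIFT_DATA_DIR"), "test.log")
line = "appended text " + now + " \n"
with open(fn, "a") as f:
    f.write(line)
with open(fn2, "a") as f:
    f.write(line)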
And, yes, you can call this Python script with a cron job by referencing your bash script if you want. Like this, for example:
#!/bin/bash
date >> ${OPENSHIFT_LOG_DIR}/status.log
chmod +x status
cd ${OPENSHIFT_REPO_DIR}/wsgi/crawler
nohup python file.py 2>&1 &
Those OPENSHIFT_* variables are provided as environment variables on OpenShift, so the $_ENV["OPENSHIFT_LOG_DIR"] above is an example of getting the value inside a PHP script.
In Python, the equivalent would just be os.getenv("OPENSHIFT_LOG_DIR").
Made edits to Calvin's post above and submitted them.
Re: the question of where file.py exists: use os.getenv("OPENSHIFT_REPO_DIR") as the base directory where all your code is located on the gear where your app is running.
So if your file is located in .openshift/misc/file.py -- then just use:
os.path.join(os.getenv("OPENSHIFT_REPO_DIR"), ".openshift", "misc", "file.py")
to get the full path.
Or in bash, the equivalent would be:
$OPENSHIFT_REPO_DIR/.openshift/misc/file.py
HTH