How to read a bash array into python [duplicate] - python

I can not export an array from a bash script to another bash script like this:
export myArray[0]="Hello"
export myArray[1]="World"
When I write it like this there is no problem:
export myArray=("Hello" "World")
For several reasons I need to initialize my array over multiple lines. Do you have any solution?

Array variables may not (yet) be exported.
From the manpage of bash version 4.1.5 under ubuntu 10.04.
The following statement from Chet Ramey (current bash maintainer as of 2011) is probably the most official documentation about this "bug":
There isn't really a good way to encode an array variable into the environment.
http://www.mail-archive.com/bug-bash@gnu.org/msg01774.html

TL;DR: exportable arrays are not directly supported up to and including bash-5.1, but you can (effectively) export arrays in one of two ways:
a simple modification to the way the child scripts are invoked
use an exported function to store the array initialisation, with a simple modification to the child scripts
Or, you can wait until bash-4.3 is released (in development/RC state as of February 2014, see ARRAY_EXPORT in the Changelog). Update: This feature is not enabled in 4.3. If you define ARRAY_EXPORT when building, the build will fail. The author has stated it is not planned to complete this feature.
The first thing to understand is that the bash environment (more properly command execution environment) is different to the POSIX concept of an environment. The POSIX environment is a collection of un-typed name=value pairs, and can be passed from a process to its children in various ways (effectively a limited form of IPC).
The bash execution environment is effectively a superset of this, with typed variables, read-only and exportable flags, arrays, functions and more. This partly explains why the output of set (bash builtin) and env or printenv differ.
When you invoke another bash shell you're starting a new process, so you lose some bash state. However, if you dot-source a script, the script is run in the same environment; or if you run a subshell via ( ) the environment is also preserved (because bash forks, preserving its complete state, rather than reinitialising using the process environment).
The limitation referenced in @lesmana's answer arises because the POSIX environment is simply name=value pairs with no extra meaning, so there's no agreed way to encode or format typed variables; see below for an interesting bash quirk regarding functions, and the change once planned for bash-4.3 (the proposed array feature was abandoned).
There are a couple of simple ways to do this using declare -p (built-in) to output some of the bash environment as a set of one or more declare statements which can be used to reconstruct the type and value of a "name". This is basic serialisation, but with rather less of the complexity some of the other answers imply. declare -p preserves array indexes, sparse arrays and quoting of troublesome values. For simple serialisation of an array you could just dump the values line by line, and use read -a myarray to restore it (works with contiguous 0-indexed arrays, since read -a automatically assigns indexes); a sketch of this appears after the declare -p examples below.
These methods do not require any modification of the script(s) you are passing the arrays to.
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
bash -c ". .bash_arrays; . otherscript.sh" # source both in the same environment
Variations on the above bash -c "..." form are sometimes (mis-)used in crontabs to set variables.
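The simpler line-by-line serialisation mentioned earlier might look like the following sketch; mapfile (bash 4+) is used here in place of read -a, since read -a only consumes a single line (this assumes a contiguous, 0-indexed array whose values contain no newlines, and the file name is illustrative):
printf '%s\n' "${array1[@]}" > .array1_lines # serialise, one element per line
bash -c 'mapfile -t array1 < .array1_lines; . otherscript.sh' # restore, then source the child script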
Alternatives include:
declare -p array1 array2 > .bash_arrays # serialise to an intermediate file
BASH_ENV=.bash_arrays otherscript.sh # non-interactive startup script
Or, as a one-liner:
BASH_ENV=<(declare -p array1 array2) otherscript.sh
The last one uses process substitution to pass the output of the declare command as an rc script. (This method only works in bash-4.0 or later: earlier versions unconditionally fstat() rc files and use the size returned to read() the file in one go; a FIFO returns a size of 0, and so won't work as hoped.)
In a non-interactive shell (i.e. shell script) the file pointed to by the BASH_ENV variable is automatically sourced. You must make sure bash is correctly invoked, possibly using a shebang to invoke "bash" explicitly, and not #!/bin/sh as bash will not honour BASH_ENV when in historical/POSIX mode.
If all your array names happen to have a common prefix you can use declare -p ${!myprefix*} to expand a list of them, instead of enumerating them.
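For example (the array names are purely illustrative, and this assumes no other cfg_* variables exist):
cfg_hosts=(alpha beta)
cfg_ports=(80 443)
declare -p ${!cfg_*} > .bash_arrays # expands to: declare -p cfg_hosts cfg_ports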
You probably should not attempt to export and re-import the entire bash environment using this method; some special bash variables and arrays are read-only, and there can be other side-effects when modifying special variables.
(You could also do something slightly disagreeable by serialising the array definition to an exportable variable, and using eval, but let's not encourage the use of eval ...
$ array=([1]=a [10]="b c")
$ export scalar_array=$(declare -p array)
$ bash # start a new shell
$ eval $scalar_array
$ declare -p array
declare -a array='([1]="a" [10]="b c")'
)
As referenced above, there's an interesting quirk: special support for exporting functions through the environment:
function myfoo() {
echo foo
}
with export -f or set +a to enable this behaviour, will result in this in the (process) environment, visible with printenv:
myfoo=() { echo foo
}
The variable is functionname (or functionname() for backward compatibility) and its value is () { functionbody }.
When a subsequent bash process starts it will recreate a function from each such environment variable. If you peek into the bash-4.2 source file variables.c you'll see variables starting with () { are handled specially. (Though creating a function using this syntax with declare -f is forbidden.) Update: The "shellshock" security issue is related to this feature, contemporary systems may disable automatic function import from the environment as a mitigation.
If you keep reading though, you'll see an #if 0 (or #if ARRAY_EXPORT) guarding code that checks variables starting with ([ and ending with ), and a comment stating "Array variables may not yet be exported". The good news is that in the current development version bash-4.3rc2 the ability to export indexed arrays (not associative) is enabled. This feature is not likely to be enabled, as noted above.
We can use this to create a function which restores any array data required:
% function sharearray() {
array1=(a b c d)
}
% export -f sharearray
% bash -c 'sharearray; echo ${array1[*]}'
So, similar to the previous approach, invoke the child script with:
bash -c "sharearray; . otherscript.sh"
Or, you can conditionally invoke the sharearray function in the child script by adding at some appropriate point:
declare -F sharearray >/dev/null && sharearray
Note there is no declare -a in the sharearray function; if you do that the array is implicitly local to the function, which is not what is wanted. bash-4.2 supports declare -g, which makes a variable declared in a function global, so declare -ga can be used instead. (Since associative arrays require declare -A you won't be able to use this method for global associative arrays prior to bash-4.2; from v4.2 declare -Ag will work as hoped.) The GNU parallel documentation has a useful variation on this method; see the discussion of --env in the man page.
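A minimal sketch of that declare -g variant (bash 4.2+; all names are illustrative):
function sharearray() {
    declare -ga array1=(a b c d)     # -g keeps the array global even though we are inside a function
    declare -gA assoc1=([key]=value) # associative arrays need -A, hence -gA from 4.2 onwards
}
export -f sharearray
bash -c 'sharearray; declare -p array1 assoc1'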
Your question as phrased also indicates you may be having problems with export itself. You can export a name after you've created or modified it. "exportable" is a flag or property of a variable; for convenience, you can also set and export in a single statement. Up to bash-4.2 export expects only a name: only simple (scalar) variable and function names are supported.
Even if you could (in future) export arrays, exporting selected indexes (a slice) may not be supported (though since arrays are sparse there's no reason it could not be allowed). Though bash also supports the syntax declare -a name[0], the subscript is ignored, and "name" is simply a normal indexed array.

Jeez. I don't know why the other answers made this so complicated. Bash has nearly built-in support for this.
In the exporting script:
myArray=( ' foo"bar ' $'\n''\nbaz)' ) # an array with two nasty elements
myArray="${myArray[#]#Q}" ./importing_script.sh
(Note, the double quotes are necessary for correct handling of whitespace within array elements.)
Upon entry to importing_script.sh, the value of the myArray environment variable comprises these exact 26 bytes:
' foo"bar ' $'\n\\nbaz)'
Then the following will reconstitute the array:
eval "myArray=( ${myArray} )"
CAUTION! Do not eval like this if you cannot trust the source of the myArray environment variable. This trick exhibits the "Little Bobby Tables" vulnerability. Imagine if someone were to set the value of myArray to ) ; rm -rf / #.

The environment is just a collection of key-value pairs, both of which are character strings. A proper solution that works for any kind of array could either
Save each element in a different variable (e.g. MY_ARRAY_0=${myArray[0]}). Gets complicated because of the dynamic variable names.
Save the array in the file system (declare -p myArray >file).
Serialize all array elements into a single string.
These are covered in the other posts. If you know that your values never contain a certain character (for example |) and your keys are consecutive integers, you can simply save the array as a delimited list:
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
And restore it in the child process:
IFS='|'; myArray=($MY_ARRAY); unset IFS
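A quick end-to-end sketch of this approach (values must not contain the | delimiter, and because the restore uses an unquoted expansion, glob characters in values are risky too):
myArray=("Hello" "World" "multi word")
export MY_ARRAY=$(IFS='|'; echo "${myArray[*]}")
bash -c 'IFS="|"; myArray=($MY_ARRAY); unset IFS; printf "<%s>\n" "${myArray[@]}"'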

Based on @mr.spuratic's use of BASH_ENV, here I tunnel $@ through script -f -c
script -c <command> <logfile> can be used to run a command inside another pty (and process group) but it cannot pass any structured arguments to <command>.
Instead <command> is a simple string to be an argument to the system library call.
I need to tunnel $@ of the outer bash into $@ of the bash invoked by script.
As declare -p cannot take $@, here I use the magic bash variable _ (with a dummy first array value as that will get overwritten by bash). This saves me trampling on any important variables:
Proof of concept:
BASH_ENV=<( declare -a _=("" "$@") && declare -p _ ) bash -c 'set -- "${_[@]:1}" && echo "$@"'
"But," you say, "you are passing arguments to bash -- and indeed I am, but these are a simple string of known character. Here is use by script
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
which gives me this wrapper function in_pty:
in_pty() {
SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$@") && declare -p _ && echo 'set -- "${_[@]:1}"') script -f -c 'echo "$@"' /tmp/logfile
}
or this function-less wrapper as a composable string for Makefiles:
in_pty=bash -c 'SHELL=/bin/bash BASH_ENV=<( declare -a _=("" "$$@") && declare -p _ && echo '"'"'set -- "$${_[@]:1}"'"'"') script -qfc '"'"'"$$@"'"'"' /tmp/logfile' --
...
$(in_pty) test --verbose $@ $^

I was editing a different post and made a mistake. Augh. Anyway, perhaps this might help?
https://stackoverflow.com/a/11944320/1594168
Note that because the shell's array format is undocumented on bash or any other shell's side,
it is very difficult to return a shell array in a platform-independent way.
You would have to check the version, and also craft a simple script that concatenates all
shell arrays into a file that other processes can read back.
However, if you know the name of the array you want to take back home then there is a way, though it is a bit dirty.
Let's say I have
MyAry[42]="whatever-stuff";
MyAry[55]="foo";
MyAry[99]="bar";
So I want to take it home
name_of_child=MyAry
take_me_home="`declare -p ${name_of_child}`";
export take_me_home="${take_me_home/#declare -a ${name_of_child}=/}"
We can see it being exported by checking from a sub-process:
echo ""|awk '{print "from awk =["ENVIRON["take_me_home"]"]"; }'
Result :
from awk =['([42]="whatever-stuff" [55]="foo" [99]="bar")']
If we absolutely must, use the env var to dump it.
env > some_tmp_file
Then, before running the other script,
# This is the magic that does it all
source some_tmp_file

As lesmana reported, you cannot export arrays. So you have to serialize them before passing through the environment. This serialization is useful in other places too, where only a string fits (su -c 'string', ssh host 'string'). The shortest way to do this in code is to abuse getopt:
# preserve_array(arguments). return in _RET a string that can be expanded
# later to recreate positional arguments. They can be restored with:
# eval set -- "$_RET"
preserve_array() {
_RET=$(getopt --shell sh --options "" -- -- "$@") && _RET=${_RET# --}
}
# restore_array(name, payload)
restore_array() {
local name="$1" payload="$2"
eval set -- "$payload"
eval "unset $name && $name=("\$#")"
}
Use it like this:
foo=("1: &&& - *" "2: two" "3: %# abc" )
preserve_array "${foo[#]}"
foo_stuffed=${_RET}
restore_array newfoo "$foo_stuffed"
for elem in "${newfoo[@]}"; do echo "$elem"; done
## output:
# 1: &&& - *
# 2: two
# 3: %# abc
This does not address unset/sparse arrays.
You might be able to reduce the 2 'eval' calls in restore_array.
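A hedged sketch of pushing the preserved string to a child process through the environment (this assumes the two functions above have been saved in a file, here called preserve.sh, so the child can source them as well):
. ./preserve.sh
foo=("1: &&& - *" "2: two" "3: %# abc")
preserve_array "${foo[@]}"
export FOO_STUFFED="$_RET"
bash -c '. ./preserve.sh; restore_array newfoo "$FOO_STUFFED"; printf "%s\n" "${newfoo[@]}"'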

Although this question and its answers are pretty old, this post seems to be the top hit when searching for "bash serialize array".
And, although the original question wasn't quite related to serializing/deserializing arrays, it does seem that the answers have devolved in that direction.
So with that ... I offer my solution:
Pros
All Core Bash Concepts
No Evals
No Sub-Commands
Cons
Functions take variable names as arguments (vs actual values)
Serializing requires having at least one character that is not present in the array
serialize_array.bash
# shellcheck shell=bash
##
# serialize_array
# Serializes a bash array to a string, with a configurable separator.
#
# $1 = source varname ( contains array to be serialized )
# $2 = target varname ( will contain the serialized string )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_array=( one "two three" four )
# serialize_array my_array my_string '|'
# declare -p my_string
#
# result:
#
# declare -- my_string="one|two three|four"
#
function serialize_array() {
declare -n _array="${1}" _str="${2}" # _array, _str => local reference vars
local IFS="${3:-$'\x01'}"
# shellcheck disable=SC2034 # Reference vars assumed used by caller
_str="${_array[*]}" # * => join on IFS
}
##
# deserialize_array
# Deserializes a string into a bash array, with a configurable separator.
#
# $1 = source varname ( contains string to be deserialized )
# $2 = target varname ( will contain the deserialized array )
# $3 = separator ( optional, defaults to $'\x01' )
#
# example:
#
# my_string="one|two three|four"
# deserialize_array my_string my_array '|'
# declare -p my_array
#
# result:
#
# declare -a my_array=([0]="one" [1]="two three" [2]="four")
#
function deserialize_array() {
IFS="${3:-$'\x01'}" read -r -a "${2}" <<<"${!1}" # -a => split on IFS
}
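To tie this back to the original question (getting an array across to a child process), a hypothetical round trip through the environment could look like this (bash 4.3+ for declare -n; serialize_array.bash is the file above):
source serialize_array.bash
my_array=( one "two three" four )
serialize_array my_array MY_ARRAY_STR '|'
export MY_ARRAY_STR
bash -c 'source serialize_array.bash; deserialize_array MY_ARRAY_STR my_array "|"; declare -p my_array'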
NOTE: This is hosted as a gist here:
https://gist.github.com/TekWizely/c0259f25e18f2368c4a577495cd566cd
[edits]
Logic simplified after running through shellcheck + shfmt.
Added URL for hosted GIST

You (hi!) can use this; there is no need to write a file. Tested on ubuntu 12.04, bash 4.2.24.
Also, your multi-line array can be exported.
cat >>exportArray.sh
function FUNCarrayRestore() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# if set, recover its value to array
if eval '[[ -n ${'$l_exportedArrayName'+dummy} ]]'; then
eval $l_arrayName'='`eval 'echo $'$l_exportedArrayName` #do not put export here!
fi
}
export -f FUNCarrayRestore
function FUNCarrayFakeExport() {
local l_arrayName=$1
local l_exportedArrayName=${l_arrayName}_exportedArray
# prepare to be shown with export -p
eval 'export '$l_arrayName
# collect exportable array in string mode
local l_export=`export -p \
|grep "^declare -ax $l_arrayName=" \
|sed 's"^declare -ax '$l_arrayName'"export '$l_exportedArrayName'"'`
# creates exportable non array variable (at child shell)
eval "$l_export"
}
export -f FUNCarrayFakeExport
Test this example in a terminal bash session (works with bash 4.2.24):
source exportArray.sh
list=(a b c)
FUNCarrayFakeExport list
bash
echo ${list[@]} #empty :(
FUNCarrayRestore list
echo ${list[@]} #profit! :D
I may improve it here
PS.: if someone clears/improve/makeItRunFaster I would like to know/see, thx! :D

For arrays with values without spaces, I've been using a simple set of functions to iterate through each array element and concatenate the array:
_arrayToStr(){
array=($@)
arrayString=""
for (( i=0; i<${#array[@]}; i++ )); do
if [[ $i == 0 ]]; then
arrayString="\"${array[i]}\""
else
arrayString="${arrayString} \"${array[i]}\""
fi
done
export arrayString="(${arrayString})"
}
_strToArray(){
str=$1
array=${str//\"/}
array=(${array//[()]/""})
export array=${array[@]}
}
The first function will turn the array into a string by adding the opening and closing parentheses and escaping all of the double quotation marks. The second function will strip the quotation marks and the parentheses and place the elements into a dummy array.
In order to export the array, you would pass in all the elements of the original array:
array=(foo bar)
_arrayToStr ${array[@]}
At this point, the array has been exported into the value $arrayString. To import the array in the destination file, rename the array and do the opposite conversion:
_strToArray "$arrayName"
newArray=(${array[@]})

Much thanks to @stéphane-chazelas who pointed out all the problems with my previous attempts; this now seems to work to serialise an array to stdout or into a variable.
This technique does not shell-parse the input (unlike declare -a/declare -p) and so is safe against malicious insertion of metacharacters in the serialised text.
Note: newlines are not escaped, because read deletes the \<newline> character pair, so -d ... must instead be passed to read, and then unescaped newlines are preserved.
All this is managed in the unserialise function.
Two magic characters are used, the field separator and the record separator (so that multiple arrays can be serialized to the same stream).
These characters can be defined as FS and RS but neither can be defined as newline character because an escaped newline is deleted by read.
The escape character must be \ the backslash, as that is what is used by read to avoid the character being recognized as an IFS character.
serialise will serialise "$@" to stdout, serialise_to will serialise to the variable named in $1
serialise() {
set -- "${#//\\/\\\\}" # \
set -- "${#//${FS:-;}/\\${FS:-;}}" # ; - our field separator
set -- "${#//${RS:-:}/\\${RS:-:}}" # ; - our record separator
local IFS="${FS:-;}"
printf ${SERIALIZE_TARGET:+-v"$SERIALIZE_TARGET"} "%s" "$*${RS:-:}"
}
serialise_to() {
SERIALIZE_TARGET="$1" serialise "${#:2}"
}
unserialise() {
local IFS="${FS:-;}"
if test -n "$2"
then read -d "${RS:-:}" -a "$1" <<<"${*:2}"
else read -d "${RS:-:}" -a "$1"
fi
}
and unserialise with:
unserialise data # read from stdin
or
unserialise data "$serialised_data" # from args
e.g.
$ serialise "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
Now is the time;For all good men;To drink $drink;At the `party`;Party Party Party:
(without a trailing newline)
read it back:
$ serialise_to s "Now is the time" "For all good men" "To drink \$drink" "At the \`party\`" $'Party\tParty\tParty'
$ unserialise array "$s"
$ echo "${array[#]/#/$'\n'}"
Now is the time
For all good men
To drink $drink
At the `party`
Party Party Party
or
unserialise array # read from stdin
Bash's read respects the escape character \ (unless you pass the -r flag) to remove special meaning of characters such as for input field separation or line delimiting.
If you want to serialise an array instead of a mere argument list then just pass your array as the argument list:
serialise_array "${my_array[#]}"
You can use unserialise in a loop like you would read because it is just a wrapped read - but remember that the stream is not newline separated:
while unserialise array
do ...
done

I've written my own functions for this and improved the IFS-based method:
Features:
Doesn't call $(...), so it doesn't spawn another bash shell process
Serializes ? and | characters into ?00 and ?01 sequences and back, so it can be used on arrays containing these characters
Handles newline characters between serialization/deserialization like any other character
Tested in cygwin bash 3.2.48 and Linux bash 4.3.48
function tkl_declare_global()
{
eval "$1=\"\$2\"" # right argument does NOT evaluate
}
function tkl_declare_global_array()
{
local IFS=$' \t\r\n' # just in case, workaround for the bug in the "[@]:i" expression under bash versions lower than 4.1
eval "$1=(\"\${#:2}\")"
}
function tkl_serialize_array()
{
local __array_var="$1"
local __out_var="$2"
[[ -z "$__array_var" ]] && return 1
[[ -z "$__out_var" ]] && return 2
local __array_var_size
eval declare "__array_var_size=\${#$__array_var[#]}"
(( ! __array_var_size )) && { tkl_declare_global $__out_var ''; return 0; }
local __escaped_array_str=''
local __index
local __value
for (( __index=0; __index < __array_var_size; __index++ )); do
eval declare "__value=\"\${$__array_var[__index]}\""
__value="${__value//\?/?00}"
__value="${__value//|/?01}"
__escaped_array_str="$__escaped_array_str${__escaped_array_str:+|}$__value"
done
tkl_declare_global $__out_var "$__escaped_array_str"
return 0
}
function tkl_deserialize_array()
{
local __serialized_array="$1"
local __out_var="$2"
[[ -z "$__out_var" ]] && return 1
(( ! ${#__serialized_array} )) && { tkl_declare_global $__out_var ''; return 0; }
local IFS='|'
local __deserialized_array=($__serialized_array)
tkl_declare_global_array $__out_var
local __index=0
local __value
for __value in "${__deserialized_array[@]}"; do
__value="${__value//\?01/|}"
__value="${__value//\?00/?}"
tkl_declare_global $__out_var[__index] "$__value"
(( __index++ ))
done
return 0
}
Example:
a=($'1 \n 2' "3\"4'" 5 '|' '?')
tkl_serialize_array a b
tkl_deserialize_array "$b" c

I think you can try it this way (by sourcing your script after export):
export myArray=(Hello World)
. yourScript.sh

Related

Variable notation when running python commands with arguments in a bash script

I have a bash script which runs a bunch of python scripts, all with arguments. In order to have clean code, I wanted to use variables throughout the script:
#!/bin/bash
START=0
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
if [ "$START" = "0" ]; then
printf "%s: Starting\n" "$DATE_TIME"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
ARGS="-P $SCRIPT_PATH/script2.py -n 15 -p $PORT -i $IP"
python "$SCRIPT" ${ARGS} -f "${TEST_FILE}" > ./out.log 2>&1 &
fi
This code actually works, but there are a few things I don't understand:
Why, if I add quotes around ${ARGS}, are the arguments not parsed correctly by python? What would be the best way to write this?
What is the best method to add -f "${TEST_FILE}" to the ARGS variable without python blocking on the whitespace and throwing the error: "$SCRIPT_PATH/Test " not found
When you wrap quotes around an argument list, the argument vector receives a single argument containing everything that was wrapped in quotes, so the argument parser fails to do its job properly and you get your issue.
Regarding your second question, it is not easy to embed the quotes into the array, because the quotes will be parsed before being stored in the array, and then when you perform the array expansion to run the command, they will be missing and the command will fail. I have tried this several times with no success.
An alternative approach would mean that you modify a little your script to use a custom internal field separator (IFS) to manually tell what should be considered an argument and what not:
#!/bin/bash
START=0
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
if [ "$START" = "0" ]; then
printf "%s: Starting\n" "$DATE_TIME"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
OLD_IFS=$IFS
IFS=';'
ARGS="$SCRIPT;-P;$SCRIPT_PATH/script2.py;-n;15;-p;$PORT;-i;$IP;-f;$TEST_FILE"
python ${ARGS} > ./out.log 2>&1 &
IFS=$OLD_IFS
fi
As you can see, I replace the spaces in ARGS with semicolons. This way, the TEST_FILE variable contents will be considered a single argument by bash and will be properly populated in the argument vector. I'm also moving the script to the argument vector for simplicity; otherwise Python would not get the proper script path and would fail, due to the modification we made to IFS.
I was thinking something like this (with some cruft edited out to make it a standalone example):
#!/bin/bash
SCRIPT_PATH="/opt/scripts/"
IP="192.168.1.111"
PORT=1234
TEST_FILE="$SCRIPT_PATH/Test Scripts/test.TXT"
SCRIPT="$SCRIPT_PATH/script1.py"
declare -a ARGS
ARGS=(-P "$SCRIPT_PATH/script2.py" -n 15 -p "$PORT" -i "$IP")
ARGS+=(-f "${TEST_FILE}")
python3 -c "import sys; print(*enumerate(sys.argv), sep='\n')" "${ARGS[@]}"

Using ssh and sed within a python script with os.system properly

I am trying to run an ssh command within a python script using os.system to add a 0 at the end of a fully matched string in a remote server using ssh and sed.
I have a file called nodelist in a remote server that's a list that looks like this.
test-node-1
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
I want to use sed to make the following modification: search for test-node-1, and when a full match is found, add a 0 at the end. The file must end up looking like this:
test-node-1 0
test-node-2
...
test-node-11
test-node-12
test-node-13
...
test-node-21
However, when I run the first command,
hostname = 'test-node-1'
function = 'nodelist'
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/{hostname}/s/$/ 0/' ~/{function}.txt\"")
The result becomes like this,
test-node-1 0
test-node-2
...
test-node-11 0
test-node-12 0
test-node-13 0
...
test-node-21
I tried adding a \b to the command like this,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\b{hostname}\b/s/$/ 0/' ~/{function}.txt\"")
The command doesn't work at all.
I have to manually type in the node name instead of using a variable like so,
os.system(f"ssh -i ~/.ssh/my-ssh-key username#serverlocation \"sed -i '/\btest-node-1\b/s/$/ 0/' ~/{function}.txt\"")
to make my command work.
What's wrong with my command, why can't I do what I want it to do?
This code has serious security problems; fixing them requires reengineering it from scratch. Let's do that here:
#!/usr/bin/env python3
import os.path
import shlex # note, quote is only here in Python 3.x; in 2.x it was in the pipes module
import subprocess
import sys
# can set these from a loop if you choose, of course
username = "whoever"
serverlocation = "whereever"
hostname = 'test-node-1'
function = 'somename'
desired_cmd = ['sed', '-i',
f'/\\b{hostname}\\b/s/$/ 0/',
f'{function}.txt']
desired_cmd_str = ' '.join(shlex.quote(word) for word in desired_cmd)
print(f"Remote command: {desired_cmd_str}", file=sys.stderr)
# could just pass the below direct to subprocess.run, but let's log what we're doing:
ssh_cmd = ['ssh', '-i', os.path.expanduser('~/.ssh/my-ssh-key'),
f"{username}#{serverlocation}", desired_cmd_str]
ssh_cmd_str = ' '.join(shlex.quote(word) for word in ssh_cmd)
print(f"Local command: {ssh_cmd_str}", file=sys.stderr) # log equivalent shell command
subprocess.run(ssh_cmd) # but locally, run without a shell
If you run this (except for the subprocess.run at the end, which would require a real SSH key, hostname, etc), output looks like:
Remote command: sed -i '/\btest-node-1\b/s/$/ 0/' somename.txt
Local command: ssh -i /home/yourname/.ssh/my-ssh-key whoever@whereever 'sed -i '"'"'/\btest-node-1\b/s/$/ 0/'"'"' somename.txt'
That's correct/desired output; the funny '"'"' idiom is how one safely injects a literal single quote inside a single-quoted string in a POSIX-compliant shell.
What's different? Lots:
We're generating the commands we want to run as arrays, and letting Python do the work of converting those arrays to strings where necessary. This avoids shell injection attacks, a very common class of security vulnerability.
Because we're generating lists ourselves, we can change how we quote each one: We can use f-strings when it's appropriate to do so, raw strings when it's appropriate, etc.
We aren't passing ~ to the remote server: It's redundant and unnecessary because ~ is the default place for a SSH session to start; and the security precautions we're using (to prevent values from being parsed as code by a shell) prevent it from having any effect (as the replacement of ~ with the active value of HOME is not done by sed itself, but by the shell that invokes it; because we aren't invoking any local shell at all, we also needed to use os.path.expanduser to cause the ~ in ~/.ssh/my-ssh-key to be honored).
Because we aren't using a raw string, we need to double the backslashes in \b to ensure that they're treated as literal rather than syntactic by Python.
Critically, we're never passing data in a context where it could be parsed as code by any shell, either local or remote.

Pass variable from Python to Bash

I am writing a bash script in which a small python script is embedded. I want to pass a variable from python to bash. After some searching, I only found methods based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second, however it still outputs first. What is wrong with my script? Also, is there any way to pass the variable without export?
summary
Thanks for all answers. Here is my summary.
A python script embedded inside bash will run as a child process, which by definition cannot affect the parent bash environment.
The solution is to pass assignment strings out from python and eval them subsequently in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print 'second'") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ will set it only for the shell that is running to execute the command you provided. When that command finishes, the shell goes away, and so does the environment variable.
You can pretty much do that with a shell script itself and just source it to reflect it on the current shell.
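A minimal sketch of that idea (the path /tmp/vars.sh is only illustrative):
#!/bin/bash
python - <<'EOF'
with open("/tmp/vars.sh", "w") as f:
    f.write("myvar=second\n")
EOF
source /tmp/vars.sh
echo "$myvar" # prints: second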
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
You can make python return a value and pass it to bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be running the other bash commands in python using subprocess:
import subprocess
x =400
subprocess.call(["echo", str(x)])
But this is more of a temporary work around. The other solutions are more along what you are looking for.
Hope I was able to help!

Sed with variables isn't working exactly how I want it to work

('cd /etc/squid/ && new_val=9 && old_val=3 sed -i "s/$old_val/$new_val/g" *.conf')
this gives me an error
ExecutionError: sed: -e expression #1, char 0: no previous regular expression
I am not sure what the issue is.
The above is being used in a Python script.
Change it to:
('cd /etc/squid/ && new_val=9 && old_val=3 && sed -i "s/$old_val/$new_val/g" *.conf')
The variable assignment needs to be a separate statement from sed. When you put a variable assignment at the beginning of a statement, it only sets an environment variable that gets inherited by the child process. But you need the variable to be expanded by the original shell, so you need to set the variable before executing the sed command.
Your problem is that:
old_val=3 sed "s/$old_val/$new_val/g"
is relying on the shell to expand the variables, not sed. But setting variables via command prefix only affects the environment of the command, not bash, so old_val is never defined for the purposes of string interpolation. Per the bash reference manual (emphasis added):
The environment for any simple command or function may be augmented temporarily by prefixing it with parameter assignments, as described in Shell Parameters. These assignment statements affect only the environment seen by that command.
So if sed tried to read old_val from its own environment it would see the correct value. But what sed is receiving is the post-interpolation string passed, which is s//9/g, because bash interpolation doesn't see old_val (that exists only for sed).
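The difference is easy to see in isolation (a throwaway demo, assuming foo is not already set in the shell):
foo=bar sh -c 'echo "$foo"' # prints "bar": the child process sees foo in its environment
foo=bar echo "$foo"         # prints an empty line: bash expands "$foo" itself, and the prefix assignment only lands in echo's environment, not the shell's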
To fix, set the variable in bash by performing the assignment as a separate command, not a sed prefix:
('cd /etc/squid/ && new_val=9 && old_val=3 && sed -i "s/$old_val/$new_val/g" *.conf')
Or more correctly, you should really avoid relying on shell=True (it's dangerous/easy to misuse). Even if you must use sed, all the stuff you were using the shell for can be done at the Python layer:
import os
import subprocess
# Get the (unqualified) names of all the entries with the desired name
files = [f for f in os.listdir('/etc/squid') if f.endswith('.conf')]
# Run w/o shell=True, in list form, letting Python handle the working directory
# and variable formatting
subprocess.Popen(['sed', '-i', 's/{}/{}/g'.format(old_val, new_val)] + files, cwd='/etc/squid')
This has the same behavior (operates in /etc/squid, and passes the unqualified file names so you won't have new command line length issues if there are a lot of files in a deeply nested directory).
Of course, you could go even further, and just use the fileinput module to do the work of sed in Python too; it features editing files in place just like sed (though it will likely be slightly slower if the files are of meaningful size).
Actually I found something that worked: putting each of the variables in single quotes and escaping the single quotes.

How to get the current Linux process ID from the command line in a shell-agnostic, language-agnostic way

How does one get their current process ID (pid) from the Linux command line in a shell-agnostic, language-agnostic way?
pidof(8) appears to have no option to get the calling process' pid. Bash, of course, has $$ - but for my generic usage, I can't rely on a shell (Bash or otherwise). And in some cases, I can't write a script or compilable program, so Bash / Python / C / C++ (etc.) will not work.
Here's a specific use case: I want to get the pid of the running, Python-Fabric-based, remote SSH process (where one may want to avoid assuming bash is running), so that among other things I can copy and/or create files and/or directories with unique filenames (as in mkdir /tmp/mydir.$$).
If we can solve the Fabric-specific problem, that's helpful - but it doesn't solve my long-term problem. For general-purpose usage in all future scenarios, I just want a command that returns what $$ delivers in Bash.
From python:
$ python
>>> import os
>>> os.getpid()
12252
$$ isn't bash-specific -- I believe that it's available in all POSIX-compliant shells, which amounts to pretty much every shell that isn't deliberately being weird.
Hope this is portable enough, it relies on the PPID being the fourth field of /proc/[pid]/stat:
cut -d ' ' -f 4 /proc/self/stat
It assumes a Linux with the right shape of /proc, that the layout of /proc/[pid]/stat won't be incompatibly different from whatever Debian 6.0.1 has, that cut is a separate executable and not a shell builtin, and that cut doesn't spawn subprocesses.
As an alternative, you can get field 6 instead of field 4 to get the PID of the "session leader". Interactive shells apparently set themselves to be session leaders, and this id should remain the same across pipes and subshell invocations:
$ echo $(echo $( cut -f 6 -d ' ' /proc/self/stat ) )
23755
$ echo $(echo $( cut -f 4 -d ' ' /proc/self/stat ) )
24027
$ echo $$
23755
That said, this introduces a dependency on the behaviour of the running shell - it has to set the session id only when it's the one whose PID you actually want. Obviously, this also won't work in scripts if you want the PID of the shell executing the script, and not the interactive one.
Great answers + comments here and here. Thx all. Combining both into one answer, providing two options with tradeoffs in POSIX-shell-required vs no-POSIX-shell-required contexts:
POSIX shell available: use $$
General cmdline: employ cut -d ' ' -f 4 /proc/self/stat
Example session with both methods (along with other proposed, non-working methods) shown here.
(Not sure how pertinent/useful it is to be so concerned with being shell-independent, but I have simply experienced the "run system call without a shell" constraint so many times that I now seek shell-independent options whenever possible.)
Fewer characters and guaranteed to work:
sh -c 'echo $PPID'
If you have access to the proc filesystem, then /proc/self is a symlink to the current /proc/$pid. You could read the pid out of, for instance, the first column of /proc/self/stat.
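To make the field numbers concrete (note which process each field describes):
cut -d ' ' -f 1 /proc/self/stat # field 1: the pid of the process reading the file, i.e. cut itself
cut -d ' ' -f 4 /proc/self/stat # field 4: its parent, i.e. the shell or program that invoked cut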
If you are in python, you could use os.getpid().
