I have a program I am launching from a Jenkins server.
I created an exe using PyInstaller and installed it on two computers, and then Jenkins calls them both.
At first I used os.path.expanduser('~') to get the user path, but it returned:
"C:\Windows\System32\config\systemprofile"
Then I tried to get the user profile path using os.environ['USERPROFILE'], but still got:
"C:\Windows\System32\config\systemprofile"
Lastly, I found a different solution that didn't require the os module and tried:
import ctypes.wintypes
CSIDL_PERSONAL = 5 # My Documents
SHGFP_TYPE_CURRENT = 0 # Get current, not default value
buf = ctypes.create_unicode_buffer(ctypes.wintypes.MAX_PATH)
ctypes.windll.shell32.SHGetFolderPathW(None, CSIDL_PERSONAL, None,
SHGFP_TYPE_CURRENT, buf)
print(buf.value)
That gave no value back in my log.
If I run the exe locally, all the values come back normally.
The batch command I run is:
start /wait C:\JenkinsResources\MarinaMain.exe
So I'm drawing a blank as to how I can get this program to find the user folder on the computer it is running on when I call it remotely.
So to get this to work, I first downloaded the Windows PowerShell plugin.
After the restart I used the command:
# Win32_ComputerSystem reports the interactively logged-on user, even when this runs as SYSTEM
$PCAndUserName = Get-WMIObject -class Win32_ComputerSystem | select username
# stringifying the object yields something like "@{username=DOMAIN\user}"
$UserName = [string]$PCAndUserName
# keep only the part after the backslash...
$UserName = $UserName.split("\\")[-1]
# ...and strip the trailing "}"
$UserName = $UserName -replace ".{1}$"
$UserName
That gave me the name of the current user on the computer, regardless of whether I launch it locally or remotely with Jenkins.
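If the Python side of the exe needs the same value, here is a minimal sketch (assuming powershell.exe is on the PATH of the target machine and that an interactive user is actually logged on) that runs the same WMI query through subprocess:

import subprocess

# Ask WMI for the interactively logged-on user; Win32_ComputerSystem reports
# this even when the calling process itself runs under the SYSTEM account.
ps_query = "(Get-WmiObject -Class Win32_ComputerSystem).UserName"
raw = subprocess.check_output(
    ["powershell.exe", "-NoProfile", "-Command", ps_query],
    universal_newlines=True,
).strip()

# raw is typically "DOMAIN\user"; it may be empty if nobody is logged on.
username = raw.split("\\")[-1]
print(username)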
I am running several Python scripts via Rundeck (in-line) on a Windows 2012 target node. These scripts used to run locally, but we are now in the process of moving them to Rundeck.
One of the Python scripts opens a subprocess to invoke a PowerShell script and reads the output.
import subprocess
CMD = [r'C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe', '-File', r'C:\Users\Osman\Code\mop.ps1']
cmd = CMD[:]
cmd.append('arg1')
cmd.append('arg2')
cmd.append('arg3')
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
r = p.communicate()
print(r)
The mop.ps1 script is:
Import-Module MSOnline
$domain = $args[0]
$login = $args[1]
$pass = $args[2]
$EPass = $pass | ConvertTo-SecureString -AsPlainText -Force
$Cred = New-Object System.Management.Automation.PsCredential($login, $EPass)
Connect-MsolService -Credential $Cred
$TenantId = Get-MsolPartnerContract -Domain $domain | Select-Object -ExpandProperty TenantId
Get-MsolAccountSKU -TenantId $TenantId | select SkuPartNumber,ActiveUnits,ConsumedUnits | ConvertTo-Csv -NoTypeInformation
This part of the code always fails to execute, and if I check stderr it says:
Connect-MsolService : Exception of type 'Microsoft.Online.Administration.Automation.MicrosoftOnlineException' was thrown.
At C:\Users\Osman\Code\mop.ps1:7 char:1
+ Connect-MsolService -Credential $Cred
I'm not sure why it fails. I tried
Import-Module MSOnline -Verbose
and I can see the cmdlets being loaded. I also tried creating a profile.ps1 file in the C:\WINDOWS\system32\WindowsPowerShell\v1.0\ location.
Everything works fine if I execute the code locally. I tried running a regular test .ps1 file, 'disk.ps1', instead of my code, and that works fine because it doesn't load any modules:
get-WmiObject win32_logicaldisk -Computername $env:computername
What's the workaround to get a script that loads a module to run properly? stdout is always empty.
The node is registered as 64-bit, so I tried changing the cmd to
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
I tried copying profile.ps1 there and copied the module there, but it still doesn't work via Rundeck.
From your description, since you are able to get valid output when running the script directly on the server, your error could be related to the "second hop" involved in logging in to MS Online. Currently, the Rundeck Python-WinRM plugin supports Basic, NTLM, or CredSSP authentication, and CredSSP authentication allows you to perform the second hop successfully.
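The Rundeck Python-WinRM plugin builds on the pywinrm library, so purely as an illustration (the host, user, and password below are placeholders, and the credssp transport additionally needs the requests-credssp package), switching from NTLM to CredSSP looks roughly like this in pywinrm:

import winrm

# CredSSP delegates the credentials to the target node, so the remote PowerShell
# session can make the second hop to Connect-MsolService.
session = winrm.Session(
    'https://winnode.example.com:5986/wsman',  # placeholder target node
    auth=('DOMAIN\\osman', 'password'),        # placeholder credentials
    transport='credssp',                       # instead of 'ntlm' or 'basic'
)
result = session.run_ps(r'& "C:\Users\Osman\Code\mop.ps1" arg1 arg2 arg3')
print(result.std_out.decode())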
I'm executing a Python script as root with pkexec, and I'm using working_dir = os.getenv('HOME') to get the user, but it always returns root instead of test1, which is the current user.
How can I get the user that ran pkexec instead?
The script is located in /usr/bin, if that information is of any use.
The sudo man page describes the environment variables that hold information about the user who invoked it:
import os
print os.environ["SUDO_USER"]
Thanks to that other guy, in the end, below is what I did to get the current user:
import os
import pwd

# pkexec preserves the invoking user's UID in the PKEXEC_UID environment variable
user = pwd.getpwuid(int(os.environ["PKEXEC_UID"])).pw_name
working_dir = '/home/' + user
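A small variation on the same idea: the passwd entry already carries the home directory, so the path does not have to be hard-coded under /home (a sketch under the same PKEXEC_UID assumption as above):

import os
import pwd

# Look up the full passwd entry once and take both the name and the
# home directory from it instead of building the path by hand.
entry = pwd.getpwuid(int(os.environ["PKEXEC_UID"]))
user = entry.pw_name
working_dir = entry.pw_dir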
For those looking for a pure bash solution, this finds the invoking user whether the script was run with sudo or pkexec:
if test -z "$SUDO_USER"
then
USER=$(getent passwd "$PKEXEC_UID" | cut -d: -f1)
else
USER=$SUDO_USER
fi
I've been trying to troubleshoot this for days now and would appreciate some help.
Basically, I wrote the following Python script:
import os, sys
# =__=__=__=__=__=__=__ START MAIN =__=__=__=__=__=__=__
if __name__ == '__main__':
# initialize variables
all_files = []
# directory to download data siphon files to
dDir = '/path/to/download/directory/'
# my S3 bucket
s3bucket = "com.mybucket/"
foldername = "test"
# get a list of available feeds
feeds = <huge JSON object with URLs to feeds>
for item in range(feeds['count']):
# ...check if the directory exists, and if not, create the directory...
if not os.path.exists(folderName):
os.makedirs(folderName)
... ... ...
# Loop through all the splits
for s in dsSplits:
... ... ...
location = requestFeedLocation(name, timestamp)
... ... ...
downloadFeed(location[0], folderName, nameNotGZ)
# THIS IS WHERE I AM HAVING PROBLEMS!!!!!!!!!!!
cmd = 's3cmd sync 'dDir+folderName+'/ s3://'+s3bucket+'/'
os.system(cmd)
Everything in my code works: when I run this straight from the command line, everything runs as expected. However, when I have it executed via cron, the following DOES NOT execute (everything else does):
# THIS IS WHERE I AM HAVING PROBLEMS!!!!!!!!!!!
cmd = 's3cmd sync 'dDir+folderName+'/ s3://'+s3bucket+'/'
os.system(cmd)
To answer a few questions: I am running the cron job as root, s3cmd is configured for the root user, the OS is Ubuntu 12.04, the Python version is 2.7, and all of the necessary directories have read/write permissions...
What am I missing?
First, check the variable name: it is defined as 'foldername', but in the command you used a capital N ('folderName').
In the command syntax you also need to add plus (+) signs to concatenate the strings, as in +dDir+foldername+.
So the command should look like this:
cmd = 's3cmd sync '+dDir+foldername+'/ s3://'+s3bucket+'/'
os.system(cmd)
Here is some more help on s3cmd sync: http://tecadmin.net/s3cmd-file-sync-with-s3bucket/
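As a side note on the design choice, building the command as a list for subprocess avoids this kind of concatenation slip and any quoting trouble with the paths; a sketch reusing the same variables as above:

import subprocess

# Each argument is its own list element, so there is no command string to mis-concatenate.
cmd = ['s3cmd', 'sync', dDir + foldername + '/', 's3://' + s3bucket + '/']
subprocess.call(cmd)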
I can't seem to figure this one out: when I do a very simple test against localhost and have Fabric execute run('history'), the resulting output on the command line is blank.
Nor does this work: run('history > history_dump.log')
Here is the complete FabFile script below; obviously I'm missing something here.
-- FabFile.py
from fabric.api import run, env, hosts, roles, parallel, cd, task, settings, execute
from fabric.operations import local,put
deploymentType = "LOCAL"
if (deploymentType == "LOCAL"):
env.roledefs = {
'initial': ['127.0.0.1'],
'webservers': ['127.0.0.1'],
'dbservers' : ['127.0.0.1']
}
env.use_ssh_config = False
# Get History
# -------------------------------------------------------------------------------------
#task
#roles('initial')
def showHistoryCommands():
print("Logging into %s and accessing the command history " % env.host_string)
run('history') #does not display anything
run('history > history_dump.log') #does not write anything out
print("Completed displaying the command history")
Any suggestions/solutions would be most welcomed.
History is a shell builtin, so it doesn't work like a normal command. I think your best bet would be to try and read the history file from the filesystem.
local('cat ~/.bash_history')
or
run('cat ~/.bash_history')
Substitute for the appropriate history file path.
To expand a bit after some research: the command succeeds when run, but for some reason Fabric neither captures nor prints its output, possibly because of the way history writes its output. Other builtin commands like env work fine, so for now I don't know exactly what is going on.
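One thing that might be worth trying (a sketch only, not verified on every setup) is forcing an interactive shell so the builtin actually has a loaded history to print, with the history file itself as the fallback:

from fabric.api import task, run

@task
def show_history():
    # 'history' is only populated in an interactive shell, so ask bash for one;
    # reading ~/.bash_history directly remains the more reliable option.
    run('bash -i -c "history"')
    run('cat ~/.bash_history')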
How would I go about getting a privilege elevation dialog to pop up in my Python app? I want the UAC dialog on Windows and the password authentication dialog on Mac.
Basically, I need root privileges for part of my application and I need to get those privileges through the GUI. I'm using wxPython. Any ideas?
On Windows you cannot get the UAC dialog without starting a new process, and you cannot even start that process with CreateProcess.
The UAC dialog can be brought about by running another application that has the appropriate manifest file - see Running compiled python (py2exe) as administrator in Vista for an example of how to do this with py2exe.
You can also programmatically use the runas verb with the Win32 API ShellExecute (http://msdn.microsoft.com/en-us/library/bb762153(v=vs.85).aspx); you can call this by using ctypes (http://python.net/crew/theller/ctypes/), which is part of the standard library on Python 2.5+, IIRC.
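For example, here is a minimal ctypes sketch of the runas approach (re-launching the current script elevated; the argument handling is simplified and assumes paths without spaces):

import sys
import ctypes

def relaunch_as_admin():
    # The 'runas' verb asks the shell to start the process elevated,
    # which is what triggers the UAC prompt.
    params = " ".join([sys.argv[0]] + sys.argv[1:])
    rc = ctypes.windll.shell32.ShellExecuteW(
        None,            # no parent window
        "runas",         # verb requesting elevation
        sys.executable,  # program to launch: the Python interpreter
        params,          # the script and its original arguments
        None,            # default working directory
        1,               # SW_SHOWNORMAL
    )
    return rc > 32       # ShellExecute returns a value greater than 32 on success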
Sorry, I don't know about Mac. If you give more detail on what you want to accomplish on Windows, I might be able to provide more specific help.
I know the post is a little old, but I wrote the following as a solution to my problem (running a Python script as root on both Linux and OS X).
I wrote the following bash script to execute bash/Python scripts with administrator privileges (works on Linux and OS X systems):
#!/bin/bash
if [ -z "$1" ]; then
echo "Specify executable"
exit 1
fi
EXE=$1
available(){
which $1 >/dev/null 2>&1
}
platform=`uname`
if [ "$platform" == "Darwin" ]; then
MESSAGE="Please run $1 as root with sudo or install osascript (should be installed by default)"
else
MESSAGE="Please run $1 as root with sudo or install gksu / kdesudo!"
fi
if [ `whoami` != "root" ]; then
if [ "$platform" == "Darwin" ]; then
# Apple
if available osascript
then
SUDO=`which osascript`
fi
else # assume Linux
# choose either gksudo or kdesudo
# if both are available check which desktop is running
if available gksudo
then
SUDO=`which gksudo`
fi
if available kdesudo
then
SUDO=`which kdesudo`
fi
if ( available gksudo && available kdesudo )
then
if [ $XDG_CURRENT_DESKTOP = "KDE" ]; then
SUDO=`which kdesudo`;
else
SUDO=`which gksudo`
fi
fi
# prefer polkit if available
if available pkexec
then
SUDO=`which pkexec`
fi
fi
if [ -z $SUDO ]; then
if available zenity; then
zenity --info --text "$MESSAGE"
exit 0
elif available notify-send; then
notify-send "$MESSAGE"
exit 0
elif available xmessage; then
xmessage -buttons Ok:0 "$MESSAGE"
exit 0
else
echo "$MESSAGE"
fi
fi
fi
if [ "$platform" == "Darwin" ]
then
$SUDO -e "do shell script \"$*\" with administrator privileges"
else
$SUDO "$@"
fi
Basically, the way I set up my system is that I keep subfolders inside the bin directories (e.g. /usr/local/bin/pyscripts in /usr/local/bin), and create symbolic links to the executables. This has three benefits for me:
(1) If I have different versions, I can easily switch which one is executed by changing the symbolic link and it keeps the bin directory cleaner (e.g. /usr/local/bin/gcc-versions/4.9/, /usr/local/bin/gcc-versions/4.8/, /usr/local/bin/gcc --> gcc-versions/4.8/gcc)
(2) I can store the scripts with their extension (helpful for syntax highlighting in IDEs), but the executables do not contain them because I like it that way (e.g. svn-tools --> pyscripts/svn-tools.py)
(3) The reason I will show below:
I name the script "run-as-root-wrapper" and place it in a very common path (e.g. /usr/local/bin) so python doesn't need anything special to locate it. Then I have the following run_command.py module:
import os
import sys
from distutils.spawn import find_executable
#===========================================================================#
def wrap_to_run_as_root(exe_install_path, true_command, expand_path = True):
run_as_root_path = find_executable("run-as-root-wrapper")
if(not run_as_root_path):
return False
else:
if(os.path.exists(exe_install_path)):
os.unlink(exe_install_path)
if(expand_path):
true_command = os.path.realpath(true_command)
true_command = os.path.abspath(true_command)
true_command = os.path.normpath(true_command)
f = open(exe_install_path, 'w')
f.write("#!/bin/bash\n\n")
f.write(run_as_root_path + " " + true_command + " $@\n\n")
f.close()
os.chmod(exe_install_path, 0755)
return True
In my actual python script, I have the following function:
def install_cmd(args):
exe_install_path = os.path.join(args.prefix,
os.path.join("bin", args.name))
if(not run_command.wrap_to_run_as_root(exe_install_path, sys.argv[0])):
os.symlink(os.path.realpath(sys.argv[0]), exe_install_path)
So if I have a script called TrackingBlocker.py (actual script I use to modify the /etc/hosts file to re-route known tracking domains to 127.0.0.1), when I call "sudo /usr/local/bin/pyscripts/TrackingBlocker.py --prefix /usr/local --name ModifyTrackingBlocker install" (arguments handled via argparse module), it installs "/usr/local/bin/ModifyTrackingBlocker", which is a bash script executing
/usr/local/bin/run-as-root-wrapper /usr/local/bin/pyscripts/TrackingBlocker.py [args]
e.g.
ModifyTrackingBlocker add tracker.ads.com
executes:
/usr/local/bin/run-as-root-wrapper /usr/local/bin/pyscripts/TrackingBlocker.py add tracker.ads.com
which then displays the authentication dialog needed to get the privileges to add:
127.0.0.1 tracker.ads.com
to my hosts file (which is only writable by a superuser).
If you want to simplify/modify it to run only certain commands as root, you could simply add this to your script (with the necessary imports noted above + import subprocess):
def run_as_root(command, args, expand_path = True):
run_as_root_path = find_executable("run-as-root-wrapper")
if(not run_as_root_path):
return 1
else:
if(expand_path):
command = os.path.realpath(command)
command = os.path.abspath(command)
command = os.path.normpath(command)
cmd = []
cmd.append(run_as_root_path)
cmd.append(command)
cmd.extend(args)
return subprocess.call(' '.join(cmd), shell=True)
Using the above (in run_command module):
>>> ret = run_command.run_as_root("/usr/local/bin/pyscripts/TrackingBlocker.py", ["status", "display"])
>>> /etc/hosts is blocking approximately 16147 domains
I'm having the same problem on Mac OS X. I have a working solution, but it's not optimal. I will explain my solution here and continue looking for a better one.
At the beginning of the program I check if I'm root or not by executing
def _elevate():
"""Elevate user permissions if needed"""
if platform.system() == 'Darwin':
try:
os.setuid(0)
except OSError:
_mac_elevate()
os.setuid(0) will fail if I'm not already root, and that will trigger _mac_elevate(), which relaunches my program from the start as administrator with the help of osascript. osascript can be used to execute AppleScript and other things. I use it like this:
def _mac_elevate():
"""Relaunch asking for root privileges."""
print "Relaunching with root permissions"
applescript = ('do shell script "./my_program" '
'with administrator privileges')
exit_code = subprocess.call(['osascript', '-e', applescript])
sys.exit(exit_code)
The problem with this is that if I use subprocess.call as above, I keep the current process running, and there will be two instances of my app running, giving two dock icons. If I use subprocess.Popen instead and let the non-privileged process die instantly, I can't make use of the exit code, nor can I fetch the stdout/stderr streams and propagate them to the terminal that started the original process.
Using osascript with "with administrator privileges" is actually just AppleScript wrapping a call to AuthorizationExecuteWithPrivileges().
But you can call AuthorizationExecuteWithPrivileges() directly from Python 3 with ctypes.
For example, the following parent script spawn_root.py (run as a non-root user) spawns a child process root_child.py (run with root privileges).
The user will be prompted to enter their password in the OS GUI pop-up. Note that this will not work in a headless session (e.g. over ssh); it must be run inside the GUI (e.g. Terminal.app).
After entering the user's password into the MacOS challenge dialog correctly, root_child.py executes a soft shutdown of the system, which requires root permission on MacOS.
Parent (spawn_root.py)
#!/usr/bin/env python3
import sys, ctypes
import ctypes.util
from ctypes import byref
sec = ctypes.cdll.LoadLibrary(ctypes.util.find_library("Security"))
kAuthorizationFlagDefaults = 0
auth = ctypes.c_void_p()
r_auth = byref(auth)
sec.AuthorizationCreate(None,None,kAuthorizationFlagDefaults,r_auth)
exe = [sys.executable,"root_child.py"]
args = (ctypes.c_char_p * len(exe))()
for i,arg in enumerate(exe[1:]):
args[i] = arg.encode('utf8')
io = ctypes.c_void_p()
sec.AuthorizationExecuteWithPrivileges(auth,exe[0].encode('utf8'),0,args,byref(io))
Child (root_child.py)
#!/usr/bin/env python3
import os
if __name__ == "__main__":
f = open( "root_child.out", "a" )
try:
os.system( "shutdown -h now" )
f.write( "SUCCESS: I am root!\n" )
except Exception as e:
f.write( "ERROR: I am not root :'(" +str(e)+ "\n" )
f.close()
Security Note
Obviously, any time you run something as root, you need to be very careful!
AuthorizationExecuteWithPrivileges() is deprecated, but it can be used safely. But it can also be used unsafely!
It basically boils down to: do you actually know what you're running as root? If the script you're running as root is located in a temp dir that has world-writable permissions (as a lot of macOS app installers have done historically), then any malicious process could gain root access.
To execute a process as root safely:
Make sure that the permissions on the process-to-be-launched are root:root 0400 (or writable only by root); see the sketch after this list
Specify the absolute path to the process-to-be-launched, and don't allow any malicious modification of that path
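A minimal sketch of that first check (the path below simply reuses the root_child.py example from above; adjust it to your layout) could verify ownership and mode before handing the file to AuthorizationExecuteWithPrivileges():

import os
import stat

def safe_to_run_as_root(path):
    """True only if the file is owned by root and not writable by group or others."""
    st = os.stat(path)
    owned_by_root = (st.st_uid == 0)
    loosely_writable = bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))
    return owned_by_root and not loosely_writable

if not safe_to_run_as_root("/usr/local/bin/root_child.py"):
    raise RuntimeError("Refusing to elevate: unsafe permissions on root_child.py")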
Sources
https://github.com/cloudmatrix/esky/blob/master/esky/sudo/sudo_osx.py
https://github.com/BusKill/buskill-app/issues/14
https://www.jamf.com/blog/detecting-insecure-application-updates-on-macos/