Simple, maybe not so simple issue. How can I run GDB with the results of a script?
What I mean is that instead of saying:
run arg1
You would say:
run "python generatingscript.py"
which would output the args. I'm aware I could work around this by making the program sleep after running it with the args from the command line, but it would be really darn convenient if there was a way to do this kind of thing directly in gdb.
Just for context, my use case is a situation where I'm writing crash test cases that look like long strings of hex data. Typing them in by hand at the command line isn't the most convenient thing.
You can use gdb with the --args option like this: gdb --args your_program $(python generatingscript.py).
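For illustration, a minimal generatingscript.py could look like this (the payload here is a made-up placeholder, not from the question):

#!/usr/bin/env python
# Hypothetical generator: emit the crash input as one long hex string.
print("41" * 100 + "deadbeef")

Since $(...) undergoes word splitting, a payload without whitespace arrives as a single argument.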
You could also use bash to write the generated data to a file and feed it to the program on standard input (rather than as arguments):
$ python generatingscript.py > testinput
then in gdb
(gdb) run < testinput
On my Ubuntu machine, this also works with a here-string ("<<<") instead of "<":
(gdb) r <<< $(python exploit.py)
I am trying to use subprocess in my python script to open Julia and then run a script.
To run on my machine, I enter this in terminal:
$ julia
julia> include("test.jl"); func("in.csv", "out.csv")
How do I replicate this process and chain both of these commands so that I can run from subprocess in a single call?
I've tried julia; include("test.jl"); func("in.csv", "out.csv") and julia && include("test.jl") && func("in.csv", "out.csv")
but both result in
-bash: syntax error near unexpected token `"test.jl"`
The key here is that you're not really chaining two commands from the standpoint of Python's subprocess. There's just one command: julia. You want to pass a somewhat complicated argument to Julia that will execute multiple Julia expressions.
In short, you just want to do:
subprocess.run(['julia','-e','include("test.jl"); func("in.csv", "out.csv")'])
What's happening here is that you're executing a single subprocess, julia, started with the -e command-line flag, which just runs whatever comes next as Julia code. You can optionally use the capitalized -E flag instead, which prints out whatever func (your last expression there) returns.
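If you also want the result back in Python, a minimal sketch (assuming func returns something printable) is to use -E and capture the subprocess output:

import subprocess

# -E prints the value of the last expression; capture it as text in Python.
result = subprocess.run(
    ["julia", "-E", 'include("test.jl"); func("in.csv", "out.csv")'],
    capture_output=True, text=True, check=True,
)
print(result.stdout)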
It's worth pointing out, though, that there are better ways of getting Julia and Python interoperating — especially if you need to transfer data back and forth.
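For example, with the third-party juliacall package (an assumption here, not part of the original question), you can load the file and call the function directly, getting the result back as a Python object. An untested sketch:

from juliacall import Main as jl    # assumes `pip install juliacall`

jl.seval('include("test.jl")')      # load the Julia file into Main
out = jl.func("in.csv", "out.csv")  # call the Julia function directly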
I would like to execute Common Lisp (SBCL) code from Python, e.g. via the shell. I also need to load a Lisp library called SHOP3 to run my Lisp code. I tried:
os.system('sbcl && (asdf:load-system "shop3") && (in-package:SHOP-USER) && (load "/Users/kiliankramer/Desktop/Shop-Planer/planner-new")')
But it's not working: it only starts sbcl and then stops before loading the ASDF system "shop3".
Can you tell me how to execute my Lisp code, or what alternatives I have to run an external Lisp program (including the Lisp library)?
Thanks in advance. :)
&& chains shell commands, i.e. it starts sbcl and waits for it to terminate, and if the termination was successful, it then tries to execute (asdf:load-system "shop3") as a shell command (not what you want!).
You need to use sbcl command line arguments:
os.system("sbcl --eval '(asdf:load-system \\"shop3\\")' --eval '(in-package :SHOP-USER)' --load /Users/kiliankramer/Desktop/Shop-Planer/planner-new")
However, you might want to use the more modern subprocess interface instead of os.system.
It will also avoid the need for escaping quotes &c:
subprocess.run(["sbcl","--eval",'(asdf:load-system "shop3")',
"--eval",'(in-package :SHOP-USER)',
"--load","/Users/kiliankramer/Desktop/Shop-Planer/planner-new")
I have a Python script which runs bash scripts via the subprocess library. I need to collect stdout and stderr into files, so I have a wrapper like:
def execute_shell_script(stage_name, script):
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name), shell=True)
And it works correctly when I launch my Python script on a Mac. But if I launch it in a docker container (FROM ubuntu:18.04), I can't see any log files. I can fix it by using bash -c 'command &>log_file' instead of just command &>log_file inside subprocess.check_output(...). But it looks like too much magic.
I thought about the default shell of the user which launches the Python script (it's root), but cat /etc/passwd shows root ... /bin/bash.
It would be nice if someone could explain to me what is happening. And maybe I can add some lines to the Dockerfile so I can use the same Python script inside and outside the docker container?
As the OP reported in a comment that this fixed their problem, I'm posting it as an answer so they can accept it.
The underlying problem is that shell=True always runs the command with /bin/sh; on Ubuntu, /bin/sh is dash, which doesn't understand bash's &> redirection (it parses command &>file as command run in the background followed by a truncation of file), whereas on macOS /bin/sh is bash, so the same line works there. Beyond that, using check_output when you don't expect any output is odd, and requiring shell=True here is misdirected anyway. You want:
with open(os.path.join('logs', stage_name), 'w') as output:
    subprocess.run([script], stdout=output, stderr=output)
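Alternatively, if you'd rather keep the one-line shell redirection, subprocess lets you pick the shell explicitly: with shell=True, the executable argument replaces the default /bin/sh. A minimal sketch of the wrapper with that change:

import subprocess

def execute_shell_script(stage_name, script):
    # Run under bash explicitly so dash never sees the &> redirection.
    subprocess.check_output('{} &>logs/{}'.format(script, stage_name),
                            shell=True, executable='/bin/bash')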
I've been looking for a while, but I haven't found anything in Ruby like Python's -i flag.
Common behaviour for me when I'm testing something is to run the unfinished Python script with the -i flag so that I can see and play around with the values in each variable.
If I try irb <file>, it still terminates at EOF, and, obviously, ruby <file> doesn't work either. Is there a command-line flag that I'm missing, or some other way this functionality can be achieved?
Edit: Added an explanation of what kind of functionality I'm talking about.
Current Behaviour in Python
file.py
a = 1
Command Prompt
$ python -i file.py
>>> a
1
As you can see, the value of the variable a is available in the console too.
You can use irb -r ./filename.rb (-r for "require"), which should basically do the same as python -i ./filename.py.
Edit to better answer the refined question:
Actually, irb -r ./filename.rb does the equivalent of running irb and subsequently running
irb(main):001:0> require './filename.rb'. Thus, local variables from filename.rb do not end up in scope for inspection.
python -i ./filename.py seems to do the equivalent of adding binding.irb to the last line of the file and then running it with ruby ./filename.rb. There seems to be no one-liner equivalent to achieve this exact behaviour for ruby.
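One rough approximation (an untested sketch, not an exact equivalent) is to eval the file's source and then open the console on that binding, so its local variables stay visible:

ruby -e 'eval(File.read("./filename.rb") + "\nbinding.irb")'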
Is there a command-line flag that I'm missing, or some other way this functionality can be achieved?
Yes, there are both. I'll cover an "other way".
Starting with Ruby 2.5, you can put binding.irb somewhere in your code, and the program will drop into an interactive console at that point.
% cat stop.rb
puts 'hello'
binding.irb
Then
% ruby stop.rb
hello
From: stop.rb # line 3 :
1: puts 'hello'
2:
=> 3: binding.irb
irb(main):001:0>
This was possible for a long time before with pry, but now it's in the standard distribution.
You can use the command irb. Once it has started, you can load and execute any Ruby file with load './filename.rb'. Note, though, that local variables from the loaded file still won't be visible, since load evaluates the file in its own scope; the methods and constants it defines will be.
I have this python script:
#!/usr/bin/python
print 'hi'
I'm trying to send this script as a job to be executed on a computing cluster. I'm sending it with qsub like this: qsub myscript.py
Before running it I executed the following:
chmod +x myscript.py
However when I open the output file I find this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
And when I open the error file I find this:
print: Command not found.
So what's wrong?!
Edit: I followed the instructions in this question
It looks like qsub isn't reading your shebang line, and so is simply executing your script with its default shell (the print: Command not found error is that shell, likely csh, trying to run print as a shell command).
This answer provides a few options on how to deal with this, depending on your system: How can I use qsub with Python from the command line?
An option is to set the interpreter to python like so:
qsub -S /usr/bin/python myscript.py
I am quite sure there is an alternate way to do this without the -S option and have SGE execute the code based on the interpreter in the shebang; however, this solution might be enough for your needs.
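For instance (assuming a Grid Engine setup; untested here), qsub options can usually also be embedded in the script itself as #$ directives, which keeps the interpreter choice with the script:

#!/usr/bin/python
#$ -S /usr/bin/python
print 'hi'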
Also, concerning this output:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
It seems safe to ignore this:
http://www.linuxquestions.org/questions/linux-software-2/warning-no-access-to-tty-bad-file-descriptor-702671/
EDIT:
Also works (here the command is passed to qsub on stdin as the job script):
qsub <<< "./myscript.py"
qsub <<< "python ./myscript.py"