The program I am writing should be able to run both directly via the Python interpreter and via Transcrypt. The problem is that I need to skip some lines when running under Transcrypt, and "try" does not work in Transcrypt. Is there any other way to skip lines when the program runs via Transcrypt? Is it possible to use something like:
if transcrypt is activated:
Thanks in advance!
If there's no built-in method, you could, for example, look for the existence of the document variable. There should be no such thing when not running in the browser. (I haven't tested this.)
try:
    assert document
    in_transcrypt = True
except Exception:
    in_transcrypt = False
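For example, a minimal sketch of how the flag above could then be used (the branch bodies are just placeholders):

if in_transcrypt:
    print('running as transpiled JavaScript in the browser')
else:
    print('running under the regular Python interpreter')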
Thanks to fzzylogic's comment, I solved the problem like this:
from org.transcrypt.stubs.browser import __pragma__
#__pragma__('skip')
import subprocess
import os
#__pragma__('noskip')
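The same #__pragma__('skip') / #__pragma__('noskip') pair can also wrap the code that uses those modules, so the browser build never sees it. For example (a sketch; the helper is made up):

#__pragma__('skip')
def clean_output_dir(path):
    # Desktop-only helper: relies on os, which is skipped above.
    if os.path.exists(path):
        os.rmdir(path)
#__pragma__('noskip')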
I'm trying to learn how to use variables from Jenkins in Python scripts. I've already learned that I need to call the variables, but I'm not sure how to implement them in the case of using os.path.join().
I'm not a developer; I'm a technical writer. This code was written by somebody else. I'm just trying to adapt the Jenkins scripts so they are parameterized so we don't have to modify the Python scripts for every release.
I'm using inline Jenkins Python scripts inside a Jenkins job. The Jenkins string parameters are "BranchID" and "BranchIDShort". I've looked through many questions that talk about how you have to establish the variables in the Python script, but in the case of os.path.join(), I'm not sure what to do.
Here is the original code. I added the part where we establish the variables from the Jenkins parameters, but I don't know how to use them in the os.path.join() function.
# Delete previous builds.
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc192CS", "Output")):
shutil.rmtree(os.path.join("C:\\Doc192CS", "Output"))
I expect output like: c:\Doc192CS\Output
I am afraid that if I use the following code:
if os.path.exists(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output")):
shutil.rmtree(os.path.join("C:\\Doc",BranchIDshort,"CS", "Output"))
I'll get: c:\Doc\192\CS\Output.
Is there a way to use the BranchIDshort variable in this context to get the output c:\Doc192CS\Output?
User @Adonis gave the correct solution as a comment. Here is what he said:
Indeed you're right. What you would want to do is rather:
os.path.exists(os.path.join("C:\\","Doc{}CS".format(BranchIDshort),"Output"))
(in short, use a format string for the second argument)
So the complete corrected code is:
import os
import shutil
BranchID = os.getenv("BranchID")
BranchIDshort = os.getenv("BranchIDshort")
print "Delete any output from a previous build."
if os.path.exists(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output")):
shutil.rmtree(os.path.join("C:\\Doc{}CS".format(BranchIDshort), "Output"))
Thank you, @Adonis!
This seems like it should be so simple, but I am having some serious issues. All I want to do is see if the user input matches a two-letter expression. I guess my biggest problem is that I am not very familiar with the re library, and the documentation does not really help me much.
This is what I have tried so far:
Try 1:
if re.match(sys.argv[3], "GL", re.I):
    input_file_path = "V:\\test"
Try 2:
if re.ignorecase(sys.argv[3], "GL"):
    input_file_path = "V:\\test"
Try 3:
if sys.argv[3] == "GL":
    input_file_path = "V:\\test"
The way I call the program: filename.py tester test GL
"tester" and "test" are not really used yet.
EDIT: I found my main problem. I was using a chain of separate if statements rather than elif, so the last one, which said else: exit(), always got hit (because I was testing the first if). Rookie mistake.
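For reference, the structure that edit describes looks roughly like this (a sketch; the second branch is just a placeholder):

if sys.argv[3].lower() == "gl":
    input_file_path = "V:\\test"
elif sys.argv[3].lower() == "xy":
    input_file_path = "V:\\other"
else:
    exit()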
Just convert the string to test to lowercase before comparing and you should be fine:
if sys.argv[3].lower() == "gl":
    input_file_path = "V:\\test"
More importantly, regular expressions are not the right tool for this job.
Your re.match arguments are backward: the pattern comes first. Try:
if re.match('GL', sys.argv[3], re.I):
    input_file_path = "V:\\test"
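Note (my addition) that re.match only anchors at the start of the string, so a longer argument beginning with "GL" would also match; if that matters, anchor the end too:

if re.match(r'GL\Z', sys.argv[3], re.I):
    input_file_path = "V:\\test"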
Obviously the third argument is not 'GL'. print sys.argv and you will see that. My guess is that you are off by one in your index.
Show us the commandline you use to run your script.
Printing sys.argv[3] prints exactly GL. – LiverpoolFTW
Then the bug is elsewhere. If you print sys.argv[3].lower() == "gl" just before, and input_file_path just after, you will see the expected values. What you really need here is a debugger. pdb is the built-in standard, but I highly recommend pudb.
For quick setup, paste these into a terminal. virtualenv is an industry standard for keeping project dependencies separate.
cd ~
wget https://raw.github.com/pypa/virtualenv/1.6.3/virtualenv.py
python virtualenv.py mypy
source mypy/bin/activate
pip install pudb
Source that activate file whenever you want to get into the environment. Run deactivate (an alias defined by activate) to get out. Make sure to use the python in the environment (i.e. #!/usr/bin/env python) rather than hard-coding a particular python instance.
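Once pudb is installed, a minimal way to drop into it at the suspect line (standard usage of both debuggers):

import pudb; pudb.set_trace()  # execution pauses here in the pudb UI

or, with only the standard library:

import pdb; pdb.set_trace()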
I'm trying to save myself just a few keystrokes for a command I type fairly regularly in Python.
In my python startup script, I define a function called load which is similar to import, but adds some functionality. It takes a single string:
def load(s):
    # Do some stuff
    return something
In order to call this function I have to type
>>> load('something')
I would rather be able to simply type:
>>> load something
I am running Python with readline support, so I know there exists some programmability there, but I don't know if this sort of thing is possible using it.
I attempted to get around this by using InteractiveConsole and creating an instance of it in my startup file, like so:
import code, re, traceback

class LoadingInteractiveConsole(code.InteractiveConsole):
    def raw_input(self, prompt=""):
        s = raw_input(prompt)
        match = re.match(r'^load\s+(.+)', s)
        if match:
            module = match.group(1)
            try:
                load(module)
                print "Loaded " + module
            except ImportError:
                traceback.print_exc()
            return ''
        else:
            return s

console = LoadingInteractiveConsole()
console.interact("")
This works with the caveat that I have to hit Ctrl-D twice to exit the python interpreter: once to get out of my custom console, once to get out of the real one.
Is there a way to do this without writing a custom C program and embedding the interpreter into it?
Edit
Out of channel, I had the suggestion of appending this to the end of my startup file:
import sys
sys.exit()
It works well enough, but I'm still interested in alternative solutions.
You could try IPython, which gives a Python shell that allows many things, including automatic parentheses, giving you the function call you requested.
I think you want the cmd module.
See a tutorial here:
http://wiki.python.org/moin/CmdModule
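For example, a minimal sketch of what that could look like (it assumes the load() function from the question is defined in the same file):

import cmd

class LoadShell(cmd.Cmd):
    prompt = '>>> '

    def do_load(self, arg):
        # Typing "load something" calls load('something').
        load(arg)

    def do_EOF(self, line):
        return True  # exit on Ctrl-D

LoadShell().cmdloop()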
Hate to answer my own question, but there hasn't been an answer that works for all the versions of Python I use. Aside from the solution I posted in my question edit (which is what I'm now using), here's another:
Edit .bashrc to contain the following lines:
alias python3='python3 ~/py/shellreplace.py'
alias python='python ~/py/shellreplace.py'
alias python27='python27 ~/py/shellreplace.py'
Then simply move all of the LoadingInteractiveConsole code into the file ~/py/shellreplace.py. Once the script finishes executing, Python will cease executing, and the improved interactive session will be seamless.
In the Python console the following statements work perfectly fine (I guess using eval that way is not really good, but it's just for testing purposes in this case and will be replaced with proper parsing):
$ python
>>> import subprocess
>>> r = subprocess.Popen(['/pathto/plugin1.rb'], stdout=subprocess.PIPE, close_fds=True).communicate()[0]
>>> data = eval(r)
>>> data
{'test': 1}
When I convert this into a Serverdensity plugin, however, it keeps crashing the agent.py daemon every time it executes the plugin. I was able to narrow it down to the subprocess line but could not find out why. Exception catching did not seem to work either.
Here is what the plugin looks like:
class plugin1:
    def run(self):
        r = subprocess.Popen(['/pathto/plugin1.rb'], stdout=subprocess.PIPE, close_fds=True).communicate()[0]
        data = eval(r)
        return data
I'm quite new to working with Python and can't really figure out why this won't work. Thanks a lot for ideas :)
Do you have subprocess imported in the module? Also, what error are you getting? Could you post the error message?
After switching my dev box (maybe because of the different Python version?) I finally was able to get some proper error output.
Then it was rather simple: I really just needed to import the missing subprocess module.
For who is interested in the solution:
http://github.com/maxigs/Serverdensity-Wrapper-Plugin/blob/master/ruby_plugin.py
Not quite production-ready yet, but it already works for safe input.
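For illustration, a minimal sketch of the fix (the switch from eval to ast.literal_eval is my own suggestion for parsing the printed dict safely, not necessarily what the linked wrapper does):

import subprocess
import ast

class plugin1:
    def run(self):
        # Run the Ruby plugin and capture its stdout.
        r = subprocess.Popen(['/pathto/plugin1.rb'], stdout=subprocess.PIPE,
                             close_fds=True).communicate()[0]
        # Parse the printed dict literal without executing arbitrary code.
        return ast.literal_eval(r)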
Is there a simple way to detect, within Python code, that this code is being executed through the Python debugger?
I have a small Python application that uses Java code (thanks to JPype). When I'm debugging the Python part, I'd like the embedded JVM to be passed debug options too.
Python debuggers (as well as profilers and coverage tools) use the sys.settrace function (in the sys module) to register a callback that gets called when interesting events happen.
If you're using Python 2.6, you can call sys.gettrace() to get the current trace callback function. If it's not None then you can assume you should be passing debug parameters to the JVM.
It's not clear how you could do this pre 2.6.
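For example, a minimal sketch of that check:

import sys

def debugger_is_active():
    # sys.gettrace() returns the active trace callback, or None (Python 2.6+).
    return sys.gettrace() is not None

The JVM debug options could then be added only when debugger_is_active() returns True.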
Another alternative, if you're using Pydev, that also works in a multithreaded setting:
try:
    import pydevd
    DEBUGGING = True
except ImportError:
    DEBUGGING = False
A solution working with Python 2.4 (it should work with any version above 2.1) and Pydev:
import inspect

def isdebugging():
    for frame in inspect.stack():
        if frame[1].endswith("pydevd.py"):
            return True
    return False
The same should work with pdb by simply replacing pydevd.py with pdb.py. As do3cc suggested, it tries to find the debugger within the stack of the caller.
Useful links:
The Python Debugger
The interpreter stack
Another way to do it hinges on how your Python interpreter is started. It requires that you start Python using -O for production and with no -O for debugging. So it does require an external discipline that might be hard to maintain... but then again it might fit your processes perfectly.
From the Python docs (see "Built-in Constants"):
__debug__
This constant is true if Python was not started with an -O option.
Usage would be something like:
if __debug__:
    print 'Python started without optimization'
If you're using Pydev, you can detect it this way:
import sys
if 'pydevd' in sys.modules:
    print "Debugger"
else:
    print "commandline"
From taking a quick look at the pdb docs and source code, it doesn't look like there is a built in way to do this. I suggest that you set an environment variable that indicates debugging is in progress and have your application respond to that.
$ USING_PDB=1 pdb yourprog.py
Then in yourprog.py:
import os
if os.environ.get('USING_PDB'):
    # debugging actions
    pass
You can try to peek into your stacktrace.
https://docs.python.org/library/inspect.html#the-interpreter-stack
When you try this in a debugger session:
import inspect
inspect.getouterframes(inspect.currentframe())
you will get a list of frame records and can peek for any frames that refer to the pdb file.
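A sketch of that check (my addition; it simply looks for pdb's file name among the outer frames):

import inspect

def running_under_pdb():
    # True if any calling frame comes from pdb's module file.
    return any(frame[1].endswith('pdb.py')
               for frame in inspect.getouterframes(inspect.currentframe()))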
I found a cleaner way to do it. Just add the following lines to your manage.py:
#!/usr/bin/env python
import os
import sys

if __debug__:
    sys.path.append('/path/to/views.py')

if __name__ == "__main__":
    ....
Then it would automatically add it when you are debugging.
Since the original question doesn't specifically call out Python 2, this is to confirm that @babbageclunk's suggested usage of sys also works in Python 3:
from sys import gettrace as sys_gettrace
DEBUG = sys_gettrace() is not None
print("debugger? %s" % DEBUG)
In my perllib, I use this check:
import sys

if 'pdb' in sys.modules:
    # We are being debugged
    pass

It assumes the user doesn't otherwise import pdb.