Odoo - how to use it interactively in a Python interpreter?

I read here (https://www.odoo.com/forum/help-1/question/how-to-get-a-python-shell-with-the-odoo-environment-54096) that it might be possible to use the Python interpreter to access Odoo and test things interactively, but doing this in a terminal:
ipython
import sys
import openerp
sys.argv = ['', '--addons-path=~/my-path/addons', '--xmlrpc-port=8067', '--log-level=debug', '-d test',]
openerp.cli.main()
starts the Odoo server, but then I can't type anything in that terminal tab to use it interactively. If, for example, I type something like print 'abc', I get no output. Am I missing something here?

Sometimes I use the logging library to print output to the console/terminal.
For example:
import logging
logging.info('Here is your message')
logging.warning('Here is your message')
For more details, you can check out the logging module's documentation.
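One caveat: with the default configuration the root logger only shows WARNING and above, so plain logging.info calls may print nothing. A minimal sketch (inside Odoo modules the usual convention is a module-level logging.getLogger(__name__), since the server configures the handlers itself):
import logging

# raise the threshold so INFO messages actually appear;
# under Odoo the server's own logging config takes care of this
logging.basicConfig(level=logging.INFO)

_logger = logging.getLogger(__name__)
_logger.info('Here is your message')
_logger.warning('Here is your message')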

The closest thing I have found to interactive use is to put the line
import pdb; pdb.set_trace()
in the method I want to inspect, and then trigger that method.
It's clunky, but it works.
As an example, I was just enhancing the OpenChatter implementation for our copy of OpenERP, and during the "figure things out" stage I had that line in .../addons/mail/mail_thread.py::mail_thread.post_message so I could get a better idea of what was happening in that method.
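As a standalone illustration of the technique (the function here is made up, not taken from the mail module):
def compute_total(values):
    import pdb; pdb.set_trace()  # execution pauses here; inspect 'values', step with 'n', continue with 'c'
    return sum(values)

compute_total([1, 2, 3])  # triggering the function drops you into the debugger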

The correct way to do this is with shell:
./odoo-bin shell -d <yourdatabase>
Be aware that if you already have a running Odoo instance, the port will be busy, so the shell instance you are opening must use a different port. The command would then be something like this:
./odoo-bin shell --xmlrpc-port=8888 -d <yourdatabase>
But if you want your addons available in the new instance, you can do something like the following:
./odoo-bin shell -c ~/odooshell.conf -d <yourdatabase>
That way your odooshell.conf can hold whatever you need configured (port, addons_path, etc.), and you can work smoothly in your shell.
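For reference, a minimal odooshell.conf might look something like this (the paths and port are placeholders for your own setup):
[options]
addons_path = /home/user/my-path/addons
xmlrpc_port = 8888
db_host = localhost
log_level = debug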
As I always use Docker, this is what I do to get my shell configured there:
docker exec -ti <mycontainer> odoo shell -c /etc/odoo/odooshell.conf -d <mydatabase>
You will have env available to do anything you need. You can write quick Python code, with syntax very similar to server actions. For example:
partner_ids = env['res.partner'].search([])
for partner in partner_ids:
    partner['name'] = partner.name + '.'
env.cr.commit()
Remember to call env.cr.commit() if you make any data changes.

Related

How to pass a variable from a shell script to a python script and save it then

I want to ask how to pass a variable from a shell script to a Python script and then save it there as a local variable. I have a variable in the shell script:
#!/bin/bash
userpem=$(egrep "CN=$1/" index.txt|awk '{print $3}').pem
output='openssl x509 -in $userpem -noout -text'
export output
I read in some posts that I can do that with os.environ(foo), but I have only seen examples like this:
from django.shortcuts import render
import os
import subprocess
from django.http import HttpResponse

def info(request, benutzername):
    os.chdir("/var/www/openvpn/examples/easy-rsa/2.0/keys")
    subprocess.Popen(["/var/www/openvpn/examples/easy-rsa/2.0/keys/getinfo.sh", benutzername])
    output = os.environ['output']
    return HttpResponse(output)
You can't do what you're trying to do.
fedorqui's answer shows how to read environment variables from Python, but that won't help you.
First, you're just starting the shell script and immediately assuming it's already done its work, when it may not even have finished launching yet. You might get lucky and have it work sometimes, but not reliably. You need to wait for the Popen object (which also means you need to store it in a variable)—or, even more simply, call it (which waits until it finishes) instead of just kicking it off.
And you probably want to check the return value, or just use check_call, so you'll know if it fails.
Meanwhile, if you fix that, it still won't do you any good. export doesn't export variables to your parent process, it exports them to your children.
If you want to pass a value back to your parent, the simplest way to do that is by writing it to stdout. Then, of course, your Python code will have to read your output. The easiest way to do that is to use check_output instead of check_call.
Finally, I'm pretty sure you wanted to actually run openssl and capture its output, not just set output to the literal string openssl x509 -in $userpem -noout -text. (Especially not with single quotes, which will prevent $userpem from being substituted, meaning you looked it up for nothing.) To do that, you need to use backticks or $(), as you did in the previous line, not quotes.
So:
#!/bin/bash
userpem=$(egrep "CN=$1/" index.txt|awk '{print $3}').pem
output=$(openssl x509 -in "$userpem" -noout -text)
echo "$output"
And:
def info(request, benutzername):
    os.chdir("/var/www/openvpn/examples/easy-rsa/2.0/keys")
    output = subprocess.check_output(["/var/www/openvpn/examples/easy-rsa/2.0/keys/getinfo.sh", benutzername])
    return HttpResponse(output)
As a side note, os.chdir is usually a bad idea in web servers. That will set the current directory for all requests, not just this one. (It's especially bad if you're using a semi-preemptive greenlet server framework, like something based on gevent, because a different request could chdir you somewhere else between your chdir call and your subprocess call…)
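One way to avoid the global os.chdir is subprocess's cwd argument, which applies only to the child process; a sketch of the same view using it (same paths as in the question):
import os
import subprocess
from django.http import HttpResponse

KEYS_DIR = "/var/www/openvpn/examples/easy-rsa/2.0/keys"

def info(request, benutzername):
    # cwd= changes directory for the child process only, not the whole server
    output = subprocess.check_output(
        [os.path.join(KEYS_DIR, "getinfo.sh"), benutzername],
        cwd=KEYS_DIR)
    return HttpResponse(output)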
You need to use os.environ['variablename'] to work with an environment variable.
For example, here the variable v is created and exported:
$ export v="hello"
Let's create a script a.py:
import os
d = os.environ['v']
print "my environment var v is --> " + d
And calling it:
$ python a.py
my environment var v is --> hello
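One detail worth knowing: os.environ['v'] raises a KeyError if the variable is unset; os.environ.get lets you supply a fallback:
import os

# .get returns the default instead of raising KeyError when v is unset
d = os.environ.get('v', '(not set)')
print "my environment var v is --> " + d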

Using cdrecord through popen won't eject

So, I'm making a CD burning app and I need to eject the drive to let the user put the disk in. It's a little more complicated than that, but the simplest case I run into is this: I can use cdrecord from the command line to eject the CD tray with this command:
cdrecord --eject dev='/dev/sg1'
which should mean that I can do the same thing with subprocess.call, like this:
subprocess.call(["cdrecord", "--eject", "dev='/dev/sg1'"])
however, when I do that, I get this error:
wodim: No such file or directory.
Cannot open SCSI driver!
For possible targets try 'wodim --devices' or 'wodim -scanbus'.
For possible transport specifiers try 'wodim dev=help'.
For IDE/ATAPI devices configuration, see the file README.ATAPI.setup from
the wodim documentation.
and the tray doesn't open.
This is very similar to an error I got before when trying to run it from the command line, but I fixed that error by loading the sg kernel module.
If I just run:
subprocess.call(["cdrecord", "--eject"])
it opens the tray just fine. However, this needs to work with possibly multiple cd trays, so that won't work.
How can I get this to eject the cd correctly?
Try this:
subprocess.call(["cdrecord", "--eject", "dev=/dev/sg1"])
The shell will take care of interpreting the quotes, but cdrecord will not.
The only reason you need the quotes in the first place is that the dev path might have spaces in it, causing the shell to split things into separate arguments. For example, if you type this:
cdrecord --eject dev=/dev/my silly cd name
The arguments to cdrecord will be --eject, dev=/dev/my, silly, cd, name. But if you do this:
cdrecord --eject dev='/dev/my silly cd name'
The arguments to cdrecord will be --eject, dev=/dev/my silly cd name.
When you're using subprocess.call, there's no shell to pull the arguments apart; you're passing them explicitly. So, if you do this:
subprocess.call(["cdrecord", "--eject", "dev=/dev/my silly cd name"])
The arguments to cdrecord will be --eject, dev=/dev/my silly cd name.
In some cases, e.g. when things are in a hopelessly confused state to begin with (say, you're reading a config file that's meant to be either used by your program or executed by the shell), you really have no recourse but to run through the shell. If that happens, do this:
subprocess.call("cdrecord --eject dev='/dev/sg1'", shell=True)
But this generally isn't what you want, and it isn't what you want in this case.
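If the string you're handed only uses shell-style quoting (no variable expansion, globbing, or pipes), shlex.split can usually turn it into an argument list so you can skip shell=True; a sketch:
import shlex
import subprocess

cmd = "cdrecord --eject dev='/dev/sg1'"
args = shlex.split(cmd)  # ['cdrecord', '--eject', 'dev=/dev/sg1']
subprocess.call(args)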
You are not using cdrecord but a buggy fork called "wodim", which might be the reason for your problems.
I recommend using the recent original software from:
ftp://ftp.berlios.de/pub/cdrecord/alpha/

Invoking shell-command from function in interactive IPython shell

I have just been playing around with IPython. Currently I am wondering how it would be possible to run a shell-command with a python variable within a function. For example:
def x(go):
    return !ls -la {go}
x("*.rar")
This gives me "sh: 1: Syntax error: end of file unexpected". Could anybody please give me a clue on how to let my "x"-function invoke ls like "ls -la *.rar"? There are *.rar files in my working directory.
Thank you in advance,
Rainer
If you look at the history command output, you'll see that IPython uses the _ip.system method to call external programs.
Hence, this should work for you:
def x(go):
    return _ip.system("ls -la {0}".format(go))
However, please note that outside IPython you should probably use subprocess.Popen.
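For completeness, a rough equivalent outside IPython (a sketch; since no shell is involved, glob has to expand the wildcard):
import glob
import subprocess

def x(go):
    files = glob.glob(go)  # expand the wildcard ourselves; there is no shell to do it
    if not files:
        return ''  # avoid listing the whole directory when nothing matches
    return subprocess.check_output(["ls", "-la"] + files)

print x("*.rar")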
There was a bug in the "!" shell access that made the expansion of function-scoped variables fail. Your IPython version might be affected.
You can avoid it by doing yourself the variable expansion:
def x(go):
    return get_ipython().getoutput("ls -la {0}".format(go))
While subprocess.Popen is probably the way to go, as @jcollado said, just for completeness there is the os.system command to immediately send a command to the shell. However, the subprocess module is almost always a better choice than os.system or os.spawn.
Also, depending on what you are trying to do you may want to use python commands to interact with the operating system rather than passing commands out to a shell. If you want to deal with lists of files for instance, os.walk would likely result in cleaner and more portable code than grabbing the directory list through shell commands. You can look at the documentation for Python's OS module here.
Depending on what you wanted to accomplish, this may be the better way:
In [50]: %alias x ls -la %l
In [51]: x *.rar
-rw-r--r-- 1 dubbaluga users 45254 Apr 4 15:12 schoolbus.rar
Maybe it's easier to use Python for this case:
import glob
files = glob.glob('*.rar')

How can I get a file to autorun before I run any command in ipython?

I have a python file that holds a bunch of functions that I'm continually modifying and then testing in ipython. My current workflow is to run "%run myfile.py" before each command. However, ideally, I'd like that just to happen automatically. Is that possible?
If you really want to use rlwrap for this, write a filter! Just define an input_handler that adds %run myfile.py to the input, and an echo_handler to echo your original input so that you won't see this happening (man RlwrapFilter tells you all you ever wanted to know about filter writing, and then some).
But isn't it more elegant to solve this within ipython itself, using IPython.hooks.pre_runcode_hook?
import os
import IPython

ip = IPython.ipapi.get()

def runMyFile(self):
    ip.magic('%run myFile.py')
    raise IPython.ipapi.TryNext()

ip.set_hook('pre_runcode_hook', runMyFile)
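Note that IPython.ipapi is the old pre-0.11 API; on modern IPython a rough equivalent uses the events API instead (a sketch, assuming IPython 2.0 or later; run it inside the session or from a profile startup file):
ip = get_ipython()

def run_my_file(*args):
    # re-run the file before every command is executed
    ip.run_line_magic('run', 'myFile.py')

ip.events.register('pre_run_cell', run_my_file)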
I can't find any elegant way. This is the ugly way. Run:
rlwrap awk '{print "%run myfile.py"} {print} {fflush()}' |ipython
This reads from STDIN, but prints the command you wanted before each command. fflush is there to disable buffering and pass things to ipython immediately. rlwrap is there to keep the readline bindings; you can remove it if you don't have it, but this will be less convenient (no arrow keys, etc.).
Mind that you will have to type your commands before the ipython prompt appears. There might be other, more annoying things that break; I haven't tested thoroughly.

pythonrc in interactive code

I have a .pythonrc in my path, which gets loaded when I run python:
python
Loading pythonrc
>>>
The problem is that my .pythonrc is not loaded when I execute files:
python -i script.py
>>>
It would be very handy to have tab completion (and a few other things) when I load things interactively.
From the Python documentation for -i:
When a script is passed as first argument or the -c option is used, enter interactive mode after executing the script or the command, even when sys.stdin does not appear to be a terminal. The PYTHONSTARTUP file is not read.
I believe this is done so that scripts run predictably for all users, and do not depend on anything in a user's particular PYTHONSTARTUP file.
As Greg has noted, there is a very good reason why -i behaves the way it does. However, I do find it pretty useful to be able to have my PYTHONSTARTUP loaded when I want an interactive session. So, here's the code I use when I want to be able to have PYTHONSTARTUP active in a script run with -i.
if __name__ == '__main__':
    # do normal stuff
    # and at the end of the file:
    import sys
    if sys.flags.interactive == 1:
        import os
        myPythonPath = os.environ['PYTHONSTARTUP'].split(os.sep)
        sys.path.append(os.sep.join(myPythonPath[:-1]))
        pythonrcName = ''.join(myPythonPath[-1].split('.')[:-1])  # the filename minus the trailing extension, if the extension exists
        pythonrc = __import__(pythonrcName)
        for attr in dir(pythonrc):
            __builtins__.__dict__[attr] = getattr(pythonrc, attr)
        sys.path.remove(os.sep.join(myPythonPath[:-1]))
        del sys, os, pythonrc
Note that this is fairly hacky and I never do this without ensuring that my pythonrc isn't accidentally clobbering variables and builtins.
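A shorter, blunter variant is to exec the startup file directly instead of importing it (a sketch; assumes PYTHONSTARTUP points at a readable file, and accidental clobbering is even easier here):
import os
import sys

# at the end of the script, mirror what plain interactive python does
if sys.flags.interactive:
    startup = os.environ.get('PYTHONSTARTUP')
    if startup and os.path.isfile(startup):
        exec(open(startup).read())  # runs in this module's namespace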
Apparently the user module provided this, but it was removed in Python 3.0. It was a bit of a security hole, depending on what's in your pythonrc...
In addition to Chinmay Kanchi and Greg Hewgill's answers, I'd like to add that IPython and BPython work fine in this case. Perhaps it's time for you to switch? :)
