I have a simple piece of code which runs a specified launchfile:
import roslaunch

# cli: the launch-file command-line arguments (defined elsewhere in the script)
uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)
file = [(roslaunch.rlutil.resolve_launch_arguments(cli)[0], cli[2:])]
launch = roslaunch.parent.ROSLaunchParent(uuid, file)
launch.start()
Executing the launch file generates a lot of logging output on stdout/stderr, so my script's own output gets lost.
Is it possible to somehow redirect or disable printing it on the screen?
There are two options:
via environment variables (export), or
via the Python logging module.
Using env vars
You can point the ROS log directory at a /dev/null-like file system.
See ROS env vars: https://wiki.ros.org/ROS/EnvironmentVariables
export ROS_LOG_DIR=/black/hole
Use https://github.com/abbbi/nullfsvfs to create the dir.
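If you want to do this from inside the script rather than from the shell, a minimal sketch (assumption: the variable has to be in the environment before roslaunch configures its logging):
import os

# Assumption: /black/hole is backed by a null file system (e.g. nullfsvfs),
# so everything ROS writes there is discarded.
os.environ["ROS_LOG_DIR"] = "/black/hole"

# ... then import roslaunch and call roslaunch.configure_logging(uuid) as usual.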
Using python logging
import logging
logging.getLogger("rospy").setLevel(logging.CRITICAL)
https://docs.python.org/3/library/logging.html#logging-levels
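If silencing "rospy" alone is not enough, a minimal sketch of a broader version (the "ros" name prefix is an assumption, not something the API guarantees):
import logging

# Raise the level on every logger that the ROS libraries have registered so far.
# Assumption: the relevant loggers all have names starting with "ros",
# e.g. "rospy" or "roslaunch".
for name in list(logging.Logger.manager.loggerDict):
    if name.startswith("ros"):
        logging.getLogger(name).setLevel(logging.CRITICAL)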
How about redirecting stdout and stderr to files? As long as you redirect them before your calls to roslaunch, it should put all the output into the files you point to (or /dev/null to ignore it).
import sys
sys.stdout = open('redirect.out','w')
sys.stderr = open('redirect.err','w')
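Note that reassigning sys.stdout/sys.stderr only redirects output written by Python code in the same process; child processes launched by roslaunch inherit the original file descriptors. A minimal sketch (my addition, not part of the answer above) that redirects at the file-descriptor level as well:
import os

# Duplicate the log files onto fd 1 and fd 2, so output written directly to the
# original descriptors (e.g. by child processes) is captured too.
out = open('redirect.out', 'w')
err = open('redirect.err', 'w')
os.dup2(out.fileno(), 1)  # stdout
os.dup2(err.fileno(), 2)  # stderr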
Related
I want to save my console output to a text file, but I want it written as it happens, so that if the program crashes the logs up to that point are still saved.
Do you have some ideas?
I can't just specify a file in the logger, because I have a lot of different loggers printing to the console.
I think you can indeed use a logger, just add a file handler; see the logging module documentation for details.
As an example you can use something like this, which logs both to the terminal and to a file:
import logging
from pathlib import Path

root_path = <YOUR PATH>  # a pathlib.Path pointing at the directory for the log file
log_level = logging.DEBUG

# Print to the terminal
logging.root.setLevel(log_level)
formatter = logging.Formatter("%(asctime)s | %(levelname)s | %(message)s", "%Y-%m-%d %H:%M:%S")
stream = logging.StreamHandler()
stream.setLevel(log_level)
stream.setFormatter(formatter)

log = logging.getLogger("pythonConfig")
if not log.hasHandlers():
    log.setLevel(log_level)
    log.addHandler(stream)

    # File handler: write the same records to a file as well
    file_handler = logging.FileHandler(Path(root_path / "process.log"), mode="w")
    file_handler.setLevel(log_level)
    file_handler.setFormatter(formatter)
    log.addHandler(file_handler)

log.info("test")
If you have multiple loggers, you can still use this solution: loggers inherit from their parents, so put the handler on the root logger and make sure the other loggers propagate their records to it.
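For example, a minimal sketch of that idea (the logger names are made up for illustration): put one file handler on the root logger, and every child logger that keeps the default propagate=True ends up in the same file:
import logging

# One file handler on the root logger receives records from all child loggers.
file_handler = logging.FileHandler("process.log", mode="a")
file_handler.setFormatter(logging.Formatter("%(asctime)s | %(name)s | %(levelname)s | %(message)s"))

root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.addHandler(file_handler)

# These could live in different modules; both propagate to the root logger.
logging.getLogger("moduleA").info("from module A")
logging.getLogger("moduleB").warning("from module B")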
As an alternative, you can use the nohup command, which will keep the process running even if the terminal closes and will redirect the output to the desired file:
nohup python main.py > log_file.out &
There are many ways to do this. However, not all of them are suitable, for different reasons (maintainability, ease of use, reinventing the wheel, etc.).
If you don't mind using your operating system's built-ins, you can:
forward the standard output and error streams to a file of your choice with python3 -u ./myscript.py > outputfile.txt 2>&1.
forward the standard output and error streams to a file of your choice AND display them on the console too with python3 -u ./myscript.py 2>&1 | tee outputfile.txt. The -u option makes the output unbuffered (i.e. what goes into the pipe comes out immediately).
If you want to do it from the Python side you can:
use the logging module to output the generated logs to a file handle instead of the standard output.
override the stdout and stderr streams defined in sys (sys.stdout and sys.stderr) so that they point to an opened file handle of your choice. For instance sys.stdout = open("log-stdout.txt", "w").
As a personal preference: the simpler, the better. The logging module is made for the purpose of logging and provides all the mechanisms needed to achieve what you want, so I would suggest sticking with it. The logging module documentation (https://docs.python.org/3/library/logging.html) also provides many examples, from simple use to more complex and advanced use.
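For completeness, a minimal sketch of that route (the file name is just an example):
import logging

# Send every record of level INFO and above to a file instead of the console.
logging.basicConfig(filename="output.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

logging.info("this goes to output.log rather than the terminal")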
I'm trying to make a script that's going to do some stuff via command line. Its usage should look somewhat like this:
C:\users\me> appname startprocess
but if I want to release this, it needs to be really easy for everyone to use, and that requires the PATH variable to include my script's location.
I want my script to handle this task by itself. Can my script edit my PATH variable permanently? If so, how?
Most questions on this topic are about how to set the PATH variable for Python scripts, not about writing a script that does this by itself.
For adding the current working directory to Python's module search path (sys.path, which is not the same as the operating system's PATH), you can use this piece of code:
import os
import sys
pwd = os.getcwd()
sys.path.append(pwd)
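If what you actually need is the operating system's PATH rather than Python's import path, a minimal sketch for the current process (my addition; making the change permanent is OS-specific, e.g. the setx command or the registry on Windows, or a shell startup file on Unix-like systems):
import os

# Directory containing this script.
script_dir = os.path.dirname(os.path.abspath(__file__))

# Prepend it to PATH; this affects only the current process and its children,
# it does not persist after the script exits.
os.environ["PATH"] = script_dir + os.pathsep + os.environ.get("PATH", "")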
And for running a shell script, use this:
import subprocess
subprocess.run('Your Command', shell = True)
If you want to check the output of the command:
stdout = subprocess.check_output('Your Command', shell = True)
print(stdout.decode())
I am trying to create new nodes using the CloudClient in the SaltStack Python API. The nodes are created successfully, but I don't see any logging happening. Below is the code I am using.
from salt.cloud import CloudClient
cloud_client = CloudClient()
kwargs = {'parallel': True}
cloud_client.map_run(path="mymap.map",**kwargs)
Is there a way to run the same code in debug mode so that I can see the output on the console from this Python script, if logging to a file cannot be done?
These are the logging parameters in my cloud configuration:
log_level: all
log_level_logfile: all
log_file: /var/logs/salt.log
When I run it with salt-cloud on the CLI, it works with the command below:
salt-cloud -m mymap.map -P
I was able to make it work by adding the code below:
from salt.log.setup import setup_console_logger

# Enable Salt's console logging at debug level before running the map.
setup_console_logger(log_level='debug')
I am new to Python. I'm using Vim with Python-mode to edit and test my code, and I noticed that if I run the same code more than once, the log file is only updated the first time the code is run. For example, below is a piece of code called "testlogging.py":
#!/usr/bin/python2.7
import os.path
import logging
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='w',
                    level=logging.DEBUG)
logging.info('Aye')
If I open a new gvim session and run this code with python-mode, then I would get a file called "testlogging.log" with the content
INFO:root:Aye
Looks promising! But if I delete the log file and run the code in python-mode again, the log file won't be re-created. If at this point I run the code in a terminal like this:
./testlogging.py
Then the log file would be generated again!
I checked with Python documentation and noticed this line in the logging tutorial (https://docs.python.org/2.7/howto/logging.html#logging-to-a-file):
A very common situation is that of recording logging events in a file, so let’s look at that next. Be sure to try the following in a newly-started Python interpreter, and don’t just continue from the session described above:...
So I guess this problem of the log file only being updated once has something to do with python-mode staying in the same interpreter when I run the code a second time. So my question is: is there any way to solve this problem, either by fiddling with the logging module, by putting something in the code to reset the interpreter, or by telling Python-mode to reset it?
I am also curious why the logging module requires a newly-started interpreter to work...
Thanks for your help in advance.
The log file is not recreated because the logging module still has the old log file open and will continue to write to it (even though you've deleted it). The solution is to force the logging module to release all acquired resources before running your code again in the same interpreter:
# configure logging
# log a message
# delete log file
logging.shutdown()
# configure logging
# log a message (log file will be recreated)
In other words, call logging.shutdown() at the end of your code and then you can re-run it within the same interpreter and it will work as expected (re-creating the log file on every run).
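A minimal self-contained sketch of that sequence (detaching the old handlers is my addition, so that basicConfig() is willing to configure the root logger again in the same interpreter):
import logging
import os

# configure logging and log a message
logging.basicConfig(filename="testlogging.log", filemode="w", level=logging.DEBUG)
logging.info("first run")

# Close the open log file and detach the old handlers so the file can be
# deleted and basicConfig() will reconfigure the root logger on the next call.
logging.shutdown()
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)

os.remove("testlogging.log")  # delete log file

# configure logging again; the log file is recreated
logging.basicConfig(filename="testlogging.log", filemode="w", level=logging.DEBUG)
logging.info("second run")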
You opened the log file in "w" mode. "w" means write from the beginning, so you only see the log of the last execution.
That is why you see the same contents in the log file.
You should change the fifth to seventh lines of your code as follows:
filename_logging = os.path.join(os.path.dirname(__file__), "testlogging.log")
logging.basicConfig(filename=filename_logging, filemode='a',
                    level=logging.DEBUG)
The code above uses "a" (append) mode, so new log data will be added at the end of the log file.
I am currently playing with embedding Pig in Python, and whenever I run the file it works, but it clogs up the command line with output like the following:
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/hue-plugins-2.3.0-cdh4.3.0.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/paranamer-2.3.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/avro-1.7.4.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/slf4j-api-1.6.1.jar'
*sys-package-mgr*: processing new jar, '/usr/lib/hadoop/lib/commons-configuration-1.6.jar'
Command line input:
pig embedded_pig_testing.py -p /home/cloudera/Documents/python_testing_config_files/test002_config.cfg
the parameter passed is a file that contains a bunch of variables I am using in the test.
Is there any way to get the script to not log these actions to the command line?
Logging in Java programs/libraries is usually configured by means of a configuration or .properties file. I'm sure there's one for Pig. Something that might be what you're looking for is http://svn.apache.org/repos/asf/pig/trunk/conf/pig.properties.
EDIT: looks like this is specific to Jython.
I have not been able to determine if it's possible at all to disable this, but unless I could find something cleaner, I'd consider simply redirecting sys.stderr (or sys.stdout) during the .jar loading phase:
import os
import sys

old_stdout, old_stderr = sys.stdout, sys.stderr
sys.stdout = sys.stderr = open(os.devnull, 'w')
try:
    do_init()  # load your .jar's here
finally:
    # restore the original streams even if loading fails
    sys.stdout, sys.stderr = old_stdout, old_stderr
This logging comes from jython scanning your Java packages to build a cache for later use: https://wiki.python.org/jython/PackageScanning
So long as your script only uses full class imports (no import x.y.* wildcards), you can disable package scanning via the python.cachedir.skip property:
pig ... -Dpython.cachedir.skip=true ...
Frustratingly, I believe jython writes these messages to stdout instead of stderr, so piping stderr elsewhere won't help you out.
Another option is to use streaming Python instead of Jython once Pig 0.12 ships. See PIG-2417 for more details on that.