I'm using Paramiko in Python to run commands on a box through SSH. How do I use Paramiko's logging? I mean, how do I force it to write logs (to a file or the terminal) and set the log level?
Paramiko names its loggers, so simply:
import logging
import paramiko
logging.basicConfig()
logging.getLogger("paramiko").setLevel(logging.WARNING) # for example
See the logging cookbook for some more examples.
You can also use log_to_file from paramiko.util to log directly to a file.
paramiko.util.log_to_file("<log_file_path>", level="WARN")
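For context, here is a minimal sketch of how the logging setup fits into a typical Paramiko session (the host, credentials, and log file name are placeholders, not from the original question):

import paramiko

# Route paramiko's logger to a file; the path and level here are examples.
paramiko.util.log_to_file("paramiko.log", level="DEBUG")

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("my_server", username="me", password="secret")  # placeholder credentials

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()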
What I'm trying to do seems really simple, but I can't find a way to do it. I'm trying to use the pipreqs module in a script, but I have to use subprocess.call() because pipreqs doesn't provide a way to call it from a script. pipreqs uses logging to log info, and I don't want that; it also doesn't have a quiet mode. I tried logger.setLevel(logging.WARNING), but since I'm calling it through subprocess.call() it still prints info. I've also tried importing pipreqs and setting the logging level to WARNING, and that doesn't work either. Is there any way to disable this output? My code right now is the following:
import subprocess
import logging
import pipreqs
logger = logging.getLogger("pipreqs")
logger.setLevel(logging.WARNING)
subprocess.call(["pipreqs", "--force","/path/to/dir"])
You won't have access to the logger for an external process. The subprocess module does have flags for disabling output though.
subprocess.call(
    ["pipreqs", "--force", "/path/to/dir"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
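Note that subprocess.DEVNULL was added in Python 3.3; on older versions you can open os.devnull yourself, along these lines:

import os
import subprocess

# Equivalent to stdout/stderr=subprocess.DEVNULL on Python < 3.3
with open(os.devnull, "wb") as devnull:
    subprocess.call(["pipreqs", "--force", "/path/to/dir"],
                    stdout=devnull, stderr=devnull)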
I'm new at an IT company and very few people here know Python, so I can't ask them for help.
The problem: I need to create a Python script that connects via SSH from my VM to my client's server; once I have access, the script needs to find a log file and search it for certain data.
I tested my script on my Windows machine with a copy of that file, and it found everything I need. However, I don't know how to make the connection via SSH.
I tried something like this, but I don't really know where to start:
from subprocess import Popen, PIPE
import sys
ssh = subprocess.check_output(['ssh', 'my_server', 'password'], shell = True)
ssh.stdin.write("cd /path/")
ssh.stdin.write("cat file | grep err|error")
This generates the error "name 'subprocess' is not defined".
I don't understand how to use subprocess, nor how to begin developing the solution.
Note: I can't use Paramiko because I don't have permission to install packages via pip or download the package manually.
You didn't import subprocess itself, so you can't refer to it.
check_output simply runs a process and waits for it to finish, so you can't use it to interact with a running process. But there is nothing interactive here, so let's use it after all.
The first argument to subprocess.Popen() and friends is either a string for the shell to parse, with shell=True; or a list of tokens passed directly to exec with no shell involved. (On some platforms, passing a list of tokens with shell=True actually happens to work, but this is coincidental, and could change in a future version of Python.)
ssh myhost password will try to run the command password on myhost, so that's not what you want. Probably you should simply set up passwordless SSH in the first place.
... But you can use this syntax to run the commands in one go; just pass the shell commands to ssh as a string.
from subprocess import check_output

# Fix the quoting, the useless use of cat, and the pointless cd;
# grep needs -E for the | alternation to work.
result = check_output(['ssh', 'my_server',
                       "grep -E 'err|error' /path/file"])
I am using MySQLdb in Python, and I've found that MySQLdb prints some log output to my console, mixed in with the logs I print myself. Could I provide a logging.conf file that specifies a logger for MySQLdb, and then control where its output goes and at which level?
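For reference, the general technique with the standard logging module looks like this; a minimal sketch using dictConfig, assuming (hypothetically) that the library logs under a logger named "MySQLdb":

import logging.config

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "mysql_file": {
            "class": "logging.FileHandler",
            "filename": "mysql.log",  # example path
        },
    },
    "loggers": {
        # "MySQLdb" is an assumed logger name; check what the library actually uses
        "MySQLdb": {
            "level": "WARNING",
            "handlers": ["mysql_file"],
            "propagate": False,
        },
    },
})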
I know this has been discussed here before, but I haven't found a solution that will work for me. I already have a Python script that I wrote, and I currently have it run at boot. What I would like to do is log all output, which includes any print statements to the console and any error messages that come up. I do like the logging module, and would prefer to use that over looking at all the output on the console. Any suggestions?
If you manage your script with supervisor, it will automatically handle all logging of stdout/stderr for you.
Additionally, it can automatically restart your script if it crashes.
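If you'd rather keep everything inside the logging module, as the question suggests, a common pattern is to redirect sys.stdout and sys.stderr to loggers. A minimal sketch (the log file path is an example):

import logging
import sys

logging.basicConfig(filename="myscript.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

class StreamToLogger:
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger, level):
        self.logger = logger
        self.level = level

    def write(self, message):
        # Writes can arrive in fragments; log each non-empty line
        for line in message.rstrip().splitlines():
            self.logger.log(self.level, line.rstrip())

    def flush(self):
        pass  # nothing to flush; the handler does the real I/O

sys.stdout = StreamToLogger(logging.getLogger("STDOUT"), logging.INFO)
sys.stderr = StreamToLogger(logging.getLogger("STDERR"), logging.ERROR)

print("this print statement ends up in the log file")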
We have recently switched to py.test for Python testing (which is fantastic, btw). However, I'm trying to figure out how to control the log output (i.e. the built-in Python logging module). We have pytest-capturelog installed, and it works as expected: when we want to see logs we can pass the --nologcapture option.
However, how do you control the logging level (e.g. info, debug, etc.) and also filter the logging (if you're only interested in a specific module)? Are there existing plugins for py.test to achieve this, or do we need to roll our own?
Thanks,
Jonny
Installing and using the pytest-capturelog plugin could satisfy most of your pytest/logging needs. If something is missing you should be able to implement it relatively easily.
As Holger said, you can use pytest-capturelog:
import logging

def test_foo(caplog):
    caplog.setLevel(logging.INFO)
    pass
If you don't want to use pytest-capturelog, you can use a stdout StreamHandler in your logging config so pytest will capture the log output. Here is an example basicConfig:
import logging
import sys

logging.basicConfig(level=logging.DEBUG, stream=sys.stdout)
A bit of a late contribution, but I can recommend pytest-logging for a simple drop-in log-capture solution. After pip install pytest-logging you can control the verbosity of your logs (displayed on screen) with:
$ py.test -s -v tests/your_test.py
$ py.test -s -vv tests/your_test.py
$ py.test -s -vvvv tests/your_test.py
etc. NB: the -s flag is important; without it, py.test will filter out all the sys.stderr information.
Pytest now has native support for logging control via the caplog fixture; no need for plugins.
You can specify the logging level for a particular logger or by default for the root logger:
import logging

def test_bar(caplog):
    caplog.set_level(logging.CRITICAL, logger='root.baz')
Pytest also captures log output in caplog.records, so you can assert on logged levels and messages. For further information, see the official pytest documentation on logging.
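A short sketch of asserting on captured records (the logger name "myapp" and the message are made up for illustration):

import logging

def test_warning_is_logged(caplog):
    with caplog.at_level(logging.INFO):
        logging.getLogger("myapp").warning("disk almost full")
    # caplog.records holds the captured LogRecord objects
    assert any(rec.levelno == logging.WARNING for rec in caplog.records)
    assert "disk almost full" in caplog.text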
A bit of an even later contribution: you can try pytest-logger. The novelty of this plugin is logging to the filesystem: pytest provides a nodeid for each test item, which can be used to organize the test session's log directory (with the help of pytest's tmpdir facility and its testcase begin/end hooks).
You can configure multiple handlers (with levels) for the terminal and the filesystem separately, and provide your own command-line options for filtering loggers/levels to make it work for your specific test environment. E.g., by default you can log everything to the filesystem and only a small fraction to the terminal, which can be changed on a per-session basis with the --log option if needed. The plugin does nothing by default if the user defines no hooks.