How can one view the Google App Engine logs outside the Admin console?
I'm developing locally with dev_appserver.py and the Admin Console, and would like to see the logs as the records are emitted.
I'd like to monitor the logging output in a console with standard Unix tools (e.g. less, grep), but there doesn't seem to be an option to redirect the logging from the dev_appserver.py command. I can't open a new file in GAE (e.g. with a FileHandler), so file handlers won't work, and I think a socket/UDP handler would be overkill (if it's even possible).
I'm hopeful there are other options to view the log.
Thanks for reading.
The default logger sends logging output to stderr. Use your shell's method of redirecting stderr to a file. In tcsh: (dev_appserver.py > /dev/tty) >& your_logfile.txt; the syntax varies by shell.
You can also use Python's logging module to send the logger's output directly to a file when you detect that the app is running locally (os.environ['SERVER_SOFTWARE'].startswith('Dev')).
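A minimal sketch of that idea; the helper name and log path are my own illustrations, and the SERVER_SOFTWARE check is the one described above:

```python
import logging
import os

def attach_dev_file_handler(path):
    """Attach a FileHandler to the root logger, but only when running
    under the local dev server (SERVER_SOFTWARE starts with 'Dev')."""
    if not os.environ.get('SERVER_SOFTWARE', '').startswith('Dev'):
        return False
    handler = logging.FileHandler(path)
    handler.setFormatter(
        logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    logging.getLogger().addHandler(handler)
    return True
```

You can then follow the file with tail -f, or search it with less and grep as usual.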
You can also download the logs with the request_logs action of appcfg.py:
http://code.google.com/appengine/docs/python/tools/uploadinganapp.html#Downloading_Logs
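As a sketch (the app directory and output filename are placeholders; check appcfg.py request_logs --help for the exact flags in your SDK version):

```shell
# Download the last day's request logs for the app in myapp/ into mylogs.txt
appcfg.py request_logs --num_days=1 myapp/ mylogs.txt
```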
Edit:
This person came up with a way to send logs over XMPP. His solution is for GAE Java, but this could be adapted to python.
http://www.professionalintellectualdevelopment.com/
http://code.google.com/p/gae-xmpp-logger/
In summary: I want to be able to interact with a non-logged-in Steam client and log it in, without relying on a Windows-only interaction. I don't mean SteamCLI, or logging into Steam with a Python Steam library; I mean directly interacting with the Steam client in some way and actually logging it in.
When using SteamCLI and other modules, I've noticed they log you into their own session rather than the client session you get by logging into Steam yourself.
For example:

from steam.client import SteamClient

x = SteamClient()
x.login(username="username", password="password")  # placeholder credentials

doesn't actually log you in to the desktop client, since SteamClient is its own client.
I need this because I have made a script that can connect me to servers and figure out who's on it, and it relies on you being logged in to said client.
Are there any modules/libraries that will allow me to do this? And if pywinauto is the one I should use, are there any guides you know of for interacting with the application this way, even on Linux?
So here is what I ended up doing:
First you get a dummy display. I used this https://askubuntu.com/questions/453109/add-fake-display-when-no-monitor-is-plugged-in
After that you create the config (https://askubuntu.com/a/463000) and start up the display.
Next I created a systemd unit to launch Steam from the command line:
[Unit]
Description=example systemd service unit file.
[Service]
User=root
Environment=XAUTHORITY=/var/run/lightdm/root/:0
Environment=DISPLAY=:0
ExecStart=/bin/bash /root/start.sh
[Install]
WantedBy=multi-user.target
(You probably shouldn't be using root for this, but it's the easiest way I found to always have permissions for the XAUTHORITY file, so you can log in to the display.)
Here is what /root/start.sh looks like
#!/bin/sh
export DISPLAY=:0
export XAUTHORITY=/var/run/lightdm/root/:0
export PATH="$PATH:/usr/games"
steam -login username_goes_here password_goes_here steam_guard_code
Note that you may not need to export DISPLAY and XAUTHORITY twice (once in the unit and once in the script); I haven't tested it.
This should just work™: it will log in, but it will ask for a Steam Guard code, so what you should do is attempt to start the service, receive the code through email, then append it after the password.
As for doing this in Python, as I said originally, you can probably use it to call start.sh, passing the Steam Guard code as a command-line argument. You need the argument because I haven't found a way to make Steam trust the computer, so it may be worth using Python's imaplib to read the codes sent to the email address which owns the account.
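For the email-parsing half of that, a rough sketch; the IMAP fetch itself is omitted, and the 5-character uppercase format of the code is an assumption:

```python
import re

def extract_guard_code(body):
    """Pull a Steam Guard-style code out of an email body.
    Assumes a 5-character run of capitals/digits; returns None if absent."""
    match = re.search(r'\b([A-Z0-9]{5})\b', body)
    return match.group(1) if match else None
```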
Theoretical Section
Please note this whole section is untested, but intended to answer my original question.
Here's what a Python file might look like (very roughly):
import subprocess

def start_steam(code=""):
    # Assuming start.sh reads the code from the command line (my example didn't)
    proc = subprocess.Popen(["/bin/bash", "start.sh", code])
    # This function would interact with an IMAP server to retrieve the email
    # Steam support sends with the code; if none arrives within the timeout,
    # it would return an empty string.
    steam_guard_code = do_something_to_find_steam_guard_code_email(timeout=30)
    if steam_guard_code:
        proc.terminate()
        start_steam(code=steam_guard_code)
    else:
        # proc should keep running in the background until restart, so this
        # script should also run on startup, or until no Steam PID is detected
        with open("/var/run/steam.pid", "w") as f:
            f.write(str(proc.pid))
        exit()
This Python script would have to be run by systemd as a forking service, and PIDFile would have to be specified in the unit (I think) so that it can be monitored.
Edit: this solution has become even more theoretical, since it is not possible to pass the Steam Guard code to steam -login username password. You would need to disable Steam Guard.
I've read the documentation: http://www.tornadoweb.org/en/stable/log.html
But I still don't know how to make a suitable log for my server, which is built with tornado.
For now, I need such a log system:
It should log everything with a timestamp, and create a new log file each day.
It seems that TimedRotatingFileHandler is what I need, but I don't know how to use it with Tornado.
The Tornado logging streams are just standard loggers from Python's logging module.
There is a nice tutorial on the Python website: https://docs.python.org/3/howto/logging.html#advanced-logging-tutorial
As for how to set the handler (same tutorial):
https://docs.python.org/3/howto/logging.html#handlers
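A minimal sketch putting the two together (the file name and format are assumptions): Tornado logs through the standard loggers tornado.access, tornado.application and tornado.general, so attaching a TimedRotatingFileHandler to the root logger catches all of them and starts a new file each day:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Roll over at midnight and keep a week of old files.
handler = TimedRotatingFileHandler('server.log', when='midnight', backupCount=7)
handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)
```

Do this before starting the IOLoop so the handler is in place when the first request is logged.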
The business case...
The app server (Ubuntu/nginx/postgresql/python) that I use writes gzipped system log files as root to /var/log
I need to present data from these log files to users' browsers
My approach
I need to do a fair bit of searching and string manipulation server-side, so I have a Python script that handles opening and processing the files and returns a nicely formatted JSON result set. The Python (CGI) script is then called via Ajax from the web page.
My problem
The script works perfectly when called from the command line as SU, but (obviously) the file-opening method I'm using (gzip.open(filename)) fails when invoked as user www-data by the web server.
Other useful info
The app server concerned is (contractually rather than physically) a bit of a black box. I have SU access, I can write scripts, I can read anything, but I can't change file permissions, add additional Python libs, or mess with the config.
The subset of users who would use this log extract also have the SU password, so they could be presented with a login dialog whose credentials I could pass to the script.
Given the restrictions I have, how would you go about it?
One option would be to do this somewhat sensitive "su" work in a background process that is disconnected from the web.
Likely run via cron, this script would take the root-owned log files and either convert them to a format the web-side code can deal with easily (such as loading them into a database), or merely unzip them and place them in a different location with slightly more relaxed permissions.
Then the web-side code could easily have access to the data without having to jump through the "su" hoops.
From my perspective this plan does not seem to violate your contractual rules. The web server config, permissions, etc remain intact.
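A sketch of such a cron job in Python; all paths are placeholders, and the destination must be somewhere www-data can read:

```python
import glob
import gzip
import os
import shutil

def export_logs(src='/var/log', dest='/srv/log-extract'):
    """Decompress root-owned *.gz logs into a directory the web code can read."""
    os.makedirs(dest, exist_ok=True)
    for path in glob.glob(os.path.join(src, '*.gz')):
        out = os.path.join(dest, os.path.basename(path)[:-3])  # strip ".gz"
        with gzip.open(path, 'rb') as fin, open(out, 'wb') as fout:
            shutil.copyfileobj(fin, fout)
        os.chmod(out, 0o644)  # readable by the web server user
```

Run it from root's crontab on whatever schedule matches how often the logs rotate.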
My two cents: you should give paramiko a try; it allows you to access a host (even "localhost") through SSH:
import paramiko

ssh = paramiko.SSHClient()
# Without a host-key policy, connect() rejects hosts not in known_hosts
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='jesse', password='lol')
Since you have the opportunity to ask for a login/password, those would be the ones provided by the user querying the log. Accessing the files is then just a matter of reading a file over SSH, and you can close the connection as soon as you have finished that "sensitive" work.
We are realizing that we need error logs and access logs for our service processes. These are long-running, service-like processes; they respond to calls made to them.
Hence, we need your help to simply achieve the following:
# for developers
from MyLogger import log
log.error('something bad something wrong')
log.access('something something')
I am thinking of designing this MyLogger which will simply redirect an error to stderr and access to stdout, so that I can collect errors to a specific file through configuration for both stderr and stdout.
One more point: these services are nothing but web.py instances.
I guess I'm not looking for logging controlled at various levels (warn, debug, error, info, etc.). My aim is to have an error log and an access log similar to the Apache web server's, so my developers should not need to bother with calls like:

log.warn(msg)
log.debug(msg)
This is not required.
I just want to have an error log and an access log similar to that of a web server or service.
That "battery" is already included: Python ships with the standard logging module.
"print" only works in the development server.
But what if I want the code to run under Apache? In case I forget to comment a print out, I want it to proceed smoothly without causing errors (just print to nothing).
As for a quick print, you can just use:
print >>sys.stderr, 'log msg'
-- then it lands in error.log, of course.
See Graham Dumpleton's post:
WSGI and printing to standard output
What you're proposing is a Bad Idea, but if you insist on doing it anyways, check out the mod_wsgi configuration directives:
WSGIRestrictStdout
Description: Enable restrictions on use of STDOUT.
Syntax: WSGIRestrictStdout On|Off
Default: WSGIRestrictStdout On
Context: server config
Module: mod_wsgi.c
A well behaved Python WSGI application should never attempt to write any data directly to sys.stdout or use the print statement without directing it to an alternate file object. This is because ways of hosting WSGI applications such as CGI use standard output as the mechanism for sending the content of a response back to the web server. If a WSGI application were to directly write to sys.stdout it could interfere with the operation of the WSGI adapter and result in corruption of the output stream.
In the interests of promoting portability of WSGI applications, mod_wsgi restricts access to sys.stdout and will raise an exception if an attempt is made to use sys.stdout explicitly.
The only time that one might want to remove this restriction is purely out of convenience of being able to use the print statement during debugging of an application, or if some third party module or WSGI application was erroneously using print when it shouldn't. If restrictions on using sys.stdout are removed, any data written to it will instead be sent through to sys.stderr and will appear in the Apache error log file.
If you want print-style messages in the Apache error log, you can write to sys.stderr:
import sys
sys.stderr.write('log msg\n')
It will then appear in the Apache error log file.