pywikipedia logging in?

For various reasons I can't use login.py to log me in, so I was wondering if anyone knew of a way to log in to Wikipedia from within my script, without running a separate script?
Cheers!

The answer is going to be simple: you can't use pywikipedia without being able to run login.py.
That file not only provides a nice user interface for testing your configuration: it contains all the authentication primitives that we use in the framework to log in. Without logging in, you can't do much, so no.
If you want a more helpful answer, you'll have to be more precise: why can't you use login.py, and what operations do you need to perform with Pywikipedia?
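If you're on the newer Pywikibot framework (the one the ~/.pywikibot path mentioned below belongs to), those same primitives can be driven directly from your own script. A minimal sketch, assuming user-config.py already names your account for this wiki:

import pywikibot

site = pywikibot.Site('en', 'wikipedia')
site.login()  # drives the same authentication machinery that login.py wraps
print(site.user())  # prints your username once logged in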

One alternative that worked for me, when I wasn't able to use my remote server interactively (and thus couldn't enter my password), was to copy my credentials to the remote server.
By default your login cookies are stored in ~/.pywikibot/pywikibot.lwp. It has worked for me in the past to log in locally, copy this .lwp file to the remote server, and then I no longer had to enter my password there.
I don't claim this method is secure at all, but it is a workable hack.

Related

Is there a way to send ISPF commands and get job statistics using a Python script on a mainframe?

I'm trying to automate some manual tasks on the mainframe using a Python script, and to do that I need job status. I know there is an FTP library that can log in to the mainframe, but I'm not able to send commands and get job statistics with it. Please suggest any relevant documentation.
Thanks in advance!
Not sure exactly what you're after concerning "job statistics", but there is a set of APIs provided by z/OSMF that can be invoked from any REST requester, and a jobs interface is included. Docs on these APIs are found here: https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.izua700/IZUHPINFO_RESTServices.htm
z/OSMF must be installed on your z/OS system before using this; it's not always there. Your systems programmer should know whether it's up, running, and usable, and whether you would have authority to use those services.
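As a hedged sketch of what calling that jobs interface from Python might look like (the host name, credentials, and job filters are placeholders, and your site may additionally require client certificates):

import requests

resp = requests.get(
    'https://zosmf.example.com/zosmf/restjobs/jobs',
    params={'owner': 'MYUSER', 'prefix': 'MYJOB*'},
    headers={'X-CSRF-ZOSMF-HEADER': ''},  # z/OSMF rejects REST calls without this header
    auth=('MYUSER', 'secret'),
    verify=False,  # only while testing against a self-signed certificate
)
resp.raise_for_status()
for job in resp.json():  # one dict per job
    print(job['jobname'], job['jobid'], job['status'])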

Implementing subscriber Pubsubhubbub in python

I have a bunch of Google Alerts set up as RSS feeds that update in real time. What I want is to store the new data the RSS feeds send out in a database.
After looking around I found that Google and Superfeedr both offer hubs that do most of the work for you; however, they both require a callback URL (obviously). I do have an Apache server running on the machine I'm working from, and it already has Python enabled so I can run Python scripts on it. However, at the moment it's only accessible from within my LAN.
My real question is: what do I do next? I know that in PHP you would just have a callback file that handles requests, but I'm lost as to what to do in Python. Would I write a script and give the Google/Superfeedr services a URL to that script? What would be in the script? Any specific imports needed?
Also, I just read that if you use XMPP you don't need a callback URL. How does that work?
For the local LAN problem, the most commonly used solution is a tunneling service like Passageway, which will temporarily expose a local port of your machine to the outside web.
Now, as for implementation, it's fairly easy to set things up. Python is similar to PHP in the sense that you'll have to write a script that listens for network connections and then handles the HTTP requests you get from Superfeedr or Google. (It looks like you're not familiar with Python; why not stick to PHP then?)
Finally, XMPP is a feature that only we (Superfeedr) offer. It solves the problem of exposing local ports because it works from behind the firewall.
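To make the callback side concrete, here is a minimal sketch using only the standard library; the port and URL path are arbitrary choices, and a real subscriber would also verify hub signatures and persist the payload properly:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class HubCallback(BaseHTTPRequestHandler):
    def do_GET(self):
        # subscription verification: the hub sends hub.challenge and we
        # must echo it back to confirm we really requested the subscription
        query = parse_qs(urlparse(self.path).query)
        challenge = query.get('hub.challenge', [''])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(challenge.encode())

    def do_POST(self):
        # content notification: the body is the new feed fragment;
        # this is where you would write it to your database
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        print(body[:200])
        self.send_response(204)
        self.end_headers()

HTTPServer(('', 8080), HubCallback).serve_forever()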

Running web.py as a service on linux

I've used web.py to create a web service that returns results in JSON.
I run it on my local box as python scriptname.py 8888
However, I now want to run it on a linux box.
How can I run it as a service on the linux box?
Update
After the answers, it seems I didn't ask the question quite right. I am aware of the deployment process, frameworks, and the web server. Maybe the following back story will help:
I had a small Python script that takes a file as input and, based on some logic, splits that file up. I wanted to use this script with a web front end I already have in place (Grails). I wanted to call it from the Grails application, but did not want to do so by executing a command line, so I wrapped the Python script as a web service which takes two parameters and returns, in JSON, the number of split files. This web service will ONLY be used by my Grails front end and nothing else.
So, I simply wish to run this little web.py service so that it can respond to my Grails front end.
Please correct me if I'm wrong, but would I still need nginx and the like after the above? This script sounds trivial, but eventually I will be adding more logic to it, so I wanted it as a web service which can be consumed by a web front end.
In general, there are two parts to this.
The "remote and event-based" part: a service used remotely over a network needs a certain set of skills. It must be able to accept (multiple) connections, read requests, process them, reply, speak at least basic TCP/HTTP, and handle dead connections, and if it serves more than a small private LAN, it needs to be robust (think DoS) and maybe also perform some kind of authentication.
If your script is willing to take care of all of this, then it's ready to open its own port and listen. I'm not sure if web.py provides all of these facilities.
Then there's the other part, "daemonization", when you want to run the server unattended: starting at boot, running under the right user, not blocking your parent (ssh, init script, or whatever), not keeping TTYs open, but maybe logging somewhere...
Servers like nginx and Apache are built for this, and provide interfaces like mod_python or WSGI, so that much simpler applications can hand off as much of the above as possible.
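For example, here is a hedged sketch of that hand-off: web.py apps can expose a WSGI callable via wsgifunc(), which nginx+uWSGI or Apache mod_wsgi can host (the /split URL and its parameters are illustrative):

import json
import web

urls = ('/split', 'Split')

class Split:
    def GET(self):
        params = web.input(path='', parts='2')
        web.header('Content-Type', 'application/json')
        # placeholder for the real file-splitting logic
        return json.dumps({'files': int(params.parts)})

app = web.application(urls, globals())
application = app.wsgifunc()  # the WSGI entry point the server imports

if __name__ == '__main__':
    app.run()  # still runnable standalone: python scriptname.py 8888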
So the answer would be: yes, you still need nginx or the like, unless:
you can implement all of that yourself in Python, or
you are using the script on localhost only and are willing to accept some risk of instability.
In those cases you can probably manage on your own.
try this:
nohup python scriptname.py 8888 >/dev/null 2>&1 &
it will keep running after you log out. Note that 2>/dev/null by itself only hides stderr; it's nohup plus the trailing & that detaches the process. This is still not a real daemon (nothing restarts it after a crash or reboot), so treat it as a quick hack.

Open root owned system files for reading with python

The business case...
The app server (Ubuntu/nginx/postgresql/python) that I use writes gzipped system log files as root to /var/log
I need to present data from these log files to users' browsers
My approach
I need to do a fair bit of searching and string manipulation server-side, so I have a Python script that deals with the opening and processing and then returns a nicely formatted JSON result set. The Python (CGI) script is then called via Ajax from the web page.
My problem
The script works perfectly when called from the command line as root, but (...obviously) the file-opening method I'm using, gzip.open(filename), fails when invoked as user www-data by the web server.
Other useful info
The app server concerned is (contractually rather than physically) a bit of a black box: I have su access, I can write scripts, and I can read anything, but I can't change file permissions, add additional Python libs, or mess with the config.
The subset of users who would use this log extract also have the su password, so they could be presented with a login dialog whose credentials I pass on to the script.
Given the restrictions I have, how would you go about it?
One option would be to do this somewhat sensitive "su" work in a background process that is disconnected from the web.
Likely run via cron, this script would take the root-owned log files and either convert them to a format the web-side code can deal with easily (such as loading them into a database), or merely unzip them and place them in a different location with slightly more relaxed permissions.
Then the web-side code could easily access the data without having to jump through the "su" hoops.
From my perspective this plan does not violate your contractual rules: the web server config, permissions, etc. remain intact.
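A hedged sketch of such a cron script (all paths are illustrative, and the destination directory is assumed to be readable by www-data):

import glob
import gzip
import os
import shutil

SRC = '/var/log'
DST = '/var/www/log-extracts'  # hypothetical www-readable drop zone

for path in glob.glob(os.path.join(SRC, '*.gz')):
    out = os.path.join(DST, os.path.basename(path)[:-3])  # strip ".gz"
    with gzip.open(path, 'rb') as fin, open(out, 'wb') as fout:
        shutil.copyfileobj(fin, fout)
    os.chmod(out, 0o644)  # world-readable so the CGI script can open it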
My two cents: you should give paramiko a try; it allows you to access a host (even "localhost") through SSH:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # don't reject the unknown local host key
ssh.connect('127.0.0.1', username='jesse', password='lol')
As you have the opportunity to ask for a login/password, those would be the ones provided by the user querying the log. Accessing the files is then just a matter of reading a file over SSH, and you can close the connection as soon as you have finished that "sensitive" work.
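Continuing that sketch, reading one of the gzipped logs over SFTP might look like this (the path is illustrative, and the connecting user must be allowed to read it):

import gzip

sftp = ssh.open_sftp()
with sftp.open('/var/log/mylog.gz', 'rb') as remote:
    # gzip can decompress straight from the remote file-like object
    with gzip.GzipFile(fileobj=remote) as log:
        for line in log:
            print(line.decode().rstrip())
ssh.close()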

Python - How to use Conch to create a Virtual SSH server

I'm looking at creating a server in Python that I can run and that will work as an SSH server. This will let different users log in and act as if they'd logged in normally, but with access to only one command.
I want this so that I can have a system where users can be added without creating a system-wide account, so that they can then, for example, commit to a VCS branch, or similar.
While I can work out how to use Conch to get to a "custom" shell, I can't figure out how to make the SSH stream behave as if it were a real one (preferably I want to limit it to /bin/bzr so that bzr+ssh will work).
It needs to be in Python (which I can get to do the authorisation), but I don't know how to do the linking to the app.
This needs to be in Python to work within the app it's designed for, and to be usable by those without access to add new users.
When you write a Conch server, you can control what happens when the client makes a shell request by implementing ISession.openShell. The Conch server will request IConchUser from your realm and then adapt the resulting avatar to ISession to call openShell on it if necessary.
ISession.openShell's job is to take the transport object passed to it and associate it with a protocol to interpret the bytes received from it and, if desired, to write bytes to it to be sent to the client.
In an unfortunate twist, the object passed to openShell which represents the transport is actually an IProcessProtocol provider. This means that you need to call makeConnection on it, passing an IProcessTransport provider. When data is received from the client, the IProcessProtocol will call writeToChild on the transport you pass to makeConnection. When you want to send data to the client, you should call childDataReceived on it.
To see the exact behavior, I suggest reading the implementation of the IProcessProtocol that is passed in. Don't depend on anything that's not part of IProcessProtocol, but seeing the implementation can make it easier to understand what's going on.
You may also want to look at the implementation of the normal shell-creation to get a sense of what you're aiming for. This will give you a clue about how to associate the stdio of the bzr child process you launch with the SSH channel.
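Putting those pieces together, a hedged sketch of the ISession part might look like this (untested; the bzr path and serve flags are illustrative, and the usual Conch realm/portal plumbing is assumed):

from twisted.conch.interfaces import ISession
from twisted.internet import reactor
from zope.interface import implementer

@implementer(ISession)
class BzrOnlySession(object):
    def __init__(self, avatar):
        self.avatar = avatar

    def getPty(self, term, windowSize, modes):
        pass  # bzr+ssh does not need a real PTY

    def openShell(self, proto):
        # 'proto' is the IProcessProtocol Conch hands us; spawnProcess
        # wires the child's stdio to the SSH channel through it
        reactor.spawnProcess(proto, '/usr/bin/bzr', ['bzr'])

    def execCommand(self, proto, cmd):
        # bzr+ssh arrives as an exec request; ignore the client-supplied
        # cmd and always run our fixed, restricted command instead
        reactor.spawnProcess(proto, '/usr/bin/bzr',
                             ['bzr', 'serve', '--inner',
                              '--directory=/srv/bzr', '--allow-writes'])

    def windowChanged(self, newWindowSize):
        pass

    def eofReceived(self):
        pass

    def closed(self):
        pass

You would then register this adapter for your avatar class, e.g. with twisted.python.components.registerAdapter(BzrOnlySession, YourAvatar, ISession), so Conch finds it when a session is requested.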
While Python really is my favorite language, I don't think you need to create your own server for this. If you look at the OpenSSH manual page for sshd, you'll find the command= option for the authorized_keys file, which lets you define a specific command to run on login, e.g. command="/usr/bin/bzr" ssh-rsa AAAA... user.
Using keys, you can let many users log in with one system account: just put their public keys, each prefixed with the forced command, in that account's authorized_keys file.
We are using this to create SSH tunnels for SVN and it works just great.
