Internet Explorer URL blocking with Python?

I need to be able to block the urls that are stored in a text file on the hard disk using Python. If the url the user tries to visit is in the file, it redirects them to another page instead. How is this done?

There are several proxies written in Python: you can pick one of them and modify it so that it proxies most URLs normally but redirects those in your text file. You'll also need to set IE to use that proxy, of course.

Doing this at the machine level is a weak solution; it would be pretty easy for a technically inclined user to bypass.
Even with a server-side proxy it will be very easy to bypass unless you firewall normal HTTP traffic; at a bare minimum, block ports 80 and 443.
You could program a proxy in Python as Alex suggested, but this is a pretty common problem and there are plenty of off-the-shelf solutions.
That being said, I think that restricting web access will do nothing but aggravate your users.
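To make the proxy approach concrete, here is a minimal sketch of a blocking proxy in Python 3. The blocklist file name and the redirect landing page are assumptions for illustration, and this only handles plain GET requests, not HTTPS CONNECT:

```python
# Minimal blocking-proxy sketch: URLs listed one per line in blocked.txt
# get a 302 redirect to REDIRECT_URL; everything else is fetched upstream
# and passed through to the client.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

REDIRECT_URL = "http://example.com/blocked.html"  # hypothetical landing page

def load_blocklist(path="blocked.txt"):
    """Read blocked URLs from a text file, one per line."""
    try:
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # For a proxy request, self.path is the absolute URL the browser asked for.
        if self.path in load_blocklist():
            self.send_response(302)
            self.send_header("Location", REDIRECT_URL)
            self.end_headers()
            return
        with urlopen(self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type",
                         resp.headers.get("Content-Type", "text/html"))
        self.end_headers()
        self.wfile.write(body)

# To run, then point IE's proxy settings at 127.0.0.1:8080:
# HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

The blocklist is re-read on every request so edits to the file take effect immediately; cache it if that becomes a bottleneck.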

Related

Python Proxy Firefox

I am trying to make a "proxy" in Python that allows the user to route all of their web traffic through a host machine (this is mainly for me and a couple of people I know to confuse/avoid hackers and/or spies, who would only see web pages and so on coming in through one IP). I have run into several difficulties. The first is that I would like to be able to use the final, compiled product with Firefox, which can be set to route all of its traffic through an installed proxy program; I don't know what kind of configuration my proxy needs to do this. Second, the way the proxy works is by using urllib.request.urlretrieve (yes, it may go away eventually, but I like it) to download a webpage onto the host computer (inefficient and slow, but it will only ever serve 7-10 clients at most) and then sending the file to the client. However, this results in things like missing pictures or broken submission forms. What should I be using to get the webpages right? I want things like SSL and video streaming to work, as well as pictures and whatnot.
(wince) this sounds like "security through obscurity", and a lot of unnecessary work.
Just set up an ssh tunnel and proxy your web traffic through that.
See http://www.linuxjournal.com/content/use-ssh-create-http-proxy
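For reference, the SSH approach the link describes needs no Python at all, just OpenSSH's dynamic port forwarding; `user@remote-host` below is a placeholder for whatever server you have shell access to:

```shell
# Open a dynamic (SOCKS) proxy on local port 8080, tunneled through
# the SSH server; -N means "no remote command, just forward traffic".
ssh -D 8080 -N user@remote-host

# Then in Firefox:
#   Settings -> Network Settings -> Manual proxy configuration
#   SOCKS Host: localhost, Port: 8080, SOCKS v5
```

All the clients' traffic then appears to originate from the remote host's IP, which is exactly the effect the question is after.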

How to hide url after domain in web2py?

I am building a website using web2py. For security reasons I would like to hide the url after the domain to the visitors. For example, when a person clicks a link to "domain.com/abc", it will go to that page and the address bar shows "domain.com".
I have played with the routes_in and routes_out, but it only seems to map your typed url to a destination but not hiding the url.
How can I do that? Thanks!
Well, I guess you're going to have to build the world's most remarkable single-page application :) Security through obscurity is never a good design pattern.
There is absolutely no security "reason" for hiding a URL if your system is designed in such a way that the use of the URLs is meaningless unless the access control layer defines permissions for such use (usually through an authentication and role/object based permission architecture).
Keep in mind - anyone these days can use Chrome inspector to see whatever you are trying to hide in the address bar.
For example, say you want to load domain.com/adduser.
Sure, you can make an AJAX call to that URL, and the browser address bar would never change from domain.com/, but a quick look at the source will uncover /adduser pretty quickly.
Sounds like you need to have a think about what these addresses really expose and start locking them down.
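For completeness, the routes_in/routes_out mechanism the asker experimented with only maps URLs; the mapped path is still visible in the browser. A typical routes.py pair looks like this (the app name "myapp" is hypothetical):

```python
# web2py routes.py sketch: map the public path /abc onto the internal
# controller path, and rewrite outgoing links the other way. This changes
# what URL is typed/shown, but it does not hide anything from the client.
routes_in = (
    ('/abc', '/myapp/default/abc'),
)
routes_out = (
    ('/myapp/default/abc', '/abc'),
)
```

In other words, routing is a cosmetic/namespacing tool; access control has to happen server-side regardless of what the address bar shows.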

nodejs or python proxy lib with relative url support

I am looking for a lib that lets me roughly:
connect to localhost:port, but see http://somesite.com
rewrite all static assets to point to localhost:port instead of somesite.com
support cookies / authentication
I know that http://betterinternet.co/ does this already, but they won't give me their source code for some reason.
I assume this doesn't exist as free code, so if I were to write one, are there any nuances to it? If I replace all occurrences of somesite.com in the HTML and headers, will that be enough?
So...you want an http proxy that does link rewriting? Sounds like Apache and mod_proxy_html. It's not written in node or Python, but I think it will do what you're asking.
I don't see any straightforward solution to your problem. If I've understood correctly, you want a caching HTTP proxy which serves static contents locally, with URL rewriting rules defined in Python (or nodejs). That's quite a task.
A caching HTTP proxy implementation is not trivial. So I'd use an existing implementation, such as Squid (or Apache if it does caching too).
You could then place a (relatively) simple HTTP server written in Python in front of that (e.g. based on BaseHTTPServer and urllib2) which performs the URL rewriting as you want them and forwards the requests to the proxy (or direct to internet).
The idea would be to rely on the proxy setup to perform all the processing you don't want to modify (including basic rewrite rules, authentication, caching and cache management) and limit your front-end implementation to performing only the custom rewriting you are interested in.
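The front-end rewriter described above might look like the sketch below, using the Python 3 equivalents of BaseHTTPServer and urllib2 (http.server and urllib.request). The naive byte replacement is the "nuance" the asker worried about: it misses hosts embedded in JavaScript, CSS, and protocol-relative URLs:

```python
# Sketch of a rewriting front-end: fetch pages from the upstream site and
# rewrite absolute links so they point back at the local proxy.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "http://somesite.com"   # site to mirror (from the question)
LOCAL = "http://localhost:8000"    # address clients actually connect to

def rewrite_body(html: bytes) -> bytes:
    # Naive rewrite: replace every occurrence of the upstream origin.
    # Real pages may also build URLs in JS or use //somesite.com/... forms,
    # which this misses -- mod_proxy_html handles those cases properly.
    return html.replace(UPSTREAM.encode(), LOCAL.encode())

class RewriteHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type",
                         resp.headers.get("Content-Type", "text/html"))
        self.end_headers()
        self.wfile.write(rewrite_body(body))

# To run: HTTPServer(("", 8000), RewriteHandler).serve_forever()
```

Cookies and authentication would additionally require forwarding the relevant request/response headers and rewriting any Domain attributes on Set-Cookie, which is exactly the sort of processing better left to Squid or Apache per the answer above.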

Encrypting URLs in Python

I'm planning to compile my application to an executable file using Py2Exe. However, I have sensitive URL links in my application that I would like to keep hidden, i.e. encrypted. Even if my application is decompiled, the links should remain encrypted. How would I get, say, urllib2 to open the encrypted link?
Any help would be appreciated, and or example code that could point me in the right direction.
Thanks!
I don't think urllib2 has an option like that. What you can do is store the links somewhere else (say, a simple database), encrypt them (like a password), and have your code decrypt and verify the stored value just before urllib2 opens the link.
Something like user authentication.
URL encryption will not save you in this situation. As your software runs in the client machine and somehow decrypts this URL and sends an HTTP request to it, any kid using Wireshark will be able to see your URL.
If the design of your system requires sensitive URLs, the safer way probably involves changes in the design of your HTTP server itself! You have to structure your system in a way that the URLs are not sensitive because you cannot control them. As soon as they are used by your code, they can be captured.
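To illustrate why this only delays an attacker: decode-before-open is trivially simple, and the plaintext URL exists in memory and on the wire the moment it is used. A minimal sketch, using base64 purely as a stand-in for real encryption (the URL below is a placeholder):

```python
# Store an obfuscated URL in the binary and decode it just before the
# request. This is obfuscation, not security: the decoded URL is visible
# to anyone with Wireshark (or a debugger) the instant it is used.
import base64
from urllib.request import urlopen

# base64.b64encode(b"http://example.com/secret"), computed offline
ENCODED = "aHR0cDovL2V4YW1wbGUuY29tL3NlY3JldA=="

def decode_url(encoded: str) -> str:
    return base64.b64decode(encoded).decode()

# urlopen(decode_url(ENCODED))  # the plaintext URL is what actually goes out
```

A real cipher (e.g. one from the cryptography package) changes nothing here, because the decryption key must ship inside the same executable; the server-side redesign suggested above is the only robust fix.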

How can I redirect a user to a different server with Python?

I'm trying to write a script which will monitor packets (using pypcap) and redirect certain URLs/IPs to something I choose. I know I could just edit the hosts file, but that won't work because I'm not an admin.
I'm thinking that CGI might be useful, but this one has really got me confused.
EDIT:
Sorry if it sounded malicious or like a MITM attack. The reason I need this is that I have an (old) application which grabs a page from a site, but the domain has changed recently, causing it to no longer function. I didn't write the application, so I can't just change the domain it accesses.
I basically need to accomplish what can be done by editing the hosts file without having access to it.
pypcap needs administrative rights, so this is not an option.
And you don't have access to the PC's internals, to the source code, or to the webserver.
There are a few options left:
Modify the host name in the application's files with a hex editor and disassembler.
Modify the loaded application in memory with Cheat Engine and other memory tools.
Start the application in a virtual environment which can modify OS API calls. A modified Wine might be able to do this.
Modify the request between the PC and the webserver with a (transparent) proxy or a modified router.
If the application supports the use of proxies, the easiest solution might be to set up a local Squid with a redirector.
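A Squid redirector for the last option is just a program that reads request lines on stdin and writes the rewritten URL on stdout. A minimal sketch (both domain names are placeholders):

```python
#!/usr/bin/env python3
# Classic Squid redirector sketch: Squid writes one request per line
# ("URL client/fqdn ident method ..."); we answer with the rewritten URL.
import sys

OLD_HOST = "old.example.com"   # domain the legacy app still requests
NEW_HOST = "new.example.com"   # domain the site moved to

def rewrite(url: str) -> str:
    """Swap the first occurrence of the old domain for the new one."""
    return url.replace(OLD_HOST, NEW_HOST, 1)

def main() -> None:
    for line in sys.stdin:
        url = line.split()[0]              # first field is the URL
        sys.stdout.write(rewrite(url) + "\n")
        sys.stdout.flush()                 # Squid expects unbuffered replies

# Wire it up in squid.conf:  url_rewrite_program /path/to/redirect.py
# main()
```

Point the old application at the local Squid (many apps honor the http_proxy environment variable) and its requests for the dead domain get transparently rewritten to the new one, which is effectively the hosts-file trick without admin rights.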
