I'm thinking of writing an application that, while running, keeps track of all the websites I visit and the connections I make.
Basically like a browser history, but I want to do it in a way that utilizes network concepts.
I only have a rudimentary understanding of HTTP, but would I be able to listen in on HTTP GET requests from the browser and automatically pull information whenever a request is made? If anyone can give me a suggestion or an outline of how this can be done, so I can research how to implement it, that would be very helpful! I'm thinking of implementing it in Python, and my operating system is Ubuntu.
Thank you very much.
You could do that by implementing a proxy.
In your case, basically, an agent that sits between your browser and the internet. The proxy receives the request from the client and sends it to the remote server; when the remote server replies, the proxy sends the server's response back to the client.
To retrieve the information you want, reading the HTTP RFC will be helpful.
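For illustration, here is a bare-bones sketch of such a logging proxy in Python. It handles plain HTTP only (no CONNECT/HTTPS) and assumes each request arrives in a single recv, so treat it as a starting point rather than a finished implementation:

```python
# Minimal logging HTTP proxy sketch: point the browser's HTTP proxy
# setting at 127.0.0.1:8080 and each request line is printed before
# being forwarded. Plain HTTP only; HTTPS (CONNECT) is not handled.
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)

def handle(client):
    request = client.recv(65536)  # assumes the whole request fits in one recv
    if not request:
        client.close()
        return
    # To a proxy, the request line looks like: GET http://example.com/ HTTP/1.1
    print("visited:", request.split(b"\r\n", 1)[0].decode(errors="replace"))
    # The Host header tells us where to forward the request.
    host = None
    for line in request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            break
    if host is None:
        client.close()
        return
    name, _, port = host.partition(":")
    with socket.create_connection((name, int(port or 80))) as upstream:
        upstream.sendall(request)             # forward to the real server
        while True:
            chunk = upstream.recv(65536)      # relay the response back
            if not chunk:
                break
            client.sendall(chunk)
    client.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```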
Related
I’ve got a standard client-server setup with ReScript (ReasonML) on the front end and a Python server on the back end.
The user is running a separate process on localhost:2000 that I’m connecting to from the browser (UI). I can send requests to their server and receive responses.
Now I need to issue those requests from my back-end server, but cannot do so directly. I’m assuming I need some way of doing it through the browser, which can talk to localhost on the user’s computer.
What are some conceptual ways to implement this (ideally with GraphQL)? Do I need to have a subscription or web sockets or something else?
Are there any specific libraries you can recommend for this (perhaps as examples from other programming languages)?
I think the easiest solution with GraphQL would indeed be to use Subscriptions. The most common ReScript GraphQL clients already have such a feature; at least ReasonRelay, Reason Apollo Hooks, and Reason-URQL have it.
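On the Python side, the subscription itself can be quite small. Here is a sketch using Ariadne (just one possible server library; the command source below is a stand-in for your real backend queue):

```python
# Minimal GraphQL subscription sketch with Ariadne (an assumed choice of
# server library). The browser subscribes to `commands`; the backend pushes
# each command the client should relay to the process on localhost:2000.
import asyncio
from ariadne import QueryType, SubscriptionType, make_executable_schema
from ariadne.asgi import GraphQL

type_defs = """
    type Query { ping: String! }
    type Subscription { commands: String! }
"""

query = QueryType()

@query.field("ping")
def resolve_ping(*_):
    return "pong"

subscription = SubscriptionType()

@subscription.source("commands")
async def commands_source(obj, info):
    # Stand-in: a real backend would yield commands from a queue or pub/sub.
    for command in ["status", "fetch"]:
        yield command
        await asyncio.sleep(1)

@subscription.field("commands")
def commands_resolver(command, info):
    return command

schema = make_executable_schema(type_defs, query, subscription)
app = GraphQL(schema)  # serve with an ASGI server, e.g. `uvicorn app:app`
```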
I have only been told to create a Pythonic web service. At the end of the day, I need to offer an HTTPS endpoint that receives JSON objects (from a POST request) and can process them and send JSON back to another web service.
To be able to receive POST requests from other services, what kind of information do I need?
I have seen some examples using httplib2, such as sending HTTP GET and POST requests when given a website like www.something.com. But in my case, since I do not know the IP address/URL of the data source, should I create a listener waiting for the incoming data? How can I achieve this?
I am really new to building Python web servers, and the requirement I was given is really vague. Thank you in advance for helping me break down this problem.
Take a look at the Flask framework; it can do everything you want and then some. I can especially recommend the Quickstart: A Minimal Application and the JSON Support pages.
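For instance, a minimal sketch of such a service (the /process route name is made up for the example; in production you would typically terminate HTTPS in front of it with something like nginx):

```python
# Minimal Flask sketch: one endpoint that accepts a JSON POST and
# replies with JSON. The /process route name is just an illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    payload = request.get_json()           # parse the incoming JSON body
    return jsonify({"received": payload})  # send a JSON object back

if __name__ == "__main__":
    app.run(debug=True)  # debug=True enables Flask's built-in debugger
```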
Enabling the built-in debugger will help you a great deal as well.
All services listen for incoming connections, so you are right about that :-)
Good luck!
I have built a MITM with Python and Scapy. I want to make the "victim" device be redirected to a specific page each time it tries to access a website. Any suggestions on how to do it?
*Keep in mind that all the traffic from the device already passes through my machine before being routed.
You can directly answer HTTP requests for pages other than that specific webpage with an HTTP redirection (e.g. HTTP 302). Moreover, you should only route packets going to the desired webpage and block the rest (you can do so with a firewall such as iptables).
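As an illustration, here is a minimal sketch of the answering side: a tiny server that replies to every HTTP request with a 302 to one fixed page (the target URL and ports are placeholders). On the MITM box you would steer port-80 traffic to it first, e.g. with an iptables REDIRECT rule:

```python
# Answer every HTTP request with a 302 redirect to one fixed page.
# Steer intercepted traffic here first, e.g.:
#   iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
import socket

TARGET = "http://192.0.2.10/landing"  # placeholder redirect target

RESPONSE = (
    "HTTP/1.1 302 Found\r\n"
    f"Location: {TARGET}\r\n"
    "Content-Length: 0\r\n"
    "Connection: close\r\n\r\n"
).encode()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 8080))
server.listen(5)
while True:
    client, _ = server.accept()
    client.recv(65536)  # drain the request; its contents don't matter here
    client.sendall(RESPONSE)
    client.close()
```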
I have an Arducam OV2640.
There is a lot of documentation on how the code works, with an HTML and JS script that calls the camera's IP address after ArduCAM creates a web server.
This web server listens for GET requests and sends back the image in its response.
The only problem is that this all runs over local Wi-Fi. Source:
https://github.com/ArduCAM/Arduino/tree/master/ArduCAM/examples/ESP8266/ArduCAM_ESP8266_OV2640_Capture
Has anyone done anything such as sending the image to Imgur via its API, or to a cloud-based program like PythonAnywhere?
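Something along these lines is what I have in mind (a rough sketch, assuming the camera serves a still image at a path like /capture, whatever your firmware actually exposes, and using Imgur's anonymous image-upload endpoint):

```python
# Rough sketch: grab one frame from the camera's local web server and
# upload it to Imgur. The camera address and /capture path are assumptions;
# use whatever endpoint your ESP8266 sketch actually serves.
import requests

CAMERA_URL = "http://192.168.1.50/capture"  # placeholder camera address
IMGUR_CLIENT_ID = "YOUR_CLIENT_ID"          # from registering an app with Imgur

frame = requests.get(CAMERA_URL, timeout=10)
frame.raise_for_status()

upload = requests.post(
    "https://api.imgur.com/3/image",
    headers={"Authorization": f"Client-ID {IMGUR_CLIENT_ID}"},
    files={"image": frame.content},  # raw JPEG bytes from the camera
    timeout=30,
)
upload.raise_for_status()
print(upload.json()["data"]["link"])  # public URL of the uploaded image
```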
I've been trying different methods all day, but a lot of it comes down to people saying "you need to send each byte and have a handler on the other side."
I'm asking because the local Wi-Fi method works so easily that I thought there must be a nicer way to extend this to external addresses.
Has anyone done or heard of anything like this?
Thanks
I've written a Python application that makes web requests using the urllib2 library, after which it scrapes the data. I could deploy this as a web application, which means all urllib2 requests would go through my web server. This leads to the danger of the server's IP being banned due to the high number of web requests made on behalf of many users. The other option is to create a desktop application, which I don't want to do. Is there any way I could deploy my application so that the web requests are made from the client side? One idea was to use Jython to create an applet, but I've read that Java applets can only make web requests to the server they are deployed on, and the only way to circumvent this is to create a server-side proxy, which leads us back to the problem of the server's IP getting banned.
This might sound like an impossible situation, and I'll probably end up creating a desktop application, but I thought I'd ask if anyone knew of an alternative solution.
Thanks.
You can use a signed Java applet; signed applets can use the Java security mechanism to gain access to any site.
This tutorial explains exactly what you have to do: http://www-personal.umich.edu/~lsiden/tutorials/signed-applet/signed-applet.html
The same might be possible from a Flash applet. JavaScript is also restricted to the publishing site and doesn't allow signing or security exceptions like this, AFAIK.
You can probably use AJAX requests made from client-side JavaScript.
Use server → client communication to give the commands and data necessary to make a request,
…and then use AJAX communication from the client to the third-party server.
This depends on the form of "scraping" you intend to do:
You might run into problems running an AJAX call to a third-party site. Please see Screen scraping through AJAX and javascript.
An alternative would be to do it server-side, but to cache the results so that you don't hit the third-party server unnecessarily.
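For example, a small time-based cache in front of the fetch might look like this sketch (urllib2 is the Python 2 name; urllib.request is its Python 3 counterpart):

```python
# Sketch of a time-based cache in front of the scraping request, so that
# repeated lookups for the same URL don't hit the third-party server again.
import time
import urllib.request  # the question's urllib2, under its Python 3 name

CACHE_TTL = 300  # seconds to keep a page before re-fetching
_cache = {}      # url -> (fetched_at, body)

def fetch(url):
    now = time.time()
    hit = _cache.get(url)
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]  # still fresh, serve from cache
    body = urllib.request.urlopen(url).read()
    _cache[url] = (now, body)
    return body
```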
Check out diggstripper on Google Code.