In my API, I have the following workflow:
POST a processing request to the API.
Do some processing in the background.
When the processing is done, a computer will PATCH the original request's status field via cURL.
These steps do work when I test them with a normal server, i.e., python manage.py runserver. However, when I try to automate tests within Django, I get:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
Port 80 is what the django.test.client module specifies ('SERVER_PORT': '80'), so I really don't get why that wouldn't work.
I don't know if this can be considered a full answer yet because I haven't tested it enough. But, apparently, Django's TestCase never starts a real server: its dummy client fakes the request/response cycle internally, so nothing is actually listening on port 80 (or any port). You could instead use LiveServerTestCase for something more robust, since it launches a real server for the duration of the tests. (If you want more info, check out this StackOverflow answer.)
Within a LiveServerTestCase, you will find the self.live_server_url attribute, which holds the address where the server is actually listening. The port assigned to it is apparently more or less random (Django binds to a free port so that test runs don't clash), so you can't hard-code it; you have to read it from live_server_url and pass it along to whatever ends up issuing the PATCH.
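Something along these lines (the endpoint path and the use of the requests library here are just placeholders for whatever actually fires the PATCH):

from django.test import LiveServerTestCase
import requests  # stand-in for whatever actually issues the PATCH

class ProcessingCallbackTest(LiveServerTestCase):
    def test_patch_callback(self):
        # live_server_url looks like "http://localhost:<random free port>"
        url = self.live_server_url + '/api/requests/1/'  # hypothetical endpoint
        response = requests.patch(url, data={'status': 'done'})
        self.assertEqual(response.status_code, 200)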
I have a very restrictive proxy I need to use to access certain hosts (it turns all other traffic away), and a bunch of complex libraries and applications that can only take a single HTTP proxy configuration parameter for all their HTTP requests, which are of course a mix of traffic that must go through the proxy and traffic that this proxy refuses to handle.
I've found an example script showing how to manipulate the upstream proxy host/address in upstream mode, but I couldn't find any indication in the public API that "breaking out" of upstream mode in a script is possible, i.e. having mitmproxy handle traffic directly instead of sending it upstream when certain conditions are met (mostly the request's target host).
What am I missing? Should I be trying to do this in "regular" mode?
I invoke PAC in the title because it has the DIRECT keyword, which allows the library/application to continue processing the request without going through a proxy.
Thanks!
I've found evidence that this is in fact not possible and unlikely to be implemented: https://github.com/mitmproxy/mitmproxy/issues/2042#issuecomment-280857954. Although that issue and comment are quite old, there are some recent, related and unanswered questions such as "How can I switch mitmproxy mode based on attributes of the proxied request".
So instead, I'm pivoting to tinyproxy, which does seem to provide exactly this functionality: https://github.com/tinyproxy/tinyproxy/blob/1.10.0/etc/tinyproxy.conf.in#L143
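The relevant directives look roughly like this (a sketch only; the host names are placeholders and the linked 1.10.0 sample config is the authority on the exact syntax):

# send the hosts the restrictive proxy is meant for through it...
upstream restrictive-proxy.internal:8008 ".restricted.example.com"
# ...and, with no default "upstream" line, everything else is handled
# directly, which is the equivalent of PAC's DIRECT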
A shame, because the replay/monitoring/interactive-editing features of mitmproxy would've been amazing to have.
I have a gunicorn + uvicorn + FastAPI stack.
(Basically, I am using https://hub.docker.com/r/tiangolo/uvicorn-gunicorn-fastapi docker image).
I've already implemented SSL-based authentication by providing the appropriate gunicorn configuration options: certfile, keyfile, ca_certs, cert_reqs.
And it works fine: users have to provide a client SSL certificate in order to be able to make API calls.
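For reference, the relevant part of the gunicorn configuration looks something like this (the file paths are placeholders):

# gunicorn.conf.py
import ssl

certfile = "/certs/server.crt"        # server certificate
keyfile = "/certs/server.key"         # server private key
ca_certs = "/certs/clients-ca.crt"    # CA bundle used to verify client certificates
cert_reqs = ssl.CERT_REQUIRED         # refuse connections without a valid client cert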
What I need to do now is to obtain the client certificate data and pass it further (add it to the request headers) into my application, since it contains some client credentials.
For example, I've found a way to do it using a gunicorn worker, by overriding gunicorn.workers.sync.SyncWorker: https://gist.github.com/jmvrbanac/089540b255d6b40ca555c8e7ee484c13.
But is there a way to do the same thing using UvicornWorker? I've tried to look through the UvicornWorker's source code, but didn't find a way to do it.
I went deeper into the Uvicorn source code and, as far as I understand, in order to access the client TLS certificate data I need to do some tricks with the Python asyncio library (https://docs.python.org/3/library/asyncio-eventloop.html), possibly with the Server class (https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Server), and override some of the UvicornWorker's methods.
I am still not quite sure if it is possible to achieve the desired result though.
I ended up putting nginx (OpenResty) in front of my server and added a small script to extract the client certificate and put it into a header.
Here is a part of my nginx config:
set_by_lua_block $client_cert {
    -- raw PEM of the client certificate presented during the TLS handshake
    local client_certificate = ngx.var.ssl_client_raw_cert
    if (client_certificate ~= nil) then
        -- strip newlines so the whole PEM fits into a single header value
        client_certificate = string.gsub(client_certificate, "\n", "")
        ngx.req.set_header("X-CLIENT-ID", client_certificate)
    end
    return client_certificate
}
It is also possible to extract specific fields from the client certificate (like the CN, serial number, etc.) directly inside the nginx configuration, but I decided to pass the whole certificate further.
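On the FastAPI side the forwarded certificate can then be read like any other header; a minimal sketch (the route itself is made up for illustration):

from typing import Optional
from fastapi import FastAPI, Header

app = FastAPI()

@app.get("/whoami")
async def whoami(x_client_id: Optional[str] = Header(None)):
    # nginx has already stripped the newlines, so this is the client's PEM
    # certificate flattened onto a single line (or None if nginx didn't set it)
    return {"client_certificate": x_client_id}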
My problem is solved, though not with gunicorn as I originally wanted; this is the only good solution I've found so far.
If there is someone out there who has already worked with SOLR and a Python library to index/query SOLR, would you be able to try and answer the following question?
I am using the mysolr Python library, but there are others out there (like pysolr) and I don't think the problem is related to the library itself.
I have a default multicore SOLR setup, so normally no authentication is required. I don't need it to access the admin page at http://localhost:8080/solr/testcore/admin/ either.
from mysolr import Solr

# connect to the 'testcore' core and run a match-all query
solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print("response")
print(response)
This code used to work, but now I get a 401 reply from SOLR... just like that: no changes have been made to the Python virtual env containing mysolr or to the SOLR setup. Still, something must have changed somewhere, but I'm out of clues.
What could be the causes of a SOLR 401 response?
Additional info: this script, and more advanced scripts, do work on another PC, just not on the one I am working on. Also, adding "/select?q=*:*" behind the URL in the browser does return the correct results. So SOLR is set up correctly; it probably has something to do with my computer itself. Could Windows settings (of any kind) have an impact on how SOLR responds to requests from Python? The Python env itself has been reinstalled several times, to no avail.
Thanks in advance!
The problem was: proxy.
If this exact situation ever occurs to someone else and you are behind a proxy, check whether your HTTP and HTTPS proxy environment variables are set. If they are, they might cause the Python session to use the proxy when it shouldn't (connecting to localhost via the proxy).
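A quick way to see whether the session is being routed through a proxy, and to exclude localhost from it (a sketch; mysolr talks to SOLR via the requests library, which as far as I can tell honours these variables):

import os

# see which proxy variables are set in this environment
for var in ('HTTP_PROXY', 'HTTPS_PROXY', 'http_proxy', 'https_proxy', 'NO_PROXY', 'no_proxy'):
    print(var, '=', os.environ.get(var))

# if a proxy is configured system-wide, exclude localhost from it
# before creating the Solr client
os.environ['NO_PROXY'] = 'localhost,127.0.0.1'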
It didn't cause any trouble for months, but out of the blue it did, so whether you encounter this or not may depend on how your IT department set up your proxy or made some other changes... somewhere.
Thank you, everyone!
In my Pylons config file, I have:
[server:main1]
port = 9090
...config here...
[server:main2]
port = 9091
...config here...
Which are run using:
paster serve --server-name=main1 ...(more stuff)...
paster serve --server-name=main2 ...(more stuff)...
Now, using HAProxy and Stunnel, I have all HTTP requests going to main1 and all HTTPS requests going to main2. I would like some of my controllers to react a little differently based on whether they are being requested over HTTP or HTTPS, but pylons.request.scheme always thinks it is under HTTP even when it is not.
Seeing as main2 is always the one handling all HTTPS requests, is there a way for the controller to determine what server name it was run under, or what its id is?
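For what it's worth, pylons.request.scheme just mirrors wsgi.url_scheme from the WSGI environment, and since Stunnel terminates the TLS before Paste ever sees the request, it will always report "http". A hypothetical sketch of the kind of check that would work if the frontend could add a forwarded header (the X-Forwarded-Proto name and the helper are assumptions, not something this setup already has):

from pylons import request

def request_is_https():
    # HAProxy/Stunnel would have to be configured to set this header only on
    # the https (main2) path for this to be trustworthy
    return request.headers.get('X-Forwarded-Proto', 'http') == 'https'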
I got around this by just changing the workflow so it doesn't have to react differently based on which protocol it's under. It doesn't look like there's a way to pass a unique, arbitrary identifier to each separate process that it can then read.
In my Python script, I am fetching pages, but I already know the IP of the server.
So I could save it the hassle of doing a DNS lookup if I could somehow pass both the IP and the hostname in the request.
So, if I call
http://111.111.111.111/
and then pass the hostname in the Host header, I should be OK. However, the issue I see is on the server side: if the user looks at the incoming request (i.e. REQUEST_URI), they will see that I went for the IP.
Anyone have any ideas?
First, the main idea is suspicious. Well, you can "know" the IP of the server, but this knowledge is temporary and how long it stays correct is controlled by DNS TTLs. For a stable configuration, the server admin can provide a DNS record with a long TTL (e.g. a few days), so the DNS request will always be fulfilled by the nearest caching resolver or nscd. For a changing configuration, the TTL can be reduced to a few seconds or even to 0 (meaning no caching), which can be useful for some kinds of load balancers. You are trying to build your own resolver cache that ignores TTLs, and this can lead to requests going to non-functioning or wrong servers with incorrect contents. So, I suggest not doing this.
If you are absolutely sure you must do this and you can't use external tools such as a custom resolver or even /etc/hosts, try installing a custom "opener" (see the urllib2.build_opener() function in the documentation) which overrides the DNS lookup. However, I have never done this myself; the knowledge comes only from reading the documentation just now.
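A sketch of what such an opener could look like, completely untested (the host-to-IP mapping and URL are made up, and this targets Python 2's urllib2/httplib, as in the question):

import httplib
import urllib2

# hypothetical mapping of hostnames to the IPs you already know
HOST_IP_MAP = {'www.example.com': '111.111.111.111'}

class PinnedHTTPConnection(httplib.HTTPConnection):
    """Open the TCP connection to a pre-resolved IP instead of resolving the name."""
    def connect(self):
        # urllib2 has already put the real hostname into the Host header and
        # the request line; only the socket connection is pointed at the IP here
        real_host = self.host
        self.host = HOST_IP_MAP.get(real_host, real_host)
        try:
            httplib.HTTPConnection.connect(self)
        finally:
            self.host = real_host

class PinnedHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        return self.do_open(PinnedHTTPConnection, req)

opener = urllib2.build_opener(PinnedHTTPHandler)
response = opener.open('http://www.example.com/')
print(response.read()[:300])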
You can add the IP address mapping to the hosts file.
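For example, a line like this in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows), with the IP and hostname replaced by your own:

111.111.111.111    www.example.com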