I have never used WebDAV before, but recently my client asked me to upload some files to his server. This process should be automated, so I decided to use Python to do it.
My client has given me the info about the server in the following format:
Server location: \\123.456.789.012\Something
Username: user
Password: pass
Domain: somedomain
I am trying to use the easywebdav library to do the job; however, I get the following result:
Code:
import easywebdav

webdav = easywebdav.connect(
    host='123.456.789.012/Something',
    username='user',
    port=80,
    protocol='http',
    password='pass'
)
print(webdav.ls())
And the exception I get is:
Operation : PROPFIND .
Expected code : 207 Multi-Status, 301 Moved Permanently
Actual code : 401 Unauthorized
I might not be understanding everything correctly, since I have already tried multiple libraries and they all fail the same way, but I HAVE successfully connected to the server with the same credentials via the WebDAV client built into Finder on Mac OS X, so the server itself works correctly.
I am sorry for the format of the question and the probable lack of details; I am currently desperate after several hours of trying to fix this problem. Feel free to ask anything in the comments!
Found the solution to my problem. After using Wireshark to find out how OS X (WebDavFS) connects, I found out that the server is Microsoft IIS 7.5 and requires NTLM auth instead of Basic auth. I haven't found any Python libraries that support NTLM auth, so, luckily, since the app is relatively small, I switched to C++ using the neon library.
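For anyone landing here later: the requests-ntlm library adds NTLM support on top of requests, so a raw WebDAV PROPFIND can be issued from pure Python. A minimal sketch, assuming the server details from the question (address, domain, and credentials are placeholders):

import requests
from requests_ntlm import HttpNtlmAuth

response = requests.request(
    'PROPFIND',
    'http://123.456.789.012/Something',
    auth=HttpNtlmAuth('somedomain\\user', 'pass'),
    headers={'Depth': '1'},  # list the immediate children of the collection
)
print(response.status_code)  # expect 207 Multi-Status on success
print(response.text)         # raw multistatus XML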
I'm currently trying to use the docker-py SDK to connect to my remote Ubuntu server so I can manage my Docker containers via Python.
I am getting a few issues when attempting to do this.
When doing the following:
docker.APIClient(base_url="ssh://user@ip")
I am getting the error:
paramiko.ssh_exception.PasswordRequiredException: private key file is encrypted
I can resolve this issue by adding the kwarg use_ssh_client, but then I am forced to input a password, which limits the potential for automation.
docker.APIClient(base_url="ssh://user:@ip", use_ssh_client=True)
Using the above code, I have also tried entering my SSH key password into the base_url, like so:
docker.APIClient(base_url="ssh://user:pass@ip", use_ssh_client=True)
However, this then greets me with the following error:
docker.errors.DockerException: Invalid bind address format: ssh://root:pass@ip
I have run out of ideas and am confused as to how I am supposed to get around this.
Many thanks in advance...
It's possible to make a connection as Mr. Piere answered here, even though that question is about docker.client.DockerClient, which uses docker.api.client.APIClient under the hood.
You are trying to establish a connection using password authentication; that's why you are asked for a password.
I guess you need to configure key-based SSH login, as described in Docker's docs.
Steps to fix:
configure SSH login on the remote server and fill in ~/.ssh/config on your local machine (an example entry is sketched after these steps)
connect from the local terminal using the ssh command, to ensure a connection is established without a password prompt: ssh user@ip
connect using the library: client = docker.APIClient(base_url="ssh://user@ip", use_ssh_client=True)
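For reference, a minimal ~/.ssh/config entry might look like the following; the host, user, and key path are placeholders, and the key must be usable without an interactive prompt (e.g. unencrypted, or loaded into ssh-agent):

Host ip
    User user
    IdentityFile ~/.ssh/id_rsa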
I had a similar problem. Your problem is that your key is encrypted. The Docker client doesn't have a passphrase option by default. I wrote some code based on this post. It works for me :)
import os
from docker import APIClient
from docker.transport import SSHHTTPAdapter

# Subclass the SSH transport adapter so we can hand paramiko an explicit
# key file and passphrase, which docker-py does not expose by default.
class MySSHHTTPAdapter(SSHHTTPAdapter):
    def _connect(self):
        if self.ssh_client:
            self.ssh_params["key_filename"] = os.environ.get("SSH_KEY_FILENAME")
            self.ssh_params["passphrase"] = os.environ.get("SSH_PASSPHRASE")
            self.ssh_client.connect(**self.ssh_params)

client = APIClient('ssh://ip:22', use_ssh_client=True, version='1.41')
ssh_adapter = MySSHHTTPAdapter('ssh://user@ip:22')
client.mount('http+docker://ssh', ssh_adapter)
print(client.version())
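Note that SSH_KEY_FILENAME and SSH_PASSPHRASE are not names docker-py reads natively; they are ordinary environment variables this subclass chooses to look up, so you must export them yourself before running the script.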
Since the announcement about XMLA endpoints, I've been trying to figure out how to connect to a URL of the form powerbi://api.powerbi.com/v1.0/myorg/[workspace name] as an SSAS OLAP cube via Python, but I haven't gotten anything to work.
I have a workspace in a premium capacity and I am able to connect to it using DAX Studio as well as SSMS as explained here, but I haven't figured out how to do it with Python. I've tried installing olap.xmla, but I get the following error when I try to use the Power BI URL as the location, with either powerbi or https as the prefix.
import olap.xmla.xmla as xmla
p = xmla.XMLAProvider()
c = p.connect(location="powerbi://api.powerbi.com/v1.0/myorg/[My Workspace]")
[...]
TransportError: Server returned HTTP status 404 (no content available)
I'm sure there are authentication issues involved, but I'm a bit out of my depth here. Do I need to set up an "app" in Active Directory and use the API somehow? How is authentication handled for this kind of connection?
If anyone knows of any blog posts or other resources that demonstrate how to connect to a Power BI XMLA endpoint specifically using Python, that would be amazing. My searching has failed me, but surely I can't be the only one who is trying to do this.
After @Gigga pointed out the connector issue, I went looking for other Python modules that work with MSOLAP to connect, and found one that I got working!
The module is adodbapi (note the pywin32 prerequisite).
Connecting is as simple as this:
import adodbapi
# Connection string
conn = adodbapi.connect("Provider=MSOLAP.8; \
Data Source='powerbi://api.powerbi.com/v1.0/myorg/My Workspace Name'; \
Initial Catalog='My Data Model'")
# Example query
print('The tables in your database are:')
for name in conn.get_table_names():
    print(name)
It authenticated using my Windows credentials by popping up a standard Microsoft sign-in window.
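Since adodbapi follows the Python DB-API, a DAX query can also be pushed through a cursor on the same connection. A small sketch; the table name is a placeholder:

cursor = conn.cursor()
cursor.execute("EVALUATE VALUES('My Table')")  # any DAX table expression should work
for row in cursor.fetchall():
    print(row)
cursor.close()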
I'm not familiar with olap.xmla or with using Python to connect to OLAP cubes, but I think the problem is with the driver (or connector?) provided in olap.xmla.
On the page announcing XMLA endpoints, it says that the connection only works with SSMS 18.0 RC1 or later, which is quite new. Same thing with DAX Studio: the version where XMLA connection is supported (version 2.8.2, Feb 3 2019) is quite fresh.
The latest version of olap.xmla seems to be from August 2013, so it's possible that there's some Microsoft magic behind the Power BI XMLA connection, and that's why it doesn't work with older connectors.
They now have a REST endpoint via which you can execute DAX queries. This could be easier than trying to invoke the XMLA endpoint directly.
https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/execute-queries
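For illustration, a minimal sketch of calling that endpoint with the requests library; the dataset ID, the Azure AD access token (obtained separately, e.g. with the msal library), and the DAX query are all placeholders:

import requests

dataset_id = "your-dataset-id"
access_token = "your-azure-ad-access-token"

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"queries": [{"query": "EVALUATE VALUES('My Table')"}]},
)
response.raise_for_status()
print(response.json())  # query results come back as JSON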
I'm starting to feel a bit stupid. Has anyone been able to successfully create an Application Gateway using the Python SDK for Azure?
The documentation seems OK, but I'm struggling to find the right value to pass as the 'parameters' argument of azure.mgmt.network.operations.ApplicationGatewaysOperations.create_or_update(). I found a complete working example for load_balancer, but can't find anything for Application Gateway. Getting 'string indices must be integers, not str' doesn't help at all. Any help will be appreciated, thanks!
Update: solved. A piece of advice for everyone doing this: look carefully at the type of data required for the Application Gateway params.
I know there is no Python sample for Application Gateway currently; I apologize for that...
Right now I suggest that you:
Create the network client using this tutorial or this one
Take a look at this ARM template for Application Gateway. The Python parameters will be very close to this JSON (a rough sketch follows this list). At worst, you can deploy an ARM template using the Python SDK too.
Take a look at the ReadTheDocs page of the create operation; it will give you an idea of what is expected as parameters.
Open an issue on the GitHub tracker, so you can follow along when I do a sample (or at least a unit test you can mimic).
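To give a feel for the shape of the call, here is a deliberately incomplete, hypothetical sketch; every name is a placeholder, and the dict mirrors only the top of the ARM template (a real payload also needs gateway IP configurations, frontend/backend settings, listeners, and routing rules):

from azure.mgmt.network import NetworkManagementClient

network_client = NetworkManagementClient(credentials, subscription_id)
poller = network_client.application_gateways.create_or_update(
    'my_resource_group',
    'my_app_gateway',
    {
        'location': 'westus',
        'sku': {'name': 'Standard_Small', 'tier': 'Standard', 'capacity': 1},
        # ... gateway_ip_configurations, frontend_ip_configurations,
        # frontend_ports, backend_address_pools, http_listeners,
        # request_routing_rules ...
    },
)
application_gateway = poller.result()  # wait for the long-running operation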
Edit, after a question in the comments:
To get the IP of a VM once you have a VM object:
# Gives you the ID of this NIC
nic_id = vm.network_profile.network_interfaces[0].id
# Parse this ID to get the nic name
nic_name = nic_id.split('/')[-1]
# Get the NIC instance
nic = network_client.network_interfaces.get('RG', nic_name)
# Get the actual IP
nic.ip_configurations[0].private_ip_address
Edit:
I finally wrote the sample:
https://github.com/Azure-Samples/network-python-manage-application-gateway
(I work at MS and I'm responsible for the Azure SDK for Python.)
If there is someone out there who has already worked with Solr and a Python library to index/query Solr, would you be able to try to answer the following question?
I am using the mysolr Python library, but there are others out there (like pysolr), and I don't think the problem is related to the library itself.
I have a default multicore Solr setup, so normally no authentication is required. I don't need authentication to access the admin page at http://localhost:8080/solr/testcore/admin/ either.
from mysolr import Solr
solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print("response")
print(response)
This code used to work, but now I get a 401 reply from Solr... just like that; no changes have been made to the Python virtual env containing mysolr or to the Solr setup. Still... something must have changed somewhere, but I'm out of clues.
What could be the causes of a Solr 401 response?
Additional info: this script and a more advanced script do work on another PC, just not on the one I am working on. Also, adding "/select?q=*:*" behind the URL in the browser does return the correct results. So Solr is set up correctly; it probably has something to do with my computer itself. Could Windows settings (of any kind) have an impact on how Solr responds to requests from Python? The Python env itself has been reinstalled several times, to no avail.
Thanks in advance!
The problem was: proxy.
If this exact situation ever occurs to you and you are behind a proxy, check whether your HTTP and HTTPS proxy environment variables are set. If they are... this might cause the Python session to try using the proxy when it shouldn't (connecting to localhost via the proxy).
It didn't cause any trouble for months, but out of the blue it did, so whether you encounter this or not might depend on how your IT department set up your proxy or made some other changes... somewhere.
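A quick way to check from Python itself (a small sketch; NO_PROXY exempting localhost is honored by requests and most HTTP clients):

import os

# Print any proxy-related environment variables that are set
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    print(var, "=", os.environ.get(var))

# Exempt local traffic from any proxy for this process
os.environ["NO_PROXY"] = "localhost,127.0.0.1"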
thank you everyone!
I'm having issues with Sentry running on my internal server. I walked through the docs to get it installed on a CentOS machine. It seems to run, but none of the asynchronous JavaScript is working.
Can someone help me find my mistake?
This is what Chrome keeps complaining about:
XMLHttpRequest cannot load http://test.example.com/api/main-testproject/testproject/poll/. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://test.example.com:9000' is therefore not allowed access.
I'm new to Django, but I am comfortable with Python web services. I figured there was surely a configuration step I missed. I found something in the docs referring to a setting I should use: SENTRY_ALLOW_ORIGIN.
# You MUST configure the absolute URI root for Sentry:
SENTRY_URL_PREFIX = 'http://test.example.com' # No trailing slash!
SENTRY_ALLOW_ORIGIN = "http://test.example.com"
I even tried various paths to my server, using the fully qualified domain name as well as the IP. None of this seemed to help. As you can see from the Chrome error, I was actively connected to the domain name that was throwing the error.
I found my issue. The XMLHttpRequest error shows that port 9000 is being used. This needs to be specified in SENTRY_URL_PREFIX.
SENTRY_URL_PREFIX = 'http://test.example.com:9000'
Edit:
I even found this answer listed in the FAQ:
https://docs.getsentry.com/on-premise/server/faq/