I am able to use python3-vici in the global namespace. Suppose I want to route it through a particular namespace, say /var/run/x/x/vpn. How do I do that?
I have charon.ctl, charon.pid, ipsec.conf, ipsec.d, starter.charon.pid and strongswan.conf
files in the vpn folder, but not charon.vici. I tried installing vici in the namespace, but I don't see a charon.vici file there.
Anything I'm missing here?
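For reference, vici.Session accepts a pre-connected socket, so once a charon.vici socket exists in that directory I'd expect something like this to work (just a sketch; the socket path below is where I'd expect the file to appear, but it doesn't exist on my system yet):
import socket
import vici

# Connect to the vici socket in the namespace's run directory instead of
# the default /var/run/charon.vici.
sock = socket.socket(socket.AF_UNIX)
sock.connect("/var/run/x/x/vpn/charon.vici")
session = vici.Session(sock)
print(session.version())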
Another thing:
I'm not able to map the certificates I have loaded using vici.Session().load_cert() to a particular connection. Using a 'cert' attribute in a connection dictionary inside 'local' throws an error like:
vici.exception.CommandException: Command failed: b'unknown option: certs, config discarded'
However, if you load the connection using swanctl.conf and then retrieve information using vici, you can see the cert field being populated when doing a list_conns().
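For context, this is roughly how I'm loading the certificate (the field names follow the vici load-cert message as I understand it, so treat them as my assumption):
import vici

session = vici.Session()
# Load an X.509 end-entity certificate from a PEM/DER blob.
with open("cert.pem", "rb") as f:
    session.load_cert({"type": "X509", "flag": "NONE", "data": f.read()})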
In my Azure Function, I have specified an Environment Variable/App Setting for a database connection string. I can use the Environment Variable when I run the Function locally on my Azure Data Science Virtual Machine using VS Code and Python.
However, when I deploy the Function to Azure, I get a KeyError, meaning that it cannot find the Environment Variable for the connection string. See the error:
Exception while executing function: Functions.matchmodel Result: Failure
Exception: KeyError: 'CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING'
Stack:
  File "/azure-functions-host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py", line 315, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
  File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/azure-functions-host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py", line 434, in __run_sync_func
    return func(**params)
  File "/home/site/wwwroot/matchmodel/__init__.py", line 116, in main
  File "/home/site/wwwroot/matchmodel/production/dataload.py", line 28, in query_dev_database
    setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
  File "/usr/local/lib/python3.7/os.py", line 679, in __getitem__
    raise KeyError(key) from None
I have tried the following solutions:
Added "CONNECTIONSTRINGS" to specify the Environment Variable in the Python script (which made it work locally)
setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
Used logging.info(os.environ) to output my Environment Variables in the console. My connection string is listed.
Added the connection string as Application Setting in the Azure Function portal.
Added the connection string under Connection strings in the Azure Function portal.
Does anyone have any other solutions that I can try?
Actually, you almost have the right connection string; you are just using the wrong prefix. For more detailed information, you can refer to this doc: Configure connection strings.
Which prefix to use depends on which type you choose. For example, in my test I used a custom type, so I have to use os.environ['CUSTOMCONNSTR_testconnectionstring'] to get the value.
From the doc you can see there are the following types and prefixes:
SQL Server: SQLCONNSTR_
MySQL: MYSQLCONNSTR_
SQL Database: SQLAZURECONNSTR_
Custom: CUSTOMCONNSTR_
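For example, assuming the connection string from the question was saved in the portal with the SQL Server type, the lookup would be:
import os

# "SQL Server" type connection strings are exposed with the SQLCONNSTR_ prefix.
setting = os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"]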
I was struggling with this and found out this solution:
import os
setting = os.getenv("mysetting")
In Functions, application settings, such as service connection strings, are exposed as environment variables during execution. You can access these settings by declaring import os and then using:
setting = os.environ["mysetting"]
As Alex said, try removing CONNECTIONSTRINGS: from the name of the environment variable. In the Azure portal, just add mysetting as the key name in Application settings.
I figured out the issue with help from George and selected George's answer as the correct answer.
I changed the code to os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"], but I had also been deploying the package from VS Code to Azure via the following command using the Azure Command Line Interface (Azure CLI). This command overwrites the Azure App Settings with the contents of local.settings.json:
func azure functionapp publish <MY_FUNCTION_NAME> --publish-local-settings -i --overwrite-settings -y
I think that was wiping out the database type I had specified in the Azure App Settings (SQL Server), so when I had previously tried os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"] it didn't work: I was overwriting my Azure settings with my local.settings.json, which did not specify a database type.
I finally got it to work by deploying using the VS Code Extension called Azure Functions (Azure Functions: Deploy to Function App). This retains the settings that I had created in Azure App Services.
I'm building a web application with the Flask framework in Python. On the server I would like to preserve some state. I think the following code example makes my goal clear (and was also my initial idea):
name = ""
#app.route('/<input_name>')
def home(input_name):
global name
name = input_name
return "name set"
#app.route('/getname')
def getname():
global name
return name
However, when I deployed my website, the response to a /getname request behaved inconsistently, because there are multiple thread/process instances of the code (I could be wrong). I have some plausible solutions to overcome this problem, but I wonder if there is a cleaner one:
Solution 1: read and write the name from a database (a database seems like overkill if I only want to store one variable)
Solution 2: store the value of name in a file and set up a locking mechanism so that only one process/thread can write to the file at any given moment
Goal: when client 'A' requests www.website.com/sven and client 'B' then requests www.website.com/getname, I want the response for client B to be 'sven'.
Any suggestions?
Your example should not be done with global state; it will not work for the reason you mentioned: requests might land in different processes that hold different global values.
You can store shared state in a key-value cache such as memcached or Redis, or in a file-system based cache; check the Flask-Caching package and in particular the docs at https://flask-caching.readthedocs.io/en/latest/#built-in-cache-backends
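As a minimal sketch, assuming a Redis server on localhost and Flask-Caching installed (pip install Flask-Caching redis), the example from the question could look like this:
from flask import Flask
from flask_caching import Cache

app = Flask(__name__)
# Backend name per the Flask-Caching docs linked above; any shared backend
# (Redis, memcached, filesystem) works, unlike a per-process global.
cache = Cache(app, config={
    "CACHE_TYPE": "RedisCache",
    "CACHE_REDIS_URL": "redis://localhost:6379/0",
})

@app.route('/<input_name>')
def home(input_name):
    cache.set("name", input_name)  # shared across workers
    return "name set"

@app.route('/getname')
def getname():
    return cache.get("name") or ""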
I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has different connect methods for each AWS service. For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument: an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes one argument: an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest: with my keys defined in a .boto file in my home directory, and running the Python interpreter from the command line, I can make a direct import and call to SupportConnection() (no arguments) and it works. It is clearly picking up my keys from the .boto file and consuming them, but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (the bulk of the material that exists via Google) who can help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support or boto.ec2 or boto.s3 or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
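For example, the same pattern for EC2 would be:
import boto.ec2

# Same incantation, different service module; credentials are resolved from
# the boto config file or IAM role as above.
conn = boto.ec2.connect_to_region('us-east-1')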
In Python, ssl.wrap_socket reads certificates from files; it requires the certificate as a file path.
How can I start an SSL connection using a certificate read from string variables?
My host environment does not allow writing to files, and the tempfile module is not functional.
I'm using Python 2.7.
I store the certificate inside MySQL and read it as a string.
Edit: I gave up. This basically requires implementing SSL in pure Python code, which is beyond my current knowledge.
Looking at the source, ssl.wrap_socket calls directly into the native code (OpenSSL) function SSL_CTX_use_certificate_chain_file, which requires a path to a file, so what you are trying to do is not possible.
For reference:
In ssl/__init__.py we see:
def wrap_socket(sock, keyfile=None, certfile=None,
                server_side=False, cert_reqs=CERT_NONE,
                ssl_version=PROTOCOL_SSLv23, ca_certs=None,
                do_handshake_on_connect=True):
    return SSLSocket(sock, keyfile=keyfile, certfile=certfile,
                     server_side=server_side, cert_reqs=cert_reqs,
                     ssl_version=ssl_version, ca_certs=ca_certs,
                     do_handshake_on_connect=do_handshake_on_connect)
Points us to the SSLSocket constructor (which is in the same file) and we see the following happen:
self._sslobj = _ssl2.sslwrap(self._sock, server_side,
                             keyfile, certfile,
                             cert_reqs, ssl_version, ca_certs)
_ssl2 is implemented in C (_ssl2.c)
Looking at the sslwrap function, we see it's creating a new object:
return (PyObject *) newPySSLObject(Sock, key_file, cert_file,
                                   server_side, verification_mode,
                                   protocol, cacerts_file);
Looking at the constructor for that object, we eventually see:
ret = SSL_CTX_use_certificate_chain_file(self->ctx,
                                         cert_file);
That function is defined in openssl, so now we need to switch to that codebase.
In ssl/ssl_rsa.c, inside that function, we eventually find:
BIO_read_filename(in, file)
If you dig far enough into the BIO code (part of openssl) you'll eventually come to a normal fopen():
fp=fopen(ptr,p);
So, as it's currently written, the certificate must be in a file openable by C's fopen().
Also, since Python's ssl library jumps into C so quickly, I don't see an immediately obvious place to monkeypatch in a workaround either.
A quick look through the ssl module source confirms that what you want is not supported by the API:
http://code.google.com/codesearch#2T6lfGELm_A/trunk/Modules/_ssl.c&q=sslwrap&type=cs
Which is not to say it is impossible; you could create a named pipe, feed one end from Python, and give the filename to the ssl module.
For simpler, less secure use, dump the cert from memory to a mkstemp()'d file (see the sketch at the end of this answer).
You could even mount a FUSE volume and intercept the file callback.
Finally, you could use ctypes to hack at the ssl context at runtime and load the cert/key from a buffer, following the C recipe in Read certificate files from memory instead of a file using OpenSSL. Things like these have been done before, but they're not for the faint of heart.
It looks like you are trying to get out of, e.g., App Engine "jail"; perhaps it is just not possible?
If you are not picky about the ssl implementation, you can use M2Crypto, TLS Lite, pyOpenSSL, or something else. TLS Lite is pure Python, so you can definitely hack it to use in-memory certificates/keys.
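Here is a sketch of the mkstemp() option mentioned above (it assumes tempfile actually works, which you said it does not in your environment; cert_pem holds the PEM string read from MySQL, and sock is an already-connected socket):
import os
import ssl
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.write(fd, cert_pem)  # the private key is assumed to be in the same PEM
    os.close(fd)
    wrapped = ssl.wrap_socket(sock, certfile=path)
finally:
    os.remove(path)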
From Python 3.4, you can use SSLContext#load_verify_locations:
context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)  # a protocol argument is required before Python 3.6
context.load_verify_locations(cadata=cert)  # where cert is a str
From https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_verify_locations
Python 3.4 added support for a cadata parameter to load certificates from a string. From https://docs.python.org/3/library/ssl.html#ssl.SSLContext.load_verify_locations:
"The cadata object, if present, is either an ASCII string of one or more PEM-encoded certificates or a bytes-like object of DER-encoded certificates. Like with capath extra lines around PEM-encoded certificates are ignored but at least one certificate must be present".
You can treat strings like files with StringIO.
I am trying to use paramiko to download a file via SFTP. I create the SFTP object like this:
transport = paramiko.Transport((sftp_server, sftp_port))
transport.connect(username=sftp_login, password=sftp_password)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("file_name", '.', None)
and I get the exception:
Exception python : Folder not found: \\$IP_ADDRESS\folder_1/folder_2\file_name.
I'm running paramiko to connect to a client's chrooted SFTP server. The file, 'file_name', is located at the root of my client's chroot.
I don't get why this error shows what appears to be the full path (outside the chroot) on my client's server.
I don't know why my dummy file won't download :O
I will provide any necessary information.
The following code worked for me in Ubuntu 11.10:
sftp.get("file_name", "file_name")
I just made a couple of changes that shouldn't affect your problem:
localpath: used the full path to the local file name instead of just '.' (directories aren't allowed)
callback: removed it, since None is already the default value and it's not really needed
Since I'm not getting the same error you're getting regarding the remotepath parameter, I guess you might be using a different SFTP server that behaves differently.
My advice would be to:
Verify with another client, for example the sftp command-line tool, that the file you're looking for really is where you are trying to get it from.
Use sftp.chdir just to make sure that the default directory being used is the one you expect.
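Putting it together, a sketch of the whole transfer with those changes applied (the connection variables are the placeholders from the question, and the leading chdir just makes the working directory explicit):
import paramiko

transport = paramiko.Transport((sftp_server, sftp_port))
transport.connect(username=sftp_login, password=sftp_password)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.chdir("/")                     # be explicit about the remote directory
sftp.get("file_name", "file_name")  # localpath is a file name, not '.'
transport.close()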
I hope this helps.