I'm using the PyRFC library and so far I've managed to connect to SAP.
conn = Connection(ashost='xxxxxxxxx', sysnr='02', client='100', user='xxxxx', passwd='xxxxxxxx.')
result = conn.call('STFC_CONNECTION', REQUTEXT=u'Hello SAP!')
print(result)
I ran the connection test and everything was OK. But now I'm trying to run the queues created in SAP.
I performed some tests, trying to simulate F8, but without success.
Is there any way to perform this execution via Python?
You cannot query SMQ1/SMQ2 results directly; for an extended explanation, read my answer here.
To get the SMQ results in Python you need to find a function module with equivalent functionality, and good news, pa-pam!, I did it for you: it is the TRFC_QIN_GET_CURRENT_QUEUES function module.
How to call it you can find in any pyrfc tutorial. A probable sample:
from pyrfc import Connection

def func():
    conn = Connection(user='', passwd='', ashost='', sysnr='', client='')
    queue_name = '*'
    client = '100'
    # RFC parameter names are uppercase in the function module's interface
    result = conn.call('TRFC_QIN_GET_CURRENT_QUEUES',
                       QNAME=queue_name,
                       CLIENT=client)
    print(result['QVIEW'])
All you need to supply here is the queue name (the QNAME parameter; '*' matches all queues) and the client value (whatever client you work in).
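If the call succeeds, QVIEW comes back as a table, which pyrfc represents as a list of plain dicts. Here is a small sketch of post-processing it; the field names QNAME and QCOUNT are assumptions about the returned structure, so inspect an actual result['QVIEW'] row before relying on them:

```python
# Summarize the QVIEW table; 'QNAME' and 'QCOUNT' are assumed field names --
# check a real result['QVIEW'] row to confirm them.
def summarize_queues(qview):
    """Turn QVIEW rows into a {queue_name: entry_count} mapping."""
    return {row['QNAME']: int(row['QCOUNT']) for row in qview}

sample = [{'QNAME': 'QUEUE_A', 'QCOUNT': '3'},
          {'QNAME': 'QUEUE_B', 'QCOUNT': '0'}]
print(summarize_queues(sample))  # -> {'QUEUE_A': 3, 'QUEUE_B': 0}
```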
I was deploying my FastAPI application to GitHub using CI/CD, and the job failed with an error saying that a default connection was not defined. (In this app I use FastAPI for the API itself, mongoengine for most of the db work, and pytest for the tests; I only use pymongo to delete the test db content after the tests, because that is easier to do without the errors about the default connection not being set up.)
Every single route has the same dependency on the connect function, so that when I am testing I can replace it with the testing db. (If I were not going to test, I could just call the connect() function inside main.py and it would work fine; I tested it, gentlemen.)
@router.get('/route')  # or post/put/..., depending on the route
def route_function(db=Depends(connect)):
    # do something
    return {'something': ...}
The connect function consists of :
def connect():
    try:
        mongoengine.connect(host=connection_url, db=db_name,
                            alias=db_name, uuidRepresentation='standard')
    except Exception as e:
        print('Failed to connect to production db')
        print(e)
    finally:
        disconnect(alias=db_alias)
And in every test I do a
app.dependency_overrides[connect] = test_connect
(do tests)
app.dependency_overrides.clear()
And the test_connect function is :
def test_connect():
    try:
        disconnect(db_alias)
        mongoengine.connect(host=test_connection_url,
                            db=test_db_name, alias=test_db_alias,
                            uuidRepresentation='standard')
    except:
        print('Failed to connect to testing db')
    finally:
        disconnect(alias=test_db_alias)
And now that you have read this far, PLEASE read until the end so that I can explain the whole problem.
When I run the tests, the connect dependency is not replaced by test_connect; even when I use tricks like lambda: test_connect to try to make it work, it doesn't...
And to cover all my doubts: in the meta dictionary of each model I now only use the "db" keyword to say in which database the model is going to be stored, because when I used "db_alias" it gave an error saying that the alias I used in production was not defined (the testing db alias was different, of course).
And when I used the same alias, it gave an error saying that I did not set up a default connection (and even when I used the "default" alias in the connect function, or even in the model, it still gave this missing-default-connection error, or simply still did not replace the prod db with the test one).
So I was tired of these problems, and after a lot of research I found out that using only "db" could solve them.
I had an idea: if I could transform a pymongo connection into a mongoengine connection, I could solve this problem, but I don't know how to do that at all.
So after all this (first, THANK YOU SO MUCH for reading this far; second ->) ... any ideas, please?
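For what it's worth, the override mechanism itself can be simulated without FastAPI: app.dependency_overrides is a plain dict keyed by the exact callable passed to Depends, and FastAPI calls whatever callable it finds there. This sketch (plain Python, only mimicking the lookup logic, not FastAPI's real internals) also shows why the lambda: test_connect trick fails:

```python
def connect():
    return 'prod-db'

def test_connect():
    return 'test-db'

# app.dependency_overrides is just a dict keyed by the original callable
overrides = {connect: test_connect}

# FastAPI resolves a dependency roughly like this:
dep = overrides.get(connect, connect)
print(dep())  # -> test-db

# The lambda trick registers a callable whose *return value* is the
# test_connect function object itself, not a db connection:
overrides[connect] = lambda: test_connect
assert overrides[connect]() is test_connect  # the route would receive a function
```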
I need a script to connect to CassDb nodes with a password using Python.
I tried the script below:
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
ap = PlainTextAuthProvider(username='##',password='##')
cass_contact_points=['cassdb01.p01.eng.sjc01.com', 'cassdb02.p01.eng.sjc01.com']
cluster = Cluster(cass_contact_points,auth_provider=ap,port=50126)
session = cluster.connect('##')
I'm getting the error below:
File "C:\python35\lib\site-packages\cassandra\cluster.py", line 2792, in _reconnect_internal
    raise NoHostAvailable("Unable to connect to any servers", errors)
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'10.44.67.92': OperationTimedOut('errors=None, last_host=None'), '10.44.67.91': OperationTimedOut('errors=None, last_host=None')})
I see two potential problems.
First, unless you're in an upgrade scenario or you've had past problems with protocol versions, I would not specify the protocol version; the driver should negotiate it. Setting it to 2 will fail with Cassandra 3.x, for example.
Second, I don't think the driver can properly parse ports out of the endpoints, as in:
node_ips = ['cassdb01.p01.eng.sjc01.com:50126', 'cassdb02.p01.eng.sjc01.com:50126']
When I pass the port along with my endpoints, I get a similar failure. So I would try changing that line to this:
node_ips = ['cassdb01.p01.eng.sjc01.com', 'cassdb02.p01.eng.sjc01.com']
Try starting with that. I have some other thoughts, but let's get those two obvious settings out of the way.
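To make the second point concrete, here is a sketch that strips ports from endpoint strings and passes the port separately via Cluster(port=...); the Cluster call itself is left as a comment since it needs a reachable cluster:

```python
# Contact points should be bare hostnames; the port belongs in Cluster(port=...).
def split_endpoint(endpoint, default_port=9042):
    """Split 'host:port' into (host, port); bare hosts get the default port."""
    host, sep, port = endpoint.partition(':')
    return host, int(port) if sep else default_port

endpoints = ['cassdb01.p01.eng.sjc01.com:50126', 'cassdb02.p01.eng.sjc01.com:50126']
node_ips = [split_endpoint(e)[0] for e in endpoints]
port = split_endpoint(endpoints[0])[1]
print(node_ips, port)

# from cassandra.cluster import Cluster
# cluster = Cluster(node_ips, auth_provider=ap, port=port)  # needs a live cluster
```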
Has anyone used the Python wrapper for the Five9 API?
I'm trying to run one of our custom reports through the API but am unsure how to pass the time criteria.
If the report is already built:
Is there a way to run it without passing criteria?
It seems like this should be enough (since the report is already built):
client.configuration.runReport(folderName='Shared Reports', reportName='Test report')
But it didn't work as expected.
What can I do to solve this?
This works for me. You could probably get fancier with a while loop that checks whether the report is done, but this isn't time-sensitive.
from five9 import Five9
import time

client = Five9('username', 'password')
start = '2019-08-01T00:00:00.000'
end = '2019-08-29T00:00:00.000'
criteria = {'time': {'start': start, 'end': end}}
identifier = client.configuration.runReport(folderName='SharedReports',
                                            reportName='reportname',
                                            criteria=criteria)
time.sleep(15)
get_results = client.configuration.getReportResult(identifier)
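The "fancier while loop" could look like the helper below. The polling logic is plain Python; the commented usage line assumes the wrapper exposes the Five9 configuration service's isReportRunning call, which you should verify against the wrapper before relying on it:

```python
import time

def wait_for_report(is_running, poll_interval=5, timeout=300):
    """Poll is_running() until it returns False; raise if timeout is exceeded."""
    waited = 0
    while is_running():
        if waited >= timeout:
            raise TimeoutError('report did not finish in time')
        time.sleep(poll_interval)
        waited += poll_interval

# Hypothetical usage (check that the wrapper actually exposes isReportRunning):
# wait_for_report(lambda: client.configuration.isReportRunning(identifier))
# get_results = client.configuration.getReportResult(identifier)
```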
I'm writing some Python 3 code using the ldap3 library and I'm trying to prevent LDAP injection. The OWASP injection-prevention cheat sheet recommends using a safe/parameterized API (among other things). However, I can't find a safe API or a safe method for composing search queries in the ldap3 docs. Most of the search queries in the docs use hard-coded strings, like this:
conn.search('dc=demo1,dc=freeipa,dc=org', '(objectclass=person)')
and I'm trying to avoid the need to compose queries in a manner similar to this:
conn.search(search, '(accAttrib=' + accName + ')')
Additionally, there seems to be no mention of 'injection' or 'escaping' or similar concepts in the docs. Does anyone know if this is missing in this library altogether or if there is a similar library for Python that provides a safe/parameterized API? Or has anyone encountered and solved this problem before?
A final point: I've seen the other StackOverflow questions that point out how to use whitelist validation or escaping as a way to prevent LDAP-injection and I plan to implement them. But I'd prefer to use all three methods if possible.
I was a little surprised that the documentation doesn't seem to mention this. However, there is a utility function, escape_filter_chars, which I believe is what you are looking for:
from ldap3.utils import conv
attribute = conv.escape_filter_chars("bar)", encoding=None)
query = "(foo={0})".format(attribute)
conn.search(search, query)
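If you want to see what that escaping amounts to, RFC 4515 says the special filter characters (backslash, asterisk, parentheses, NUL) must be replaced with \XX hex escapes. Here is a hand-rolled sketch of the same idea; in real code, prefer the library's escape_filter_chars:

```python
# Minimal RFC 4515 value escaping: replace the special filter characters
# with their \XX hex escapes.
def escape_ldap_filter_value(value):
    specials = '\\*()\x00'
    return ''.join('\\%02x' % ord(ch) if ch in specials else ch
                   for ch in value)

print(escape_ldap_filter_value('bar)'))    # -> bar\29
print(escape_ldap_filter_value('*admin*')) # -> \2aadmin\2a
```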
I believe the best way to prevent LDAP injection in python3 is to use the Abstraction Layer.
Example code:
from ldap3 import ObjectDef, Reader

# First create a connection to ldap to use.
# I use a function that creates my connection (abstracted here)
conn = self.connect_to_ldap()
o = ObjectDef('groupOfUniqueNames', conn)
query = 'Common Name: %s' % cn
r = Reader(conn, o, 'dc=example,dc=org', query)
r.search()
Notice how the query is abstracted? The query would raise an error if someone tried to inject a search filter here. Also, this search is protected by a Reader instead of a Writer. The ldap3 documentation goes through all of this.
I am trying to load an SPSS file into a pandas DataFrame in Python, and am looking for easier ways to do it given recent developments in running R code from within Python, which led me to PyRserve.
After connecting to PyRserve,
import pyRserve
conn = pyRserve.connect()
One can pretty much run basic R code, such as
conn.eval('3+5')  # output = 8.0
However, if it is possible in PyRserve, how does one import an R library to load a data frame with R code like the lines below,
library(foreign)
dat<-read.spss("/path/spss_file.sav", to.data.frame=TRUE)
and hopefully get it into a pandas DataFrame? Any thoughts are appreciated!
# import pyRserve
import pyRserve

# open a pyRserve connection
conn = pyRserve.connect()

# load your R script into a variable (you can even write functions)
test_r_script = '''
library(foreign)
dat <- read.spss("/path/spss_file.sav",
                 to.data.frame=TRUE)
'''

# evaluate the script over the connection
variable = conn.eval(test_r_script)
print(variable)

# close the pyRserve connection
conn.close()
My apologies for not explaining it properly... I am adding my GitHub link so you can see more examples; I think I have explained it properly there:
https://github.com/shintojoseph1234/Rserve-and-pyRserve
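Since the question also asked about ending up in a pandas DataFrame: one route (a sketch under the assumptions that an Rserve instance is running and the foreign package is installed; spss_to_dataframe is a made-up helper name) is to have R write a CSV and read it back with pandas:

```python
import pandas as pd

def spss_to_dataframe(conn, sav_path, csv_path):
    """Ask R (over an open pyRserve connection) to read the .sav file and
    dump it to CSV, then load that CSV into pandas."""
    conn.eval('library(foreign); '
              'dat <- read.spss("%s", to.data.frame=TRUE); '
              'write.csv(dat, "%s", row.names=FALSE)' % (sav_path, csv_path))
    return pd.read_csv(csv_path)

# df = spss_to_dataframe(pyRserve.connect(), "/path/spss_file.sav", "/tmp/out.csv")
```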