fetch data from one python file into another - python

I am trying to break my Python script into multiple .py files. In the sf_opps.py file I have all the login credentials and a query that fetches the data with a REST API call. The data is stored in the sf_prod_data variable. How can I access this variable, which contains the data I need, from another .py file?
I need to loop through sf_prod_data, and I don't want to use classes since my code is mostly loops, so I need to know how to access variables holding data from a different .py file.
I have tried:
import sf_opps
print(sf_prod_data)
but sf_prod_data is undefined (a NameError).

Either import the name directly:
from sf_opps import sf_prod_data
print(sf_prod_data)
or import the module and qualify the name:
import sf_opps
print(sf_opps.sf_prod_data)
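As a concrete sketch of the two files (fetch_sf_data is a hypothetical stand-in for whatever REST call sf_opps.py actually makes; the key point is the module-level assignment):
# sf_opps.py
# ... login credentials, REST API query ...
sf_prod_data = fetch_sf_data()  # hypothetical helper; assigned at module level

# main.py
from sf_opps import sf_prod_data

for record in sf_prod_data:
    print(record)  # loop over the fetched data
Note that importing sf_opps runs the whole module, so the REST call fires on first import; if that becomes a problem, wrap the call in a function and call it explicitly.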
Further reading: the Python tutorial on modules

Related

Execute multiple scripts in Python by passing the script names

I have a total of 25-30 scripts that update data on the server. New data is generated every day, so every day I have to run these scripts manually, in sequence, to update the database. All these scripts contain 300+ lines of SQL queries.
Now I want to automate this task with Python, but I'm not really sure how to do that.
In the past I have used libraries to connect to SQL Server and then defined a cursor to execute certain queries - cur.execute("select * from abc")
But now I want to automate this task and run all the scripts by passing just the script names.
Like
cur.execute(sql_script_1)
cur.execute(sql_script_2)
cur.execute(sql_script_3)
...
cur.execute(sql_script_25)
In this way, in the end, I'll just have to run this .py file and it will automatically run all scripts in the given order.
Can this be done somehow? Either in this way or some other way.
The main motive is to automate the task of running all scripts by just passing the names.
Your question is probably a bad one - kinda vague, with no research or implementation effort shown. But I had a similar use case, so I will share.
In my case I needed to pull down data as a pandas DataFrame. Your case might vary, but I imagine the basic structure will remain the same.
In any case here is what I did:
1. You store each of the SQL scripts as a string variable in a Python file (or files) somewhere in your project directory.
2. You define a function that manages the connection and takes a SQL script as an argument.
3. You call that function as needed for each script.
So the Python file from step 1 looks something like:
sql_script_1 = 'select....from...where...'
sql_script_2 = 'select....from...where...'
sql_script_3 = 'select....from...where...'
Then you define the function from step 2 to manage the connection. Something like:
import pandas as pd
import pyodbc

def query_passer(query: str, conn_db: str):
    # open a connection, run the query, return the result as a DataFrame
    conn = pyodbc.connect(conn_db)
    df = pd.read_sql(query, conn)
    conn.close()
    return df
Then, in step 3, you call the function, do whatever you were going to do with the data, and repeat for each query.
The code below assumes the query_passer function was saved in a Python file named "functions.py" in the subfolder "resources", and that the queries are stored in a file named "query1.py" in the subfolder "resources/queries". Organize your own files as you wish, or just keep them all in one big file.
from resources.functions import query_passer
from resources.queries.query1 import sql_script_1, sql_script_2, sql_script_3
import pandas as pd
# define the connection
conn_db = (
    "Driver={stuff};"
    "Server=stuff;"
    "Database=stuff;"
    ".....;"
)
# run queries
df = query_passer(sql_script_1, conn_db)
df.to_csv('sql_script_1.csv')
df = query_passer(sql_script_2, conn_db)
df.to_csv('sql_script_2.csv')
df = query_passer(sql_script_3, conn_db)
df.to_csv('sql_script_3.csv')
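Since the scripts in the question update data rather than return it, a read_sql-based function won't quite fit. A variant that simply executes each script and commits might look like this (a sketch assuming pyodbc; script_runner and the loop are illustrative, not from the original answer):
import pyodbc

def script_runner(script: str, conn_db: str):
    # execute a script that modifies data; nothing is returned
    conn = pyodbc.connect(conn_db)
    cursor = conn.cursor()
    cursor.execute(script)
    conn.commit()
    conn.close()

# run every script in the given order by looping over a list
for script in [sql_script_1, sql_script_2, sql_script_3]:
    script_runner(script, conn_db)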

Creating *.mat file from Python without using dictionary

I have a few lists which I want to save to a *.mat file. But according to the scipy.io.savemat documentation, I need to create a dictionary with the lists and then use the command to save it to a *.mat file.
If I save it the way the docs describe, the mat file will contain a struct whose fields are the arrays I used in the dictionary. Now I have a problem here: another program (which is not editable) will use the mat files and load them to plot some graphs from the data. That program cannot process the struct, because it is written so that when it loads a mat file it directly processes the arrays in it.
So is there a way to save the mat file without using dictionaries? Please see the image for more understanding.
Thanks
This is the sample code I used to save my *.mat file:
import os
import scipy.io as sio

os.chdir(os.getcwd())  # no-op: changes into the directory we are already in

x = [1, 2, 3, 4, 5]
y = [234, 5445, 778]  # can be 1000 lists
data = {}
data['x'] = x
data['y'] = y
sio.savemat('test.mat', {'interpolated_data': data})
How about
import numpy as np
import scipy.io

scipy.io.savemat('interpolated_data_max_compare.mat',
                 {'NA1_X_order10_ACCE_ms2': np.zeros((3000, 1)),
                  'NA1_X_order10_DISP_mm': np.ones((3000, 1))})
Should work fine...
According to the code you added in your question, instead of sio.savemat('...', {'interpolated_data': data}), just save
sio.savemat('...', data)
and you should be fine: data is already a dictionary, so you don't need to add an extra level with {'interpolated_data': data} when saving.
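Applied to the code from the question, that would be (a minimal sketch; x and y then land in the mat file as top-level arrays rather than struct fields):
import scipy.io as sio

x = [1, 2, 3, 4, 5]
y = [234, 5445, 778]
data = {'x': x, 'y': y}
sio.savemat('test.mat', data)  # saves x and y as top-level variables, no struct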
You could use the writing primitives directly:
import scipy.io.matlab as ml

with open("something.mat", "wb") as f:  # close the file properly when done
    mw = ml.mio5.MatFile5Writer(f)
    mw.put_variables({"testVar": 22})
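To sanity-check the result, you could read it back with loadmat (a quick check, not part of the original answer):
from scipy.io import loadmat
print(loadmat("something.mat")["testVar"])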

How to load .mat file into workspace using Matlab Engine API for Python?

I have a .mat workspace file containing 4 character variables. These variables contain paths to various folders I need to be able to cd to and from relatively quickly. Usually, when using only Matlab I can load this workspace as follows (provided the .mat file is in the current directory).
load paths.mat
Currently I am experimenting with the Matlab Engine API for Python. The Matlab help docs recommend using the following Python formula to send variables to the current workspace in the desktop app:
import matlab.engine
eng = matlab.engine.start_matlab()
x = 4.0
eng.workspace['y'] = x
a = eng.eval('sqrt(y)')
print(a)
Which works well. However, the whole point of the .mat file is that it can quickly load an entire set of variables the user is comfortable with, so the above is not efficient for loading the workspace.
I have also tried two different variations in Python:
eng.load("paths.mat")
eng.eval("load paths.mat")
The first variation successfully loads a dict variable in Python containing all four keys and values, but this does not propagate to the workspace in Matlab. The second variation throws an error:
File "<string>", line unknown
SyntaxError: Error: Unexpected MATLAB expression.
How do I load up a workspace through the engine without having to do it manually in Matlab? This is an important part of my workflow.
You didn't specify the number of output arguments from the MATLAB engine, which is a possible reason for the error.
I would expect the error from eng.load("paths.mat") to read something like
TypeError: unsupported data type returned from MATLAB
The difference in error messages may arise from different versions of MATLAB, engine API...
In any case, try specifying the number of output arguments like so,
eng.load("paths.mat", nargout=0)
This was giving me fits for a while. A few things to try; I was able to get this working on Matlab 2019a with Python 3.7. I had the most trouble trying to create a string and use it as an argument for load and eval/evalin, so there might be some trickiness with single or double quotes, or a need for an additional set of quotes in the string.
Make sure the MAT file is on the Matlab Path. You can use addpath and rmpath really easily with pathlib objects:
from pathlib import Path
mat_file = Path('local/path/from/cwd/example.mat').resolve()  # get absolute path
eng.addpath(str(mat_file.parent))
# Execute other commands
eng.rmpath(str(mat_file.parent))
Per dML's answer, make sure to specify nargout=0 when there are no outputs from the function, and always when calling a script. If there are one or more outputs you don't have to capture them in Python, and if there is more than one they will be returned as a tuple.
You can also turn your script into a function (it just won't have access to the base workspace without using evalin/assignin):
function load_example_matfile()
    evalin('base', 'load example.mat')
end
and then call it from Python (again with nargout=0, since the function returns nothing):
eng.feval('load_example_matfile', nargout=0)
And it does seem to work with plain vanilla eval and load as well, but if you leave off the nargout=0 it either errors out or gives you the contents of the file directly in Python.
Both of these work.
eng.eval('load example.mat', nargout=0)
eng.load('example.mat', nargout=0)
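Putting the pieces together, an end-to-end sketch (path_var is a hypothetical name for one of the four character variables inside paths.mat; substitute your own):
import matlab.engine

eng = matlab.engine.start_matlab()
eng.eval("load('paths.mat')", nargout=0)  # populate the MATLAB workspace
folder = eng.workspace['path_var']        # pull a loaded char variable into Python
eng.cd(folder, nargout=0)                 # cd in MATLAB using that path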

Storing data globally in Python

Django and Python newbie here. Ok, so I want to make a webpage where the user can enter a number between 1 and 10. Then, I want to display an image corresponding to that number. Each number is associated with an image filename, and these 10 pairs are stored in a list in a .txt file.
One way to retrieve the appropriate filename is to create a NumToImage model, which has an integer field and a string field, and store all 10 NumToImage objects in the SQL database. I could then retrieve the filename for any query number. However, this does not seem like such a great solution for storing a simple .txt file which I know is not going to change.
So, what is the way to do this in Python, without using a database? I am used to C++, where I would create an array of strings, one for each of the numbers, and load these from the .txt file when the application starts. This vector would then lie within a static object such that I can access it from anywhere in my application.
How can a similar thing be done in Python? I don't know how to instantiate a Python object and then enable it to be accessible from other Python scripts. The only way I can think of doing this is to pass the object instance as an argument for every single function that I call, which is just silly.
What's the standard solution to this?
Thank you.
The Python way is quite similar: you run code at the module level, and create objects in the module namespace that can be imported by other modules.
In your case it might look something like this:
myimage.py
imagemap = {}

# read the (image_num, image_path) pairs from the
# file, one pair per line (the file name is an assumption)
with open('images.txt') as f:
    for line in f:
        num, path = line.split()
        imagemap[int(num)] = path
views.py
from myimage import imagemap
def my_view(image_num):
    image_path = imagemap[image_num]
    # do something with image_path
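One thing worth knowing: module-level code runs once per process, on first import, so the file is read a single time and every module that imports imagemap shares the same dict.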

Python: Append dictionary in another file

I am working on a program and I need to access a dictionary from another file, which I know how to do. I also need to be able to append to the same dictionary and have it saved in its current form to the other file.
Is there any way to do this?
EDIT:
The program requires you to log in. You can create an account, and when you do, it needs to save the username:password pair you entered into the dictionary. The way I had it, you could create an account, but once you quit the program the account was deleted.
You can store and retrieve data structures using the pickle module in Python, which provides object serialisation.
Save the dictionary
import pickle

some_dict = {'this': 1, 'is': 2, 'an': 3, 'example': 4}
with open('saved_dict.pkl', 'wb') as pickle_out:  # pickle requires binary mode
    pickle.dump(some_dict, pickle_out)
Load the dictionary
with open('saved_dict.pkl', 'rb') as pickle_in:  # binary mode for reading too
    that_dict_again = pickle.load(pickle_in)
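For the login use case from the question, the two pieces combine into something like this (a sketch; the file name accounts.pkl and the sample credentials are made up):
import os
import pickle

ACCOUNTS_FILE = 'accounts.pkl'  # hypothetical file name

def load_accounts():
    # return the stored accounts, or an empty dict on first run
    if os.path.exists(ACCOUNTS_FILE):
        with open(ACCOUNTS_FILE, 'rb') as f:
            return pickle.load(f)
    return {}

def save_accounts(accounts):
    with open(ACCOUNTS_FILE, 'wb') as f:
        pickle.dump(accounts, f)

accounts = load_accounts()
accounts['new_user'] = 'secret'  # add a username:password pair
save_accounts(accounts)          # persists across program restarts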
