python cut between partitioned column results

I use the code below in Spark (Scala) to get the partitioned columns.
scala> val part_cols= spark.sql(" describe extended work.quality_stat ").select("col_name").as[String].collect()
part_cols: Array[String] = Array(x_bar, p1, p5, p50, p90, p95, p99, x_id, y_id, # Partition Information, # col_name, x_id, y_id, "", # Detailed Table Information, Database, Table, Owner, Created Time, Last Access, Created By, Type, Provider, Table Properties, Location, Serde Library, InputFormat, OutputFormat, Storage Properties, Partition Provider)
scala> part_cols.takeWhile( x => x.length()!= 0 ).reverse.takeWhile( x => x != "# col_name" )
res20: Array[String] = Array(x_id, y_id)
and I need to get similar output in Python. I'm struggling to replicate the same array operation in Python to get ['y_id', 'x_id'].
Below is what I tried.
>>> part_cols=spark.sql(" describe extended work.quality_stat ").select("col_name").collect()
Is it possible using Python?

part_cols in the question is an array of rows. So the first step is to convert it into an array of strings.
part_cols = spark.sql(...).select("col_name").collect()
part_cols = [row['col_name'] for row in part_cols]
Now the start and end of the part of the list that you are interested in can be calculated with
start_index = part_cols.index("# col_name") + 1
end_index = part_cols.index('', start_index)
Finally, a slice can be extracted from the list with these two values as start and end:
part_cols[start_index:end_index]
This slice will contain the values
['x_id', 'y_id']
If the output really should be reversed, the slice
part_cols[end_index-1:start_index-1:-1]
will contain the values
['y_id', 'x_id']
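Putting the steps together, a minimal end-to-end sketch (table name taken from the question; the reversed slice is only needed if the order really matters):
part_cols = spark.sql("describe extended work.quality_stat").select("col_name").collect()
part_cols = [row['col_name'] for row in part_cols]
start_index = part_cols.index("# col_name") + 1
end_index = part_cols.index('', start_index)
partition_cols = part_cols[start_index:end_index]       # ['x_id', 'y_id']
partition_cols_reversed = partition_cols[::-1]          # ['y_id', 'x_id']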


Drop rows in dataframe if length of the name columns <=1

Please point out where I am going wrong, or whether this is a duplicate of another question.
I have 11 columns in my table. I am loading data from a Ceph (AWS) bucket into Postgres, and while doing that I have to filter the data with the conditions below before inserting it into Postgres:
Drop the entire row if there are any empty/null values in any column.
First name and last name should have more than a single letter. E.g. if first name = A or last name = P (either one, or both), the entire record/row should be dropped.
Zip code should be 5 digits or more, with a maximum of 7 digits.
First name and last name should not contain suffixes such as Jr, Sr, I, II, etc.; otherwise drop the entire record.
I have managed to execute the first step (I'm new to pandas) but I was blocked at the next step, and I believe a solution for step 2 might also help me solve step 3. While doing quick research on Google, I found that I might be complicating the process by using chunks and might have to use 'concat' to apply it to all chunks, or maybe I am wrong, but I am dealing with a huge amount of data and using chunks helps me load the data into Postgres faster.
I am going to paste my code here and mention what I tried, what the output was, and what the expected output would be.
What I tried:
columns = [
    'cust_last_nm',
    'cust_frst_nm',
    'cust_brth_dt',
    'cust_gendr_cd',
    'cust_postl_cd',
    'indiv_entpr_id',
    'TOKEN_1',
    'TOKEN_2',
    'TOKEN_3',
    'TOKEN_4',
    'TOKEN_KEY'
]

def push_to_pg_weekly(key):
    vants = []
    print(key)
    key = _download_s3(key)
    how_many_files_pushed.append(True)
    s = sp.Popen(["wc", "-l", key], stdout=sp.PIPE)
    a, b = s.communicate()
    total_rows = int(a.split()[0])
    rows = 0
    data = pd.read_csv(key, sep="|", header=None, chunksize=100000)
    for chunk in data:
        rows += len(chunk)
        print("Processed rows: ", (float(rows)/total_rows)*100)
        chunk = chunk.dropna(axis=0)  # step 1: drop the rows where at least one element is missing
        index_names = chunk[(len(chunk[0]) <= 1) | (len(chunk[1]) <= 1)].index  # step 2
        chunk.drop(index_names, axis=0)
        chunk.to_csv("/tmp/sample.csv", sep="|", header=None, index=False)
        connection = psycopg2.connect(user=os.environ.get("DATABASE_USER", "USERNAME"),
                                      password=os.environ.get("DATABASE_PASS", "PASSWORD"),
                                      host=os.environ.get("DATABASE_HOST", "cvlpsql.pgsql.com"),
                                      port=5432,
                                      dbname=os.environ.get("DATABASE_NAME", "cvlpsql_db"),
                                      options="-c search_path=DATAVANT_O")
        with connection.cursor() as cursor:
            cursor.copy_from(open('/tmp/sample.csv'), "COVID1", sep='|')
        connection.commit()

def push_to_pg():
    paginator = CLIENT.get_paginator('list_objects')
    pages = paginator.paginate(Bucket=bucket)
    for page in pages:
        if "Contents" in page:
            for obj in page["Contents"]:
                if obj['Key'].startswith('test/covid-2020-11-10-175213') and (obj['Key'].endswith('.txt') or obj['Key'].endswith('.csv')):
                    push_to_pg_weekly(obj['Key'])
                    os.remove(obj['Key'])
    return
Data:
john|doe|1974-01-01|F|606.0|113955973|cC80fi6kHVjKRNgUnATuE8Nn5x/YyoTUdSDY3sUDis4=|2qalDHguJRO9gR66LZcRLSe2SSQSQAIcT9btvaqLnZk=|eLQ9vYAj0aUfMM9smdpXqIh7QRxLBh6wYl6iYkItz6g=|3ktelRCCKf1CHOVBUdaVbjqltxa70FF+9Lf9MNJ+HDU=|cigna_TOKEN_ENCRYPTION_KEY
j|ab|1978-01-01|M|328.0|125135976|yjYaupdG9gdlje+2HdQB+FdEEj6Lh4+WekqEuB1DSvM=|j8VuTUKll7mywqsKrqBnomppGutsoJAR+2IoH/Tq0b8=|6qNP9ch57MlX912gXS7RMg7UfjtaP6by/cR68PbzNmQ=|R5DemSNrFvcevijrktwf3aixOShNU6j7wfahyKeUyzk=|cigna_TOKEN_ENCRYPTION_KEY
j|j|1985-01-01|F|105.0|115144390|fn0r8nVzmDJUihnaQh1SXm1sLOIjzGsPDBskdX4/b+0=|Fh6facONoOiL9hCCA8Q1rtUp9n5h9VBhg2IaX9gjaKI=|NWtnZegpcpgcit2u063zQv3pcEhk4bpKHKFa9hW7LtU=|P3cVOUd6PyYN5tKezdMkVDI62aW8dv+bjIwKtAgX3OM=|cigna_TOKEN_ENCRYPTION_KEY
jh|on|1989-01-01|M|381.0|133794239|PvCWdh+ucgi1WyP5Vr0E6ysTrTZ1gLTQIteXDxZbEJg=|7K3RsfC8ItQtrEQ+MdBGpx6neggYvBvR8nNDMOBTRtU=|nHsF/rJFM/O+HPevTj9cVYwrXS1ou+2/4FelEXTV0Ww=|Jw/nzI/Gu9s6QsgtxTZhTFFBXGLUv06vEewxQbhDyWk=|cigna_TOKEN_ENCRYPTION_KEY
||1969-01-01|M|926.0|135112782|E2sboFz4Mk2aGIKhD4vm6J9Jt3ZSoSdLm+0PCdWsJto=|YSILMFS5sPPZZF/KFroEHV77z1bMeiL/f4FqF2kj4Xc=|tNjgnby5zDbfT2SLsCCwhNBxobSDcCp7ws0zYVme5w4=|kk25p0lrp2T54Z3B1HM3ZQN0RM63rjqvewrwW5VhYcI=|cigna_TOKEN_ENCRYPTION_KEY
||1978-01-01|M|70.0|170737333|Q8NDJz563UrquOUUz0vD6Es05vIaAD/AfVOef4Mhj24=|k5Q02GVd0nJ6xMs1vHVM24MxV6tZ46HJNKoePcDsyoM=|C9cvHz5n+sDycUecioiWZW8USE6D2dli5gRzo4nOyvY=|z4eNSVNDAjiPU2Sw3VY+Ni1djO5fptl5FGQvfnBodr4=|cigna_TOKEN_ENCRYPTION_KEY
||1996-01-01|M|840.0|91951973|Y4kmxp0qdZVCW5pJgQmvWCfc4URg9oFnv2DWGglfQKM=|RJfyDYJjwuZ1ZDjP+5PA5S2fLS6llFD51Lg+uJ84Tus=|+PXzrKt7O79FehSnL3Q8EjGmnyZVDUfdM4zzHk1ghOY=|gjyVKjunky2Aui3dxzmeLt0U6+vT39/uILMbEiT0co8=|cigna_TOKEN_ENCRYPTION_KEY
||1960-01-01|M|180.0|64496569|80e1CgNJeO8oYQHlSn8zWYL4vVrHSPe9AnK2T2PrdII=|bJl7veT+4MlU4j2mhFpFyins0xeCFWeaA30JUzWsfqo=|0GuhUfbS4xCnCj2ms43wqmGFG5lCnfiIQdyti9moneM=|lq84jO9yhz8f9/DUM0ACVc/Rp+sKDvHznVjNnLOaRo4=|cigna_TOKEN_ENCRYPTION_KEY
||1963-01-01|M|310.0|122732991|zEvHkd5AVT7hZFR3/13dR9KzN5WSulewY0pjTFEov2Y=|eGqNbLoeCN1GJyvgaa01w+z26OtmplcrAY2vxwOZ4Y4=|6q9DPLPK5PPAItZA/x253DvdAWA/r6zIi0dtIqPIu2g=|lOl11DhznPphGQOFz6YFJ8i28HID1T6Sg7B/Y7W1M3o=|cigna_TOKEN_ENCRYPTION_KEY
||2001-01-01|F|650.0|43653178|vv/+KLdhHqUm13bWhpzBexwxgosXSIzgrxZIUwB7PDo=|78cJu1biJAlMddJT1yIzQAH1KCkyDoXiL1+Lo1I2jkw=|9/BM/hvqHYXgfmWehPP2JGGuB6lKmfu7uUsmCtpPyz8=|o/yP8bMzFl6KJ1cX+uFll1SrleCC+8BXmqBzyuGdtwM=|cigna_TOKEN_ENCRYPTION_KEY
Output - data inserted into Postgres DB:
john|doe|1974-01-01|F|606.0|113955973|cC80fi6kHVjKRNgUnATuE8Nn5x/YyoTUdSDY3sUDis4=|2qalDHguJRO9gR66LZcRLSe2SSQSQAIcT9btvaqLnZk=|eLQ9vYAj0aUfMM9smdpXqIh7QRxLBh6wYl6iYkItz6g=|3ktelRCCKf1CHOVBUdaVbjqltxa70FF+9Lf9MNJ+HDU=|cigna_TOKEN_ENCRYPTION_KEY
j|ab|1978-01-01|M|328.0|125135976|yjYaupdG9gdlje+2HdQB+FdEEj6Lh4+WekqEuB1DSvM=|j8VuTUKll7mywqsKrqBnomppGutsoJAR+2IoH/Tq0b8=|6qNP9ch57MlX912gXS7RMg7UfjtaP6by/cR68PbzNmQ=|R5DemSNrFvcevijrktwf3aixOShNU6j7wfahyKeUyzk=|cigna_TOKEN_ENCRYPTION_KEY
j|j|1985-01-01|F|105.0|115144390|fn0r8nVzmDJUihnaQh1SXm1sLOIjzGsPDBskdX4/b+0=|Fh6facONoOiL9hCCA8Q1rtUp9n5h9VBhg2IaX9gjaKI=|NWtnZegpcpgcit2u063zQv3pcEhk4bpKHKFa9hW7LtU=|P3cVOUd6PyYN5tKezdMkVDI62aW8dv+bjIwKtAgX3OM=|cigna_TOKEN_ENCRYPTION_KEY
jh|on|1989-01-01|M|381.0|133794239|PvCWdh+ucgi1WyP5Vr0E6ysTrTZ1gLTQIteXDxZbEJg=|7K3RsfC8ItQtrEQ+MdBGpx6neggYvBvR8nNDMOBTRtU=|nHsF/rJFM/O+HPevTj9cVYwrXS1ou+2/4FelEXTV0Ww=|Jw/nzI/Gu9s6QsgtxTZhTFFBXGLUv06vEewxQbhDyWk=|cigna_TOKEN_ENCRYPTION_KEY
Expected Output:
john|doe|1974-01-01|F|606.0|113955973|cC80fi6kHVjKRNgUnATuE8Nn5x/YyoTUdSDY3sUDis4=|2qalDHguJRO9gR66LZcRLSe2SSQSQAIcT9btvaqLnZk=|eLQ9vYAj0aUfMM9smdpXqIh7QRxLBh6wYl6iYkItz6g=|3ktelRCCKf1CHOVBUdaVbjqltxa70FF+9Lf9MNJ+HDU=|cigna_TOKEN_ENCRYPTION_KEY
jh|on|1989-01-01|M|381.0|133794239|PvCWdh+ucgi1WyP5Vr0E6ysTrTZ1gLTQIteXDxZbEJg=|7K3RsfC8ItQtrEQ+MdBGpx6neggYvBvR8nNDMOBTRtU=|nHsF/rJFM/O+HPevTj9cVYwrXS1ou+2/4FelEXTV0Ww=|Jw/nzI/Gu9s6QsgtxTZhTFFBXGLUv06vEewxQbhDyWk=|cigna_TOKEN_ENCRYPTION_KEY
Any answers/comments will be very much appreciated, thank you.
The fastest way to do operations like this in pandas is with numpy.where.
E.g. for string length:
data = data[np.where((data['cust_last_nm'].str.len()>1) &
                     (data['cust_frst_nm'].str.len()>1), True, False)]
Note: you can add the postal code condition in the same way. By default, the postal codes in your data will be read in as floats, so cast them to string first and then set the length limit:
## string length & postal code conditions together
data = data[np.where((data['cust_last_nm'].str.len()>1) &
                     (data['cust_frst_nm'].str.len()>1) &
                     (data['cust_postl_cd'].astype('str').str.len()>4) &
                     (data['cust_postl_cd'].astype('str').str.len()<8)
                     , True, False)]
EDIT:
Since you are working in chunks, change data to chunk and put this inside your loop. Also, since you don't read headers (header=None), change the column names to their index values. And convert all values to strings before comparison, since otherwise NaN columns will be treated as floats, e.g.:
chunk = chunk[np.where((chunk[0].astype('str').str.len()>1) &
                       (chunk[1].astype('str').str.len()>1) &
                       (chunk[5].astype('str').str.len()>4) &
                       (chunk[5].astype('str').str.len()<8), True, False)]
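Dropped into the question's chunk loop, the combined filter might look like the sketch below. The column positions are assumptions based on the question's column list (0 = cust_last_nm, 1 = cust_frst_nm, 4 = cust_postl_cd), so double-check them against your file:
for chunk in data:
    chunk = chunk.dropna(axis=0)                                      # step 1: drop rows with any missing value
    chunk = chunk[np.where((chunk[0].astype('str').str.len() > 1) &   # step 2: names longer than one character
                           (chunk[1].astype('str').str.len() > 1) &
                           (chunk[4].astype('str').str.len() > 4) &   # step 3: postal code 5-7 characters (index assumed)
                           (chunk[4].astype('str').str.len() < 8), True, False)]
    chunk.to_csv("/tmp/sample.csv", sep="|", header=None, index=False)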
Create a new column in the dataframe with a value for the length:
df['name_length'] = df.name.str.len()
Index using the new column:
df = df[df.name_length > 1]
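For the fourth condition from the question (names containing suffixes such as Jr, Sr, II, ...), a minimal sketch in the same chunked style; the suffix list and the use of column indices 0 and 1 are my assumptions, not something covered by the answers above:
suffixes = r'\b(?:jr|sr|ii|iii|iv)\b'   # assumed suffix list, extend as needed
mask = (~chunk[0].astype('str').str.contains(suffixes, case=False, regex=True) &
        ~chunk[1].astype('str').str.contains(suffixes, case=False, regex=True))
chunk = chunk[mask]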

for loop with same dataframe on both side of the operator

I have defined 10 different DataFrames A06_df, A07_df, etc., which pick up six different data point inputs in a daily time series for a number of years. To be able to work with them I need to do some formatting operations such as:
A07_df=A07_df.fillna(0)
A07_df[A07_df < 0] = 0
A07_df.columns = col # col is defined
A07_df['oil']=A07_df['oil']*24
A07_df['water']=A07_df['water']*24
A07_df['gas']=A07_df['gas']*24
A07_df['water_inj']=0
A07_df['gas_inj']=0
A07_df=A07_df[['oil', 'water', 'gas','gaslift', 'water_inj', 'gas_inj', 'bhp', 'whp']]
etc for a few more formatting operations
Is there a nice way to have a for loop or something so I don’t have to write each operation for each dataframe A06_df, A07_df, A08.... etc?
As an example, I have tried
list=[A06_df, A07_df, A08_df, A10_df, A11_df, A12_df, A13_df, A15_df, A18_df, A19_df]
for i in list:
    i = i.fillna(0)
But this does not do the trick.
Any help is appreciated
As i.fillna() returns a new object (an updated copy of your original dataframe), i = i.fillna(0) will update the content of i but not the list contents A06_df, A07_df, ....
I suggest you copy the updated content in a new list like this:
list_raw = [A06_df, A07_df, A08_df, A10_df, A11_df, A12_df, A13_df, A15_df, A18_df, A19_df]
list_updated = []
for i in list_raw:
    i = i.fillna(0)
    # More code here
    list_updated.append(i)
To simplify your future processing I would recommend using a dictionary of dataframes instead of a list of named variables.
dfs = {}
dfs['A0'] = ...
dfs['A1'] = ...
dfs_updated = {}
for k, i in dfs.items():
    i = i.fillna(0)
    # More code here
    dfs_updated[k] = i
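Building on the dictionary approach, the formatting steps from the question could be wrapped in a single function and applied to every dataframe; a minimal sketch (col and the column names are taken from the question):
def format_df(df):
    df = df.fillna(0)
    df[df < 0] = 0
    df.columns = col          # col is defined, as in the question
    for c in ('oil', 'water', 'gas'):
        df[c] = df[c] * 24
    df['water_inj'] = 0
    df['gas_inj'] = 0
    return df[['oil', 'water', 'gas', 'gaslift', 'water_inj', 'gas_inj', 'bhp', 'whp']]

dfs_updated = {name: format_df(df) for name, df in dfs.items()}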

Two Dataframes, one with more columns than the other -> Subtract and Combine

Ok, I know the title may be a little bit confusing, but I will try to explain this in detail.
I use Python 3.5.2.
I have two .csv files that I read via pandas and convert into two separate dataframes. The first dataframe (coming from XYZ.csv) looks like this:
ip community
10.0.0.1 OL123
.
.
.
123.12.5.31 IK753
The second (export.csv) just has the "ip" column.
Now what I want to do:
I want to compare the two dataframes and, as a result, get a third dataframe (or list) that contains all IP addresses that are in the first dataframe but not in the other, WITH their corresponding community. So far, I have managed to compare the two and get a proper result, as long as the second dataframe also contains the communities. I manually inserted those communities into the second export.csv; unfortunately I cannot automate this, and that is why I need this to work without the second dataframe containing the communities.
This is my code:
def compare_csvs():
    timestamp = time.strftime("%Y-%m-%d")

    # Reads XYZ.csv and creates list that contains all ip addresses in integer format.
    A = pd.read_csv("XYZ.csv", index_col=False, header=0)
    ips1 = A.ip.tolist()
    comu1 = A.ro_community.tolist()
    AIP = []
    for element1 in ips1:
        AIP.append(int(ipaddress.IPv4Address(element1)))
    IPACOM1 = zip(AIP, comu1)

    # Reads export.csv and creates list that contains all ip addresses in integer format.
    B = pd.read_csv("export" + timestamp + ".csv", index_col=False, header=0)
    ips2 = B.ip.tolist()
    comu2 = B.ro_community.tolist()
    BIP = []
    for element2 in ips2:
        BIP.append(int(ipaddress.IPv4Address(element2)))
    IPACOM2 = zip(BIP, comu2)

    # Creates a set that contains all ip addresses (in integer format) that exist inside XYZ.csv but not export.csv.
    DeltaInt = OrderedSet(IPACOM1) - OrderedSet(IPACOM2)
    List = list(DeltaInt)
    UnzippedIP = []
    UnzippedCommunity = []
    UnzippedIP, UnzippedCommunity = zip(*List)

    # Puts all the elements of the DeltaInt set inside a list and also changes the integers back to readable IPv4-addresses.
    DeltaIP = []
    for element3 in UnzippedIP:
        DeltaIP.append(str(ipaddress.IPv4Address(element3)))
    IPandCommunity = zip(DeltaIP, UnzippedCommunity)
Now all I need is something that can compare the two lists I created and keep the "community" with the "ip" it is assigned to. I tried a whole lot, but I can't seem to get anything to work. Maybe I am just having a problem with the logic here; all help is appreciated!
Also, excuse the code mess, I just threw all that together and will clean it up once the code actually works.
Here is some dummy data to play with:
This is df:
ip community
10.0.0.1 OL123
10.1.1.1 ACLSH
10.9.8.7 OKUAJ1
123.12.5.31 IK753
df = pd.read_clipboard()
This is export.csv:
s_export = pd.Series(name='ip', data=['10.1.1.1', '123.12.5.31', '0.0.0.0'])
s_export
0 10.1.1.1
1 123.12.5.31
2 0.0.0.0
Name: ip, dtype: object
To select the ones that aren't in export, we can simply use boolean indexing using isin():
# ~ means 'not', so here that's "find df.ip that is NOT in s_export"
# Store result in a dataframe
df_exclude = df[~df.ip.isin(s_export)]
df_exclude
ip community
0 10.0.0.1 OL123
2 10.9.8.7 OKUAJ1
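Applied to the CSV files from the question, the same boolean indexing could replace most of compare_csvs; a minimal sketch (file names as in the question, assuming both files have an ip column):
timestamp = time.strftime("%Y-%m-%d")
A = pd.read_csv("XYZ.csv", index_col=False, header=0)
B = pd.read_csv("export" + timestamp + ".csv", index_col=False, header=0)
# rows of XYZ.csv whose ip is not in export.csv, with their community column kept automatically
delta = A[~A.ip.isin(B.ip)]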

combine 2 tables and aggregate data

Could you please help with the following?
I need to extract data from a MySQL database and aggregate it.
There are two tables in the database, both of which have data at a different timestep.
I now need to make one new table (txt) where all the data of table 1 are combined with the table 2 data.
I only need the data of table 2 whose time corresponds most closely to the timesteps of table 1.
For a better understanding, see an example of the tables here:
https://www.dropbox.com/s/mo2q0hj72ilx05n/data%20aggregation.xlsx?dl=0
I already have a python-code which extracts the hexadecimal data and makes table 2.
I also have a code which makes table 1.
I need to combine both now.
Thank you very much for your advice!
After copying your data tables into Python lists, I had to split up the values in table 2 back into independent series. Overall you may be able to skip the step where you consolidate these values into the single table Table2.
The key to solving this is to write a simple class that implements __getitem__, taking a single key argument and returning the corresponding value. For instance, in the case of a regular Python dict, __getitem__ returns the dict entry that exactly matches the key, or raises a KeyError if there is no match. In your case, I implemented __getitem__ to just return the entry whose timestamp has the minimum difference from the given timestamp, in this line:
closest = min(self.data, key=lambda x: abs(x[0]-key))
(Left as an exercise to the OP: how to handle the case where the key falls exactly between two entries.) If you need to adjust the lookup logic, just change the implementation of __getitem__; everything else in the code will remain the same.
Here is my sample implementation:
# t1 and t2 are lists of tab-delimited strings copy-pasted
# from the OP's spreadsheet
TAB = '\t'
t1data = [t.split(TAB) for t in t1]
t2data = [t.split(TAB) for t in t2]

# split each parameter into individual time,value pairs
readings = {'A': [], 'B': [], 'C': []}
for parm in "ABC":
    for trec in t2data:
        t, a, b, c = trec
        t = int(t)
        if a: readings['A'].append((t, int(a)))
        if b: readings['B'].append((t, int(b)))
        if c: readings['C'].append((t, int(c)))

# define class for retrieving value with "closest" key if
# there is not an exact match
class LookupClosest(object):
    def __init__(self, pairs):
        self.data = pairs
    def __getitem__(self, key):
        # implement logic here to find closest matching item in series
        # TODO - what if key is exactly between two different values?
        closest = min(self.data, key=lambda x: abs(x[0]-key))
        return closest[1]

# convert each data series to LookupClosest
for key in "ABC":
    readings[key] = LookupClosest(readings[key])

# extract and display data
for vals in t1data:
    t = int(vals[0])
    gps = vals[1]
    a = readings['A'][t]
    b = readings['B'][t]
    c = readings['C'][t]
    rec = t, gps, a, b, c
    print(rec)
This prints the following (I modified the Table1 data so that you can tell the difference from one record to the next):
( 1, 'x01', 1, 10, 44)
(10, 'x10', 2, 11, 47)
(21, 'x21', 4, 12, 45)
(30, 'x30', 3, 12, 44)
(41, 'x41', 4, 12, 47)
(52, 'x52', 2, 10, 48)
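A tiny self-contained illustration of the closest-key lookup (dummy pairs, not the OP's data):
pairs = [(0, 100), (10, 200), (25, 300)]
lut = LookupClosest(pairs)
print(lut[9])    # 200 - the key 10 is closest to 9
print(lut[20])   # 300 - the key 25 is closer to 20 than 10 is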

How can I do windowed query on multiple columns primary key?

Based on the example found here, but I guess I'm not understanding it. It works for single-column primary keys but fails on multi-column ones.
This is my code:
@classmethod
def column_windows(cls, q, columns, windowsize, where=None):
    """Return a series of WHERE clauses against
    a given column that break it into windows.
    Result is an iterable of tuples, consisting of
    ((start, end), whereclause), where (start, end) are the ids.
    Requires a database that supports window functions,
    i.e. Postgresql, SQL Server, Oracle.
    Enhance this yourself ! Add a "where" argument
    so that windows of just a subset of rows can
    be computed.
    """
    # Here is the thing... how to compare...
    def int_for_range(start_id, end_id):
        if end_id:
            return and_(
                columns >= start_id,
                columns < end_id
            )
        else:
            return columns >= start_id

    if isinstance(columns, Column):
        columns_k = (columns,)
    else:
        columns_k = tuple(columns)

    q2 = None
    cols = ()
    for c in columns:
        cols = cols + (c,)
        if not q2:
            q2 = q.session.query(c)
        else:
            q2 = q2.add_column(c)

    q2 = q2.add_column(func.row_number().over(order_by=columns_k).label('rownum'))
    q2 = q2.filter(q._criterion).from_self(cols)
    if windowsize > 1:
        q2 = q2.filter("rownum %% %d=1" % windowsize)
    for res in q2:
        print(res)
    intervals = [id for id, in q2]

    while intervals:
        start = intervals.pop(0)
        if intervals:
            end = intervals[0]
        else:
            end = None
        yield int_for_range(start, end)

@classmethod
def windowed_query(cls, q, columns, windowsize):
    """Break a Query into windows on a given column."""
    for whereclause in cls.column_windows(q, columns, windowsize):
        for row in q.filter(whereclause).order_by(columns):
            yield row
Now I have the problem of comparing the set of columns of the primary key. Well, I guess some kind of recursive clause-generating function should do it... Let's try it...
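(For reference, a minimal sketch of such a recursive clause generator is below; the function name is mine, not from the linked recipe. It builds the lexicographic (c1, c2, ...) >= (v1, v2, ...) comparison out of plain and_/or_, so it should work on any backend; on databases with row-value support, SQLAlchemy's tuple_ construct may express the same comparison more directly. This is not the approach I ended up taking below.)
from sqlalchemy import and_, or_

def composite_gte(cols, vals):
    # lexicographic (c1, c2, ...) >= (v1, v2, ...):
    # c1 > v1 OR (c1 == v1 AND (c2, ...) >= (v2, ...))
    if len(cols) == 1:
        return cols[0] >= vals[0]
    return or_(cols[0] > vals[0],
               and_(cols[0] == vals[0], composite_gte(cols[1:], vals[1:])))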
Well, the result is not what I expected, but I got it to work. Now it really windows any query, keeping everything in place: multi-column unique ordering and so on.
Here is my code; I hope it may be useful for someone else:
@classmethod
def window_query(cls, q, windowsize, windows=None):
    """
    q=Query object we want to window results
    windowsize=The number of elements each window has
    windows=The window, or window list, numbers: 1-based to query
    """
    windowselect = False
    if windows:
        if not isinstance(windows, list):
            windows = list(windows)
        windowselect = True

    # Appending u_columns to ordered counting subquery will ensure unique ordering
    u_columns = list([col for col in cls.getBestUniqueColumns()])
    # o_columns is the list of order by columns for the query
    o_columns = list([col for col in q._order_by])
    # we append columns from u_columns not in o_columns to ensure unique ordering but keeping the desired one
    sq_o_columns = list(o_columns)
    for col in u_columns:
        if not col in sq_o_columns:
            sq_o_columns.append(col)

    sub = None
    # we select unique columns in subquery that we'll need to join in parent query
    for col in u_columns:
        if not sub:
            sub = q.session.query(col)
        else:
            sub = sub.add_column(col)

    # Generate a tuple from sq_o_columns list (I don't know why over() won't accept the list itself; TODO: more elegant)
    sq_o_col_tuple = ()
    for col in sq_o_columns:
        sq_o_col_tuple = sq_o_col_tuple + (col,)

    # we add row counting column, counting on generated combined ordering+unique columns tuple
    sub = sub.add_column(func.row_number().over(order_by=sq_o_col_tuple).label('rownum')).filter(q._criterion)
    # Prepare sub query to use as subquery (LOL)
    sub = sub.subquery('lacrn')

    # Prepare join ON clauses expression comparing unique columns defined by u_columns
    joinclause = expression.BooleanClauseList()
    for col in u_columns:
        joinclause = joinclause.__and__(col == sub.c[col.key])

    # Make the joining
    q = q.join(sub, joinclause)

    i = -1
    while True:
        # We try to query windows defined by windows list
        if windowselect:
            # We want selected-windows-results to be returned
            if windows:
                i = windows.pop(0) - 1
            else:
                break
        else:
            # We want all-windows-results to be returned
            i = i + 1
        res = q.filter(and_(sub.c.rownum > (i*windowsize), sub.c.rownum <= ((i+1)*windowsize))).all()
        if not (res or windowselect):
            # We end an all-windows-results run because there are no more results; we must check whether this is a selected-window query,
            # because selected-window results may not exist and they are unordered
            # EX: [1,2,9999999999999,3] : Assuming the third page required has no results, it will return pages 1, 2, and 3
            break
        for row in res:
            yield row
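A usage sketch under the assumptions above; MyModel, session and process are hypothetical names, with MyModel being the mapped class that provides window_query and getBestUniqueColumns:
# a filtered, ordered query (window_query relies on q._criterion and q._order_by)
q = session.query(MyModel).filter(MyModel.active == True).order_by(MyModel.created_at)

# iterate over every window of 1000 rows
for row in MyModel.window_query(q, windowsize=1000):
    process(row)

# or fetch only the 1st and 3rd windows
for row in MyModel.window_query(q, windowsize=1000, windows=[1, 3]):
    process(row)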
