Removing incomplete numbers from a list in Python
I'm given this list:
[1.3, 2.2, 2.3, 4.2, 5.1, 3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1]
and I'm supposed to remove the elements 1.3, 4.2 and 1.1 so that it becomes
[2.2, 2.3, 5.1, 3.2, 5.3, 3.3, 2.1, 5.2, 3.1]
I have written the code below, but the output is wrong. What am I doing wrong?
def removeIncomplete(id):
    numbers_buf = id
    idComplete = id[:]
    for ind, item in enumerate(id):
        if item == 1.3 and item == 4.2 and item == 1.1:
            numbers_buf.remove(item)
    return numbers_buf
    #return idComplete

import numpy as np
print(removeIncomplete(np.array([1.3, 2.2, 2.3, 4.2, 5.1,
                                 3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1])))
#Correct output [ 2.2  2.3  5.1  3.2  5.3  3.3  2.1  5.2  3.1]
def removeIncomplete(id):
    numbers_buf = id
    idComplete = id[:]
    for ind, item in enumerate(id):
        if item == 1.3 or item == 4.2 or item == 1.1:
            numbers_buf = np.delete(numbers_buf, np.where(numbers_buf == item))
    return numbers_buf
    #return idComplete

import numpy as np
print(removeIncomplete(np.array([1.3, 2.2, 2.3, 4.2, 5.1,
                                 3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1])))
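Since the input is already a NumPy array, the loop can also be replaced entirely with a boolean mask. This is a minimal sketch using np.isin; the function and argument names are only illustrative, not from the question.

import numpy as np

def remove_incomplete(values, to_remove=(1.3, 4.2, 1.1)):
    # Keep only the elements that are not in the exclusion set.
    values = np.asarray(values)
    return values[~np.isin(values, to_remove)]

print(remove_incomplete([1.3, 2.2, 2.3, 4.2, 5.1,
                         3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1]))
# [2.2 2.3 5.1 3.2 5.3 3.3 2.1 5.2 3.1]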
What about using a list comprehension?
data = [1.3, 2.2, 2.3, 4.2, 5.1, 3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1]
exclude = [1.3, 4.2, 1.1]
out = [val for val in data if val not in exclude]
print(out)
>>>
[2.2, 2.3, 5.1, 3.2, 5.3, 3.3, 2.1, 5.2, 3.1]
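One caveat worth keeping in mind: not in uses exact floating-point equality, which is fine for literals like these but can miss values produced by arithmetic. A small sketch of a tolerant variant using math.isclose (the tolerance shown is just an example, not something from the question):

from math import isclose

data = [1.3, 2.2, 2.3, 4.2, 5.1, 3.2, 5.3, 3.3, 2.1, 1.1, 5.2, 3.1]
exclude = [1.3, 4.2, 1.1]

# Keep a value only if it is not approximately equal to any excluded value.
out = [val for val in data
       if not any(isclose(val, ex, abs_tol=1e-9) for ex in exclude)]
print(out)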
Related
Reduce python list size while preserving information
I have multiple long lists in my program. Each list has approximately 3000 float values, and there are around 100 such lists. I want to reduce the size of each list to, say, 500, while preserving the information in the original list. I know that it is not possible to completely preserve the information, but I would like the elements of the original list to contribute to the values of the smaller list. Let's say we have the following list and want to shorten it to lists of size 3 or 4:
myList = [[4.3, 2.3, 5.1, 6.4, 3.2, 7.7, 1.5, 6.5, 7.4, 4.1],
          [7.3, 3.5, 6.2, 7.4, 2.6, 3.7, 2.6, 7.1, 3.4, 7.1],
          [4.7, 2.6, 5.6, 7.4, 3.7, 7.7, 3.5, 6.5, 7.2, 4.1],
          [7.3, 7.3, 4.1, 6.6, 2.2, 3.9, 1.6, 3.0, 2.3, 4.6],
          [4.7, 2.3, 5.7, 6.4, 3.4, 6.8, 7.2, 6.9, 8.4, 7.1]]
Is there some way to do this? Maybe by averaging of some sort?
You can do something like this:
from statistics import mean, stdev

myList = [[4.3, 2.3, 5.1, 6.4, 3.2, 7.7, 1.5, 6.5, 7.4, 4.1],
          [2.3, 6.4, 3.2, 7.7, 1.5, 6.5, 7.4, 4.1]]

shorten_list = [[max(i) - min(i), mean(i), round(stdev(i), 5)] for i in myList]
You can also include information such as the sum of the list or the mode. If you just want to take the mean of each list within your list, you can just do this:
from statistics import mean

mean_list = list(map(mean, myList))
Batching may work. Take a look at this question: How do I split a list into equally-sized chunks? It shows how to convert a list into equal-sized batches. Alternatively, you can reduce the dimension of the list with a max-pooling layer:
import numpy as np
from keras.models import Sequential
from keras.layers import MaxPooling2D

image = np.array([[4.3, 2.3, 5.1, 6.4, 3.2, 7.7, 1.5, 6.5, 7.4, 4.1],
                  [7.3, 3.5, 6.2, 7.4, 2.6, 3.7, 2.6, 7.1, 3.4, 7.1],
                  [4.7, 2.6, 5.6, 7.4, 3.7, 7.7, 3.5, 6.5, 7.2, 4.1],
                  [7.3, 7.3, 4.1, 6.6, 2.2, 3.9, 1.6, 3.0, 2.3, 4.6],
                  [4.7, 2.3, 5.7, 6.4, 3.4, 6.8, 7.2, 6.9, 8.4, 7.1]])
image = image.reshape(1, 5, 10, 1)

model = Sequential([MaxPooling2D(pool_size=(1, 10), strides=(1))])
output = model.predict(image)
print(output)
This gives the output:
[[[[7.7]]
  [[7.4]]
  [[7.7]]
  [[7.3]]
  [[8.4]]]]
If you want to change the output size, you can change the pool size.
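If the goal is simply a shorter list whose values are averages of the original, plain NumPy avoids pulling in Keras. A minimal sketch, assuming each list is split into roughly equal chunks and each chunk is replaced by its mean (np.array_split handles lengths that do not divide evenly); the function name shrink is just illustrative:

import numpy as np

def shrink(values, target_len):
    # Split the list into target_len chunks and average each chunk,
    # so every original element contributes to the smaller list.
    chunks = np.array_split(np.asarray(values, dtype=float), target_len)
    return [chunk.mean() for chunk in chunks]

my_list = [4.3, 2.3, 5.1, 6.4, 3.2, 7.7, 1.5, 6.5, 7.4, 4.1]
print(shrink(my_list, 4))
# [3.9, 5.766..., 4.0, 5.75]  (chunk sizes 3, 3, 2, 2)

For the real data (3000 values down to 500), the same call with target_len=500 averages over chunks of 6 elements each.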
How to combine these two numpy arrays?
How would I combine these two arrays:
x = np.asarray([[1.0, 1.1, 1.2, 1.3],
                [2.0, 2.1, 2.2, 2.3],
                [3.0, 3.1, 3.2, 3.3],
                [4.0, 4.1, 4.2, 4.3],
                [5.0, 5.1, 5.2, 5.3]])
y = np.asarray([[0.1], [0.2], [0.3], [0.4], [0.5]])
into something like this:
xy = [[0.1, [1.0, 1.1, 1.2, 1.3]],
      [0.2, [2.0, 2.1, 2.2, 2.3]] ...
Thank you for the assistance! Someone suggested I post the code that I have tried, and I realized I had forgotten to:
xy = np.array(list(zip(x, y)))
This is my current solution; however, it is extremely inefficient.
You can use zip to combine them:
[[a, b] for a, b in zip(y, x)]
Out:
[[array([0.1]), array([1. , 1.1, 1.2, 1.3])],
 [array([0.2]), array([2. , 2.1, 2.2, 2.3])],
 [array([0.3]), array([3. , 3.1, 3.2, 3.3])],
 [array([0.4]), array([4. , 4.1, 4.2, 4.3])],
 [array([0.5]), array([5. , 5.1, 5.2, 5.3])]]
A pure NumPy solution will be much faster than a list comprehension for large arrays. I do have to say your use case makes no sense, as there is no logic in putting these arrays into a single data structure of that shape, and I believe you should recheck your design. As @user2357112 supports Monica was subtly implying, this is very likely an XY problem. Check whether this is really what you are trying to solve and not something else; if you want something else, try asking about that. I strongly suggest reconsidering what you want to do before moving on, or you will end up with a bad design. That aside, here's a solution:
import numpy as np

x = np.asarray([[1.0, 1.1, 1.2, 1.3],
                [2.0, 2.1, 2.2, 2.3],
                [3.0, 3.1, 3.2, 3.3],
                [4.0, 4.1, 4.2, 4.3],
                [5.0, 5.1, 5.2, 5.3]])
y = np.asarray([[0.1], [0.2], [0.3], [0.4], [0.5]])

xy = np.hstack([y, x])
print(xy)
which prints:
[[0.1 1.  1.1 1.2 1.3]
 [0.2 2.  2.1 2.2 2.3]
 [0.3 3.  3.1 3.2 3.3]
 [0.4 4.  4.1 4.2 4.3]
 [0.5 5.  5.1 5.2 5.3]]
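If the nested [scalar, row] structure from the question is genuinely needed (for example to hand to code that expects plain Python objects), it can be built without np.array(list(zip(...))). A small sketch, reusing the x and y defined in the snippet above:

# Build the [[0.1, [1.0, 1.1, 1.2, 1.3]], ...] structure as plain Python lists.
xy_nested = [[float(yi), xi.tolist()] for yi, xi in zip(y[:, 0], x)]
print(xy_nested[0])  # [0.1, [1.0, 1.1, 1.2, 1.3]]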
Find median for each element in list
I have some large lists of data, between 1000 and 10000 elements each. Now I want to filter out some peak values with the help of the median function.
import statistics

# example list with just 10 elements
my_list = [4.5, 4.7, 5.1, 3.9, 9.9, 5.6, 4.3, 0.2, 5.0, 4.6]

# list of medians calculated from 3 elements
my_median_list = []
for i in range(len(my_list)):
    if i == 0:
        my_median_list.append(statistics.median([my_list[0], my_list[1], my_list[2]]))
    elif i == (len(my_list) - 1):
        my_median_list.append(statistics.median([my_list[-1], my_list[-2], my_list[-3]]))
    else:
        my_median_list.append(statistics.median([my_list[i-1], my_list[i], my_list[i+1]]))

print(my_median_list)
# [4.7, 4.7, 4.7, 5.1, 5.6, 5.6, 4.3, 4.3, 4.6, 4.6]
This works so far, but I think it looks ugly and is maybe inefficient. Is there a way with statistics or NumPy to do it faster? Or another solution? Also, I am looking for a solution where I can pass an argument for how many elements the median is calculated from. In my example I always used the median of 3 elements, but with my real data I want to play with that setting and maybe use the median of 10 elements.
You are calculating too many values since:
my_median_list.append(statistics.median([my_list[i-1], my_list[i], my_list[i+1]]))
and
my_median_list.append(statistics.median([my_list[0], my_list[1], my_list[2]]))
are the same when i == 1. The same error happens at the end, so you get one value too many at each end. It's easier and less error-prone to do this with zip(), which will make the three-element tuples for you:
from statistics import median

my_list = [4.5, 4.7, 5.1, 3.9, 9.9, 5.6, 4.3, 0.2, 5.0, 4.6]

[median(l) for l in zip(my_list, my_list[1:], my_list[2:])]
# [4.7, 4.7, 5.1, 5.6, 5.6, 4.3, 4.3, 4.6]
For groups of arbitrary size, collections.deque is super handy because you can set a max size. Then you just keep pushing items in on one end and it removes items from the other to maintain the size. Here's a generator example that takes your group size as n:
from statistics import median
from collections import deque

def rolling_median(l, n):
    d = deque(l[0:n], n)
    yield median(d)
    for num in l[n:]:
        d.append(num)
        yield median(d)

my_list = [4.5, 4.7, 5.1, 3.9, 9.9, 5.6, 4.3, 0.2, 5.0, 4.6]

list(rolling_median(my_list, 3))
# [4.7, 4.7, 5.1, 5.6, 5.6, 4.3, 4.3, 4.6]

list(rolling_median(my_list, 5))
# [4.7, 5.1, 5.1, 4.3, 5.0, 4.6]
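Since the question also asks about NumPy: for long lists a vectorised rolling median can be built with sliding_window_view, which is available in NumPy 1.20+. A minimal sketch with the same window size of 3:

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

my_list = [4.5, 4.7, 5.1, 3.9, 9.9, 5.6, 4.3, 0.2, 5.0, 4.6]

# Build all length-3 windows as rows, then take the median of each row.
windows = sliding_window_view(np.asarray(my_list), 3)
print(np.median(windows, axis=1))
# [4.7 4.7 5.1 5.6 5.6 4.3 4.3 4.6]

The window size is just the second argument, so the "median of 10 elements" case is sliding_window_view(arr, 10).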
SQLAlchemy and pandas producing an error (engine.table_names returns an empty list)
I have code which looks like the following:
from sqlalchemy import create_engine
import pandas as pd

# load the CSV
df = pd.Series()
df['raw'] = pd.read_csv('./data/Iris.csv', index_col='Id')

# Connect to the mysql, and use database "datasets"
engine = create_engine('mysql://root:root@127.0.0.1')
engine.execute("USE Datasets")  # select new db

# Write data
table_name = 'IRIS'
df['raw'].to_sql(table_name, engine, if_exists='append', index=False)
The data is correctly inserted into the database and can be loaded afterwards, but the command produces an error:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-cfb9b0f5c930> in <module>()
      1 table_name = 'IRIS'
      2
----> 3 df['raw'].to_sql(table_name, engine, if_exists='append', index=False)

~/anaconda3/lib/python3.6/site-packages/pandas/core/generic.py in to_sql(self, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
   1360         sql.to_sql(self, name, con, flavor=flavor, schema=schema,
   1361                    if_exists=if_exists, index=index, index_label=index_label,
-> 1362                    chunksize=chunksize, dtype=dtype)
   1363
   1364     def to_pickle(self, path, compression='infer'):

~/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in to_sql(frame, name, con, flavor, schema, if_exists, index, index_label, chunksize, dtype)
    469     pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
    470                       index_label=index_label, schema=schema,
--> 471                       chunksize=chunksize, dtype=dtype)
    472
    473

~/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, dtype)
   1157         table_names = engine.table_names(
   1158             schema=schema or self.meta.schema,
-> 1159             connection=conn,
   1160         )
   1161         if name not in table_names:

~/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py in table_names(self, schema, connection)
   2137         if not schema:
   2138             schema = self.dialect.default_schema_name
-> 2139         return self.dialect.get_table_names(conn, schema)
   2140
   2141     def has_table(self, table_name, schema=None):

<string> in get_table_names(self, connection, schema, **kw)

~/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/reflection.py in cache(fn, self, con, *args, **kw)
     40     info_cache = kw.get('info_cache', None)
     41     if info_cache is None:
---> 42         return fn(self, con, *args, **kw)
     43     key = (
     44         fn.__name__,

~/anaconda3/lib/python3.6/site-packages/sqlalchemy/dialects/mysql/base.py in get_table_names(self, connection, schema, **kw)
   1954         rp = connection.execute(
   1955             "SHOW FULL TABLES FROM %s" %
-> 1956             self.identifier_preparer.quote_identifier(current_schema))
   1957
   1958         return [row[0]

~/anaconda3/lib/python3.6/site-packages/sqlalchemy/sql/compiler.py in quote_identifier(self, value)
   3021
   3022         return self.initial_quote + \
-> 3023             self._escape_identifier(value) + \
   3024             self.final_quote
   3025

~/anaconda3/lib/python3.6/site-packages/sqlalchemy/sql/compiler.py in _escape_identifier(self, value)
   2999         """
   3000
-> 3001         value = value.replace(self.escape_quote, self.escape_to_quote)
   3002         if self._double_percents:
   3003             value = value.replace('%', '%%')

AttributeError: 'NoneType' object has no attribute 'replace'
I've enabled logging on the SQL server, and the log states the following:
Time Id Command Argument
2018-01-12T18:13:43.116036Z 20 Query SET global log_output = 'file'
2018-01-12T18:13:44.291677Z 20 Query SET global
general_log = on 2018-01-12T18:14:15.861927Z 19 Query DESCRIBE `IRIS` 2018-01-12T18:14:15.864129Z 19 Query rollback 2018-01-12T18:14:15.869620Z 19 Query INSERT INTO `IRIS` (`SepalLengthCm`, `SepalWidthCm`, `PetalLengthCm`, `PetalWidthCm`, `Species`) VALUES (5.1, 3.5, 1.4, 0.2, 'Iris-setosa'),(4.9, 3, 1.4, 0.2, 'Iris-setosa'),(4.7, 3.2, 1.3, 0.2, 'Iris-setosa'),(4.6, 3.1, 1.5, 0.2, 'Iris-setosa'),(5, 3.6, 1.4, 0.2, 'Iris-setosa'),(5.4, 3.9, 1.7, 0.4, 'Iris-setosa'),(4.6, 3.4, 1.4, 0.3, 'Iris-setosa'),(5, 3.4, 1.5, 0.2, 'Iris-setosa'),(4.4, 2.9, 1.4, 0.2, 'Iris-setosa'),(4.9, 3.1, 1.5, 0.1, 'Iris-setosa'),(5.4, 3.7, 1.5, 0.2, 'Iris-setosa'),(4.8, 3.4, 1.6, 0.2, 'Iris-setosa'),(4.8, 3, 1.4, 0.1, 'Iris-setosa'),(4.3, 3, 1.1, 0.1, 'Iris-setosa'),(5.8, 4, 1.2, 0.2, 'Iris-setosa'),(5.7, 4.4, 1.5, 0.4, 'Iris-setosa'),(5.4, 3.9, 1.3, 0.4, 'Iris-setosa'),(5.1, 3.5, 1.4, 0.3, 'Iris-setosa'),(5.7, 3.8, 1.7, 0.3, 'Iris-setosa'),(5.1, 3.8, 1.5, 0.3, 'Iris-setosa'),(5.4, 3.4, 1.7, 0.2, 'Iris-setosa'),(5.1, 3.7, 1.5, 0.4, 'Iris-setosa'),(4.6, 3.6, 1, 0.2, 'Iris-setosa'),(5.1, 3.3, 1.7, 0.5, 'Iris-setosa'),(4.8, 3.4, 1.9, 0.2, 'Iris-setosa'),(5, 3, 1.6, 0.2, 'Iris-setosa'),(5, 3.4, 1.6, 0.4, 'Iris-setosa'),(5.2, 3.5, 1.5, 0.2, 'Iris-setosa'),(5.2, 3.4, 1.4, 0.2, 'Iris-setosa'),(4.7, 3.2, 1.6, 0.2, 'Iris-setosa'),(4.8, 3.1, 1.6, 0.2, 'Iris-setosa'),(5.4, 3.4, 1.5, 0.4, 'Iris-setosa'),(5.2, 4.1, 1.5, 0.1, 'Iris-setosa'),(5.5, 4.2, 1.4, 0.2, 'Iris-setosa'),(4.9, 3.1, 1.5, 0.1, 'Iris-setosa'),(5, 3.2, 1.2, 0.2, 'Iris-setosa'),(5.5, 3.5, 1.3, 0.2, 'Iris-setosa'),(4.9, 3.1, 1.5, 0.1, 'Iris-setosa'),(4.4, 3, 1.3, 0.2, 'Iris-setosa'),(5.1, 3.4, 1.5, 0.2, 'Iris-setosa'),(5, 3.5, 1.3, 0.3, 'Iris-setosa'),(4.5, 2.3, 1.3, 0.3, 'Iris-setosa'),(4.4, 3.2, 1.3, 0.2, 'Iris-setosa'),(5, 3.5, 1.6, 0.6, 'Iris-setosa'),(5.1, 3.8, 1.9, 0.4, 'Iris-setosa'),(4.8, 3, 1.4, 0.3, 'Iris-setosa'),(5.1, 3.8, 1.6, 0.2, 'Iris-setosa'),(4.6, 3.2, 1.4, 0.2, 'Iris-setosa'),(5.3, 3.7, 1.5, 0.2, 'Iris-setosa'),(5, 3.3, 1.4, 0.2, 'Iris-setosa'),(7, 3.2, 4.7, 1.4, 'Iris-versicolor'),(6.4, 3.2, 4.5, 1.5, 'Iris-versicolor'),(6.9, 3.1, 4.9, 1.5, 'Iris-versicolor'),(5.5, 2.3, 4, 1.3, 'Iris-versicolor'),(6.5, 2.8, 4.6, 1.5, 'Iris-versicolor'),(5.7, 2.8, 4.5, 1.3, 'Iris-versicolor'),(6.3, 3.3, 4.7, 1.6, 'Iris-versicolor'),(4.9, 2.4, 3.3, 1, 'Iris-versicolor'),(6.6, 2.9, 4.6, 1.3, 'Iris-versicolor'),(5.2, 2.7, 3.9, 1.4, 'Iris-versicolor'),(5, 2, 3.5, 1, 'Iris-versicolor'),(5.9, 3, 4.2, 1.5, 'Iris-versicolor'),(6, 2.2, 4, 1, 'Iris-versicolor'),(6.1, 2.9, 4.7, 1.4, 'Iris-versicolor'),(5.6, 2.9, 3.6, 1.3, 'Iris-versicolor'),(6.7, 3.1, 4.4, 1.4, 'Iris-versicolor'),(5.6, 3, 4.5, 1.5, 'Iris-versicolor'),(5.8, 2.7, 4.1, 1, 'Iris-versicolor'),(6.2, 2.2, 4.5, 1.5, 'Iris-versicolor'),(5.6, 2.5, 3.9, 1.1, 'Iris-versicolor'),(5.9, 3.2, 4.8, 1.8, 'Iris-versicolor'),(6.1, 2.8, 4, 1.3, 'Iris-versicolor'),(6.3, 2.5, 4.9, 1.5, 'Iris-versicolor'),(6.1, 2.8, 4.7, 1.2, 'Iris-versicolor'),(6.4, 2.9, 4.3, 1.3, 'Iris-versicolor'),(6.6, 3, 4.4, 1.4, 'Iris-versicolor'),(6.8, 2.8, 4.8, 1.4, 'Iris-versicolor'),(6.7, 3, 5, 1.7, 'Iris-versicolor'),(6, 2.9, 4.5, 1.5, 'Iris-versicolor'),(5.7, 2.6, 3.5, 1, 'Iris-versicolor'),(5.5, 2.4, 3.8, 1.1, 'Iris-versicolor'),(5.5, 2.4, 3.7, 1, 'Iris-versicolor'),(5.8, 2.7, 3.9, 1.2, 'Iris-versicolor'),(6, 2.7, 5.1, 1.6, 'Iris-versicolor'),(5.4, 3, 4.5, 1.5, 'Iris-versicolor'),(6, 3.4, 4.5, 1.6, 'Iris-versicolor'),(6.7, 3.1, 4.7, 1.5, 'Iris-versicolor'),(6.3, 2.3, 4.4, 1.3, 'Iris-versicolor'),(5.6, 3, 4.1, 1.3, 'Iris-versicolor'),(5.5, 2.5, 4, 
1.3, 'Iris-versicolor'),(5.5, 2.6, 4.4, 1.2, 'Iris-versicolor'),(6.1, 3, 4.6, 1.4, 'Iris-versicolor'),(5.8, 2.6, 4, 1.2, 'Iris-versicolor'),(5, 2.3, 3.3, 1, 'Iris-versicolor'),(5.6, 2.7, 4.2, 1.3, 'Iris-versicolor'),(5.7, 3, 4.2, 1.2, 'Iris-versicolor'),(5.7, 2.9, 4.2, 1.3, 'Iris-versicolor'),(6.2, 2.9, 4.3, 1.3, 'Iris-versicolor'),(5.1, 2.5, 3, 1.1, 'Iris-versicolor'),(5.7, 2.8, 4.1, 1.3, 'Iris-versicolor'),(6.3, 3.3, 6, 2.5, 'Iris-virginica'),(5.8, 2.7, 5.1, 1.9, 'Iris-virginica'),(7.1, 3, 5.9, 2.1, 'Iris-virginica'),(6.3, 2.9, 5.6, 1.8, 'Iris-virginica'),(6.5, 3, 5.8, 2.2, 'Iris-virginica'),(7.6, 3, 6.6, 2.1, 'Iris-virginica'),(4.9, 2.5, 4.5, 1.7, 'Iris-virginica'),(7.3, 2.9, 6.3, 1.8, 'Iris-virginica'),(6.7, 2.5, 5.8, 1.8, 'Iris-virginica'),(7.2, 3.6, 6.1, 2.5, 'Iris-virginica'),(6.5, 3.2, 5.1, 2, 'Iris-virginica'),(6.4, 2.7, 5.3, 1.9, 'Iris-virginica'),(6.8, 3, 5.5, 2.1, 'Iris-virginica'),(5.7, 2.5, 5, 2, 'Iris-virginica'),(5.8, 2.8, 5.1, 2.4, 'Iris-virginica'),(6.4, 3.2, 5.3, 2.3, 'Iris-virginica'),(6.5, 3, 5.5, 1.8, 'Iris-virginica'),(7.7, 3.8, 6.7, 2.2, 'Iris-virginica'),(7.7, 2.6, 6.9, 2.3, 'Iris-virginica'),(6, 2.2, 5, 1.5, 'Iris-virginica'),(6.9, 3.2, 5.7, 2.3, 'Iris-virginica'),(5.6, 2.8, 4.9, 2, 'Iris-virginica'),(7.7, 2.8, 6.7, 2, 'Iris-virginica'),(6.3, 2.7, 4.9, 1.8, 'Iris-virginica'),(6.7, 3.3, 5.7, 2.1, 'Iris-virginica'),(7.2, 3.2, 6, 1.8, 'Iris-virginica'),(6.2, 2.8, 4.8, 1.8, 'Iris-virginica'),(6.1, 3, 4.9, 1.8, 'Iris-virginica'),(6.4, 2.8, 5.6, 2.1, 'Iris-virginica'),(7.2, 3, 5.8, 1.6, 'Iris-virginica'),(7.4, 2.8, 6.1, 1.9, 'Iris-virginica'),(7.9, 3.8, 6.4, 2, 'Iris-virginica'),(6.4, 2.8, 5.6, 2.2, 'Iris-virginica'),(6.3, 2.8, 5.1, 1.5, 'Iris-virginica'),(6.1, 2.6, 5.6, 1.4, 'Iris-virginica'),(7.7, 3, 6.1, 2.3, 'Iris-virginica'),(6.3, 3.4, 5.6, 2.4, 'Iris-virginica'),(6.4, 3.1, 5.5, 1.8, 'Iris-virginica'),(6, 3, 4.8, 1.8, 'Iris-virginica'),(6.9, 3.1, 5.4, 2.1, 'Iris-virginica'),(6.7, 3.1, 5.6, 2.4, 'Iris-virginica'),(6.9, 3.1, 5.1, 2.3, 'Iris-virginica'),(5.8, 2.7, 5.1, 1.9, 'Iris-virginica'),(6.8, 3.2, 5.9, 2.3, 'Iris-virginica'),(6.7, 3.3, 5.7, 2.5, 'Iris-virginica'),(6.7, 3, 5.2, 2.3, 'Iris-virginica'),(6.3, 2.5, 5, 1.9, 'Iris-virginica'),(6.5, 3, 5.2, 2, 'Iris-virginica'),(6.2, 3.4, 5.4, 2.3, 'Iris-virginica'),(5.9, 3, 5.1, 1.8, 'Iris-virginica') 2018-01-12T18:14:15.879509Z 19 Query commit 2018-01-12T18:14:15.881477Z 19 Query rollback 2018-01-12T18:14:15.881695Z 19 Query rollback I see that the problem is when SQLalchemy is trying to get the name of the current database. Is it possible to either fix the error, or to stop the API from making the call? my only problem is the program crashing due to an error, as the data is actually in the table.
I found the root cause of the problem: the table_names method returns an empty list.
len(engine.table_names())
0
I changed this:
# Connect to the mysql, and use database "datasets"
######### wrong part ##########
#engine = create_engine('mysql://root:root@127.0.0.1')
#engine.execute("USE Datasets")  # select new db

######### Correct code ########
engine = create_engine('mysql+mysqlconnector://root:root@127.0.0.1/Datasets')
This way the engine knows which database to return the tables from. The length of table_names is now correct:
len(engine.table_names())
1
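As a side note not from the original answer: in newer SQLAlchemy releases (1.4+), Engine.table_names() is deprecated in favour of the inspection API, so the same check can be written roughly like this (the URL and credentials below are the same placeholders as in the answer above):

from sqlalchemy import create_engine, inspect

engine = create_engine('mysql+mysqlconnector://root:root@127.0.0.1/Datasets')

# Inspector.get_table_names() is the supported way to list tables.
print(inspect(engine).get_table_names())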
List conversion
I am looking for a way to convert a list like this
[[1.1, 1.2, 1.3, 1.4, 1.5],
 [2.1, 2.2, 2.3, 2.4, 2.5],
 [3.1, 3.2, 3.3, 3.4, 3.5],
 [4.1, 4.2, 4.3, 4.4, 4.5],
 [5.1, 5.2, 5.3, 5.4, 5.5]]
to something like this
[[(1.1, 1.2), (1.2, 1.3), (1.3, 1.4), (1.4, 1.5)],
 [(2.1, 2.2), (2.2, 2.3), (2.3, 2.4), (2.4, 2.5)],
 .........................................
The following line should do it:
[list(zip(row, row[1:])) for row in m]
where m is your initial 2-dimensional list.
UPDATE for second question in comment: you have to transpose (= exchange columns with rows) your 2-dimensional list. The Python way to achieve a transposition of m is zip(*m):
[list(zip(column, column[1:])) for column in zip(*m)]
In response to further comments from the questioner, two answers:
# Original grid
grid = [[1.1, 1.2, 1.3, 1.4, 1.5],
        [2.1, 2.2, 2.3, 2.4, 2.5],
        [3.1, 3.2, 3.3, 3.4, 3.5],
        [4.1, 4.2, 4.3, 4.4, 4.5],
        [5.1, 5.2, 5.3, 5.4, 5.5]]

# Window function to return a sequence of pairs.
def window(row):
    return [(row[i], row[i + 1]) for i in range(len(row) - 1)]

ORIGINAL QUESTION:
# Print sequences of pairs for the grid
print [window(y) for y in grid]

UPDATED QUESTION:
# Take the nth item from every row to get that column.
def column(grid, columnNumber):
    return [row[columnNumber] for row in grid]

# Transpose grid to turn it into columns.
def transpose(grid):
    # Assume all rows are the same length.
    numColumns = len(grid[0])
    return [column(grid, columnI) for columnI in range(numColumns)]

# Return windowed pairs for the transposed matrix.
print [window(y) for y in transpose(grid)]
Another version would be to use lambda and map:
map(lambda x: zip(x, x[1:]), m)
where m is your matrix of choice.
List comprehensions provide a concise way to create lists: http://docs.python.org/tutorial/datastructures.html#list-comprehensions
[[(a[i], a[i+1]) for i in xrange(len(a)-1)] for a in A]
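Not part of the original answers, but worth noting: on Python 3.10+ the same pairwise windowing is available directly in the standard library as itertools.pairwise, which keeps the intent obvious. A small sketch using a shortened version of the grid from the question:

from itertools import pairwise  # Python 3.10+

m = [[1.1, 1.2, 1.3, 1.4, 1.5],
     [2.1, 2.2, 2.3, 2.4, 2.5]]

# Consecutive pairs within each row.
rows_paired = [list(pairwise(row)) for row in m]
print(rows_paired[0])  # [(1.1, 1.2), (1.2, 1.3), (1.3, 1.4), (1.4, 1.5)]

# Consecutive pairs within each column: transpose first with zip(*m).
cols_paired = [list(pairwise(col)) for col in zip(*m)]
print(cols_paired[0])  # [(1.1, 2.1)]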