I am trying to create a system where you can input entries, but the created time and updated time don't update.
Here is my code:
from painlessdb import Schema, PainlessDB
from datetime import datetime
import os
schema = {
    "entries": {
        "tag": Schema.types.int(),
        "content": Schema.types.text(),
        "created": Schema.types.datetime(),
        "updated": Schema.types.datetime(),
    },
    "amount": Schema.types.int(default=0),
}
db_path = os.path.join(os.getcwd(), 'Others/data.pldb') # .pldb is db file extension
database = PainlessDB(db_path, schema_data=schema)
def add_entry():
    amount = database.get("amount").value
    current_time = datetime.now()
    content = input("Enter the contents of this entry: ")
    database.create("entries", fields=database.fields(
        tag=amount,
        content=content,
        created=current_time,
        updated=current_time))
    database.update("amount", value=amount + 1)
    print("Entry created.")
If I run add_entry(), the code runs fine and exits without issue. When I use another function to list the entries:
def list_entries():
    query = database.get("entries", multiple=True)
    print("No | Content | Created | Updated")
    for entry in query:
        created_string = entry.created.strftime("%d/%m/%Y - %H:%M:%S")
        updated_string = entry.updated.strftime("%d/%m/%Y - %H:%M:%S")
        print(f"{entry.tag} | {entry.content} | {created_string} | {updated_string}")
I get this output:
No | Content | Created | Updated
1 | sssss | 21/11/2022 - 22:00:14 | 21/11/2022 - 22:00:14
2 | srsad | 21/11/2022 - 22:00:14 | 21/11/2022 - 22:00:14
3 | kllskk | 21/11/2022 - 22:00:14 | 21/11/2022 - 22:00:14
You can see that the created datetime and updated datetime are somehow stuck at the time I created the first entry (the 1st entry was created last night and the 3rd entry was made just 5 minutes ago, so they are nearly 13 hours apart). I don't think it has anything to do with the database package I'm using (PainlessDB), but you can take a look here just in case: https://github.com/AidenEllis/PainlessDB/tree/main/docs
I would appreciate any help. Thanks!
I'm attempting to load Excel data into a Pandas DataFrame, push the ip_address from the DataFrame to an API that returns information in JSON format, and then append the results back to the original DataFrame. How would I do this, iterating over each row in the DataFrame and appending the results back each time?
Initial dataframe:
date | ip | name
date1 | ip1 | name1
date2 | ip2 | name2
json:
{
    "status": "ok",
    "8.8.8.8": {
        "proxy": "yes",
        "type": "VPN",
        "provider": "Google Inc",
        "risk": 66
    }
}
Code:
import json
import urllib.request

import pandas as pd
import requests

df = pd.read_excel(r'test_data.xlsx')

def query_api(ip_address):
    risk_score = None
    vpn_score = None
    api_key = "xxx"
    base_url = "http://falseacc.com/"
    endpoint = f"{base_url}{ip_address}?key={api_key}&risk=1&vpn=1"
    r = requests.get(endpoint)
    if r.status_code not in range(200, 299):
        return None, None
    try:
        with urllib.request.urlopen(endpoint) as url:
            data = json.loads(url.read().decode())
            proxy = data[ip_address]['proxy']
            type = data[ip_address]['type']
            risk = data[ip_address]['risk']
            df2 = pd.DataFrame({"ip": [ip_address],
                                "proxy": [proxy],
                                "type": [type],
                                "risk": [risk]})
            print(df2)
    except:
        print("No data")
Expected output:
Dataframe:
date | ip | name | proxy | type | risk
date1 | ip1 | name1 | proxy1 | type1 | risk1
date2 | ip2 | name2 | proxy2 | type2 | risk2
You can use the pandas Series.apply method to pick each ip from your dataframe, and get the proxy, type, risk values corresponding to it from your query_api function. Then assign the values to the corresponding columns in the end:
df = pd.read_excel(r'test_data.xlsx')

def query_api(ip_address):
    risk_score = None
    vpn_score = None
    api_key = "xxx"
    base_url = "http://falseacc.com/"
    endpoint = f"{base_url}{ip_address}?key={api_key}&risk=1&vpn=1"
    r = requests.get(endpoint)
    if r.status_code not in range(200, 299):
        return pd.Series([None]*3)
    try:
        with urllib.request.urlopen(endpoint) as url:
            data = json.loads(url.read().decode())
            proxy = data[ip_address]['proxy']
            type = data[ip_address]['type']
            risk = data[ip_address]['risk']
            return pd.Series([proxy, type, risk])
    except:
        return pd.Series([None]*3)

df[['proxy','type','risk']] = df.ip.apply(query_api)
Have a look at the official docs to see how pandas.Series.apply works.
I would also recommend not using type as a variable name in your code, since it shadows the built-in type function in Python.
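For example (a hypothetical snippet, just to illustrate the shadowing):

type = "VPN"      # 'type' now refers to this string
print(type(42))   # TypeError: 'str' object is not callable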
I am looking for a way to turn the headings of Cucumber's Data Table to the side, so that the feature file stays readable.
Ordinary way:
| Name | Email | Phone No. | ......... |
| John | i@g.net | 098765644 | ......... |
It can be a very wide data table and I would have to scroll back and forth.
Desired way:
| Name | John |
| Email | i@g.net |
| Phone No. | 098765444 |
.
.
.
There are a small number of examples in Java and Ruby, but I am working with Python.
I have tried many different things, like numpy.transpose() and converting the table to a list, but they won't work because the Data Table's format is:
[<Row['Name','John'],...]
You can implement this behaviour quite simply yourself; here is my version:
def tabledict(table, defaults, aliases = {}):
    """
    Converts a behave context.table to a dictionary.
    Throws NotImplementedError if the table references an unknown key.
    defaults should contain a dictionary with the (surprise) default values.
    aliases makes it possible to map alternative names to the keys of the defaults.
    All keys of the table will be converted to lowercase; you should make sure that
    your defaults and aliases dictionaries also use lowercase.
    Example:
        Given the book
            | Property                            | Value              |
            | Title                               | The Tragedy of Man |
            | Author                              | Madach, Imre       |
            | International Standard Book Number  | 9631527395         |
        defaults = { "title": "Untitled", "author": "Anonymous", "isbn": None, "publisher": None }
        aliases = { "international standard book number": "isbn" }
        givenBook = tabledict(context.table, defaults, aliases)
        will give you:
        givenBook == {
            "title": "The Tragedy of Man",
            "author": "Madach, Imre",
            "isbn": 9631527395,
            "publisher": None
        }
    """
    initParams = defaults.copy()
    validKeys = list(aliases.keys()) + list(defaults.keys())
    for row in table:
        name, value = row[0].lower(), row[1]
        if not name in validKeys:
            raise NotImplementedError(u'%s property is not supported.' % name)
        if name in aliases: name = aliases[name]
        initParams[name] = value
    return initParams
This doesn't look like it's related to numpy.
Pivoting a list of lists is often done with zip(*the_list).
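A minimal sketch of that idea on a plain list of lists (not behave-specific):

rows = [
    ['Name', 'Email', 'Phone No.'],
    ['John', 'i@g.net', '098765644'],
]
# zip(*rows) pairs up the n-th element of every row, which pivots rows into columns
pivoted = list(zip(*rows))
print(pivoted)
# [('Name', 'John'), ('Email', 'i@g.net'), ('Phone No.', '098765644')]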
This will return a pivoted behave table
import unittest

from behave.model import Table

class TurnTable(unittest.TestCase):
    """
    """
    def test_transpose(self):
        table = Table(
            ['Name', 'John', 'Mary'],
            rows=[
                ['Email', "john@example.com", "mary@example.com"],
                ['Phone', "0123456789", "9876543210"],
            ])
        aggregate = [table.headings[:]]
        aggregate.extend(table.rows)
        pivoted = list(zip(*aggregate))
        self.assertListEqual(pivoted,
            [('Name', 'Email', 'Phone'),
             ('John', 'john@example.com', '0123456789'),
             ('Mary', 'mary@example.com', '9876543210')])

        pivoted_table = Table(
            pivoted[0],
            rows=pivoted[1:])
        mary = pivoted_table.rows[1]
        self.assertEqual(mary['Name'], 'Mary')
        self.assertEqual(mary['Phone'], '9876543210')
You can also have a look at https://pypi.python.org/pypi/pivottable
I want to convert CSV to JSON in Python. I was able to convert simple CSV files to JSON, but I am not able to join two CSVs into one nested JSON.
emp.csv:
empid | empname | empemail
e123 | adam | adam@gmail.com
e124 | steve | steve@gmail.com
e125 | brian | brain@yahoo.com
e126 | mark | mark#msn.com
items.csv:
empid | itemid | itemname | itemqty
e123 | itm128 | glass | 25
e124 | itm130 | bowl | 15
e123 | itm116 | book | 50
e126 | itm118 | plate | 10
e126 | itm128 | glass | 15
e125 | itm132 | pen | 10
The output should look like this:
[{
    "empid": "e123",
    "empname": "adam",
    "empemail": "adam@gmail.com",
    "items": [{
        "itemid": "itm128",
        "itmname": "glass",
        "itemqty": 25
    }, {
        "itemid": "itm116",
        "itmname": "book",
        "itemqty": 50
    }]
},
and similar for others]
The code that I have written:
import csv
import json

empcsvfile = open('emp.csv', 'r')
jsonfile = open('datamodel.json', 'w')
itemcsvfile = open('items.csv', 'r')

empfieldnames = ("empid","name","phone","email")
itemfieldnames = ("empid","itemid","itemname","itemdesc","itemqty")

empreader = csv.DictReader(empcsvfile, empfieldnames)
itemreader = csv.DictReader(itemcsvfile, itemfieldnames)

output = []
empcount = 0
for emprow in empreader:
    output.append(emprow)
    for itemrow in itemreader:
        if(itemrow["empid"]==emprow["empid"]):
            output.append(itemrow)
    empcount = empcount + 1

print output

json.dump(output, jsonfile, sort_keys=True)
and it does not work.
Help needed. Thanks!
Okay, so you have a few problems. The first is that you need to specify the delimiter for your CSV file. You're using the | character, and by default Python expects ,. So you need to do this:
empreader = csv.DictReader( empcsvfile, empfieldnames, delimiter='|')
Second, you aren't appending the items to the employee dictionary. You probably should create a key called 'items' on each employee dictionary object and append the items to that list. Like this:
for emprow in empreader:
    emprow['items'] = []  # create a list to hold items for this employee
    ...
    for itemrow in itemreader:
        ...
        emprow['items'].append(itemrow)  # append an item for this employee
Third, each time you loop through an employee, you need to go back to the top of the items CSV file. You have to realize that once Python reads to the bottom of a file, it won't just go back to the top on the next loop; you have to tell it to do that. Right now, your code reads through the items.csv file after the first employee is processed, then stays at the bottom of the file for all the other employees. You have to use seek(0) to tell it to go back to the top of the file for each employee.
for emprow in empreader:
    emprow['items'] = []
    output.append(emprow)
    itemcsvfile.seek(0)
    for itemrow in itemreader:
        if(itemrow["empid"]==emprow["empid"]):
            emprow['items'].append(itemrow)
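Putting the three fixes together, a minimal end-to-end sketch might look like this (assuming both files are pipe-delimited and start with a header row, which is why the next() calls skip it; the field names follow the headers shown above):

import csv
import json

empfieldnames = ("empid", "empname", "empemail")
itemfieldnames = ("empid", "itemid", "itemname", "itemqty")

with open('emp.csv', 'r') as empcsvfile, \
     open('items.csv', 'r') as itemcsvfile, \
     open('datamodel.json', 'w') as jsonfile:
    empreader = csv.DictReader(empcsvfile, empfieldnames, delimiter='|')
    next(empreader)  # skip the header row, since fieldnames were passed explicitly

    output = []
    for emprow in empreader:
        emprow['items'] = []            # list to hold this employee's items
        output.append(emprow)
        itemcsvfile.seek(0)             # rewind items.csv for every employee
        itemreader = csv.DictReader(itemcsvfile, itemfieldnames, delimiter='|')
        next(itemreader)                # skip the header row again
        for itemrow in itemreader:
            if itemrow["empid"] == emprow["empid"]:
                emprow['items'].append(itemrow)

    json.dump(output, jsonfile, sort_keys=True)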
Columns are not matching:
empid | empname | empemail
empfieldnames = ("empid","name","phone","email")
empid | itemid | itemname | itemqty
itemfieldnames = ("empid","itemid","itemname","itemdesc","itemqty")
We usually use , instead of | in CSV.
What's more, JSON requires " rather than ' around strings.
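For example, the field names would need to mirror the headers shown in the question exactly:

empfieldnames = ("empid", "empname", "empemail")
itemfieldnames = ("empid", "itemid", "itemname", "itemqty")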
I’m having a problem printing a table with terminaltables.
Here's my main script:
from ConfigParser import SafeConfigParser
from terminaltables import AsciiTable

parser = SafeConfigParser()
parser.read('my.conf')

for section_name in parser.sections():
    description = parser.get(section_name, 'description')
    url = parser.get(section_name, 'url')
    table_data = [['Repository', 'Url', 'Description'], [section_name, url, description]]
    table = AsciiTable(table_data)
    print table.table
and here's the configuration file my.conf:
[bug_tracker]
description = some text here
url = http://localhost.tld:8080/bugs/
username = dhellmann
password = SECRET
[wiki]
description = foo bar bla
url = http://localhost.tld:8080/wiki/
username = dhellmann
password = SECRET
This prints me a table for each entry, like this:
+-------------+---------------------------------+------------------------+
| Repository | Url | Description |
+-------------+---------------------------------+------------------------+
| bug_tracker | http://localhost.foo:8080/bugs/ | some text here |
+-------------+---------------------------------+------------------------+
+------------+---------------------------------+-------------+
| Repository | Url | Description |
+------------+---------------------------------+-------------+
| wiki | http://localhost.foo:8080/wiki/ | foo bar bla |
+------------+---------------------------------+-------------+
but what I want is this:
+-------------+---------------------------------+------------------------+
| Repository | Url | Description |
+-------------+---------------------------------+------------------------+
| bug_tracker | http://localhost.foo:8080/bugs/ | some text here |
+-------------+---------------------------------+------------------------+
| wiki | http://localhost.foo:8080/wiki/ | foo bar bla |
+-------------+---------------------------------+------------------------+
How can I modify the script to get this output?
The problem is that you recreate table_data and table on each iteration of the loop. You print on each iteration, and then the old data gets thrown away and a new table gets started from scratch. There’s no overlap in the body of the tables you’re creating.
You should have a single table_data, which starts with the headings, then you gather all the table data before printing anything. Add the new entries on each iteration of the loop, and put the print statement after the for loop is finished. Here’s an example:
from ConfigParser import SafeConfigParser
from terminaltables import AsciiTable

parser = SafeConfigParser()
parser.read('my.conf')

table_data = [['Repository', 'Url', 'Description']]
for section_name in parser.sections():
    description = parser.get(section_name, 'description')
    url = parser.get(section_name, 'url')
    table_data.append([section_name, url, description])

table = AsciiTable(table_data)
print table.table
and here’s what it outputs:
+-------------+---------------------------------+----------------+
| Repository | Url | Description |
+-------------+---------------------------------+----------------+
| bug_tracker | http://localhost.tld:8080/bugs/ | some text here |
| wiki | http://localhost.tld:8080/wiki/ | foo bar bla |
+-------------+---------------------------------+----------------+
If you want to have a horizontal rule between the bug_tracker and wiki lines, then you need to set table.inner_row_border to True. So you’d replace the last two lines with:
table = AsciiTable(table_data)
table.inner_row_border = True
print table.table
I have a pretty simple N:M relationship in SQLAlchemy 0.6.6. I have a class "attractLoop" that can contain a bunch of Media (Images or Videos). I need to have a list in which the same Media (let's say an image) can be appended twice. The relationship is as follows:
Media is a base class with most of the attributes Images and Videos will share.
class BaseMedia(BaseClass.BaseClass, declarativeBase):
    __tablename__ = "base_media"

    _polymorphicIdentity = Column("polymorphic_identity", String(20), key="polymorphicIdentity")
    __mapper_args__ = {
        'polymorphic_on': _polymorphicIdentity,
        'polymorphic_identity': None
    }

    _name = Column("name", String(50))
    _type = Column("type", String(50))
    _size = Column("size", Integer)
    _lastModified = Column("last_modified", DateTime, key="lastModified")
    _url = Column("url", String(512))
    _thumbnailFile = Column("thumbnail_file", String(512), key="thumbnailFile")
    _md5Hash = Column("md5_hash", LargeBinary(32), key="md5Hash")
Then the class who is going to use these "media" things:
class TestSqlAlchemyList(BaseClass.BaseClass, declarativeBase):
    __tablename__ = "tests"

    _mediaItems = relationship("BaseMedia",
                               secondary=intermediate_test_to_media,
                               primaryjoin="tests.c.id == intermediate_test_to_media.c.testId",
                               secondaryjoin="base_media.c.id == intermediate_test_to_media.c.baseMediaId",
                               collection_class=list,
                               uselist=True
                               )

    def __init__(self):
        super(TestSqlAlchemyList, self).__init__()
        self.mediaItems = list()

    def getMediaItems(self):
        return self._mediaItems

    def setMediaItems(self, mediaItems):
        if mediaItems:
            self._mediaItems = mediaItems
        else:
            self._mediaItems = list()

    def addMediaItem(self, mediaItem):
        self.mediaItems.append(mediaItem)
        #log.debug("::addMediaItem > Added media item %s to %s. Now length is %d (contains: %s)" % (mediaItem.id, self.id, len(self.mediaItems), list(item.id for item in self.mediaItems)))

    def addMediaItemById(self, mediaItemId):
        mediaItem = backlib.media.BaseMediaManager.BaseMediaManager.getById(int(mediaItemId))
        if mediaItem:
            if mediaItem.validityCheck():
                self.addMediaItem(mediaItem)
            else:
                raise TypeError("Media item with id %s didn't pass the validity check" % mediaItemId)
        else:
            raise KeyError("Media Item with id %s not found" % mediaItemId)

    mediaItems = synonym('_mediaItems', descriptor=property(getMediaItems, setMediaItems))
And the intermediate class to link both of the tables:
intermediate_test_to_media = Table(
    "intermediate_test_to_media",
    Database.Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("test_id", Integer, ForeignKey("tests.id"), key="testId"),
    Column("base_media_id", Integer, ForeignKey("base_media.id"), key="baseMediaId")
)
When I append the same Media object (instance) twice to one instance of that TestSqlAlchemyList, both appends work correctly, but when I retrieve the TestSqlAlchemyList instance from the database, I only get one. It seems to be behaving more like a set.
The intermediate table properly has all the information, so the insertion seems to be working fine. It is when I try to load the list from the database that I don't get all the items I had inserted.
mysql> SELECT * FROM intermediate_test_to_media;
+----+---------+---------------+
| id | test_id | base_media_id |
+----+---------+---------------+
| 1 | 1 | 1 |
| 2 | 1 | 1 |
| 3 | 1 | 2 |
| 4 | 1 | 2 |
| 5 | 1 | 1 |
| 6 | 1 | 1 |
| 7 | 2 | 1 |
| 8 | 2 | 1 |
| 9 | 2 | 1 |
| 10 | 2 | 2 |
| 11 | 2 | 1 |
| 12 | 2 | 1 |
+----+---------+---------------+
As you can see, the "test" instance with id=1 should have the media [1, 1, 2, 2, 1, 1]. Well, it doesn't. When I load it from the DB, it only has the media [1, 2]
I have tried setting every parameter in the relationship that could possibly hint at list behaviour (uselist, collection_class=list)... Nothing.
You will see that the classes inherit from a BaseClass. That's just a class that isn't actually mapped to any table but contains a numeric field ("id") that will be the primary key for every class and a bunch of other methods useful for the rest of the classes in my system (toJSON, toXML...). Just in case, I'm attaching an excerpt of it:
class BaseClass(object):
    _id = Column("id", Integer, primary_key=True, key="id")

    def __hash__(self):
        return int(self.id)

    def setId(self, id):
        try:
            self._id = int(id)
        except TypeError:
            self._id = None

    def getId(self):
        return self._id

    @declared_attr
    def id(cls):
        return synonym('_id', descriptor=property(cls.getId, cls.setId))
If anyone can give me a push, I'd appreciate it a lot. Thank you. And sorry for the huge post... I don't really know how to explain it better.
I think what you want is described in the documentation as "Extra Fields in Many-to-Many Relationships". Rather than storing a unique row in the database for each "link" between attractLoop and Media, you would store a single association and specify (as part of the link object model) how many times it is referenced and/or in which location(s) in the final list the media should appear. This is a different paradigm from where you started, so it will certainly require some re-coding, but I think it addresses your issue. You would likely need to use a property to redefine how to add or remove Media from the attractLoop.
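A rough sketch of that idea (the class and column names here are illustrative, not taken from your model; it reuses your declarativeBase and the table names from the question, and the count column is the "extra field" that records how many times a media item appears):

from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship

class TestToMedia(declarativeBase):
    # association object replacing the plain secondary table
    __tablename__ = "intermediate_test_to_media"
    id = Column(Integer, primary_key=True)
    test_id = Column(Integer, ForeignKey("tests.id"))
    base_media_id = Column(Integer, ForeignKey("base_media.id"))
    count = Column(Integer, default=1)         # how many times this media is referenced
    position = Column(Integer, nullable=True)  # optional: where it sits in the final list

    media = relationship("BaseMedia")

class TestSqlAlchemyList(declarativeBase):
    __tablename__ = "tests"
    id = Column(Integer, primary_key=True)
    media_links = relationship("TestToMedia")  # link objects instead of BaseMedia directly

    @property
    def mediaItems(self):
        # expand the link objects back into a flat list, honouring the repeat count
        items = []
        for link in self.media_links:
            items.extend([link.media] * link.count)
        return items

    def addMediaItem(self, mediaItem):
        # bump the count if a link for this media already exists, otherwise add a new link
        for link in self.media_links:
            if link.media is mediaItem:
                link.count += 1
                return
        self.media_links.append(TestToMedia(media=mediaItem))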