I wrote a Python-based exam program, and under the current setup I can already build it as an .exe. I am also considering releasing this software as a commercial product. For example, when a person buys this software for 1 year, they will be given a license key, and this license key will become unusable after one year. Or, as that person approaches the end of the year, they can extend their contract and use it for two more years. There are some issues that I can't solve at this point.
1 - How can I remotely update a value in the license key, such as the expiration date?
2 - How can I turn the .exe I can already build into a setup.exe (I want to package the program as an installer)?
3 - How can I create a system that lets me update the program remotely while a user, perhaps in another country, is using it?
So at the end of the day I want to make my program sellable. Can you help me with the path I should follow?
(Some information can be found on Google, but I couldn't find a deep enough resource.)
For the license part -
One of the simplest things to do would be to have the activation key be the expiration date, and then store the activation key in a txt file. On every program start, read this txt file and compare the date.
To prevent the user from bypassing this by editing the txt file, you could use some kind of asymmetric cryptography. You have one key hardcoded into the software and one private key that only you hold. The activation key would be the date signed with your private key, which the app can then verify with its embedded key.
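For illustration, a minimal sketch of this signed-key idea using the `cryptography` package (an assumption; any asymmetric signature scheme works), with a serial number folded into the key as the next paragraph suggests. In a real product you would generate the key pair once and hardcode only the public key bytes into the shipped app:

```python
# Hedged sketch: sign "expiry|serial" with a private Ed25519 key (vendor
# side) and verify it with the public key embedded in the app.
import base64
import datetime
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- vendor side (kept secret) ---
private_key = Ed25519PrivateKey.generate()
public_key_bytes = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

def make_license(expiry: str, serial: str) -> str:
    payload = f"{expiry}|{serial}".encode()
    signature = private_key.sign(payload)  # 64-byte Ed25519 signature
    return base64.urlsafe_b64encode(signature + payload).decode()

# --- app side (only public_key_bytes ships inside the .exe) ---
def check_license(key: str) -> bool:
    raw = base64.urlsafe_b64decode(key.encode())
    signature, payload = raw[:64], raw[64:]
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(signature, payload)
    except Exception:
        return False  # forged or tampered key
    expiry, _serial = payload.decode().split("|")
    return datetime.date.fromisoformat(expiry) >= datetime.date.today()

print(check_license(make_license("2030-01-01", "SER-0001")))  # True until expiry
```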
To prevent the user from sharing this key, you could give each download a serial number and include both the serial number and the date in each key.
To prevent a user from sharing the entire program along with the key, you would need to do something network-based, e.g. check the number of IPs per serial number. But that can also be spoofed, or cause the software to become unreliable if there are network issues.
In the end there is no 100% crack prevention, as you might have seen from the vast amount of cracked software online.
For the remote part -
You could apply various restrictions, such as making sure the system time is correct, or allowing only one IP per key.
You could have the application check for and download the new exe.
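A hedged sketch of that update check; the URL and the JSON shape are invented placeholders, not a real service:

```python
# Ask a version endpoint whether a newer build exists; if so, download it.
import requests

APP_VERSION = "1.0.0"
UPDATE_URL = "https://example.com/exam-app/latest.json"  # hypothetical endpoint

def check_for_update() -> bool:
    info = requests.get(UPDATE_URL, timeout=5).json()  # e.g. {"version": "1.1.0", "url": "..."}
    if info["version"] != APP_VERSION:
        new_exe = requests.get(info["url"], timeout=60)
        with open("exam-app-update.exe", "wb") as f:
            f.write(new_exe.content)
        return True  # caller can prompt the user to restart into the new build
    return False
```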
I'm relatively new to MongoDB. I've done stuff in it before, but my current project involves using collections to store values per "key". In this case, a key refers to a string of characters that will be used to access my software. The authentication and key generation are done on my website with Flask as the backend, which means I can use Python to handle all the key generation and authentication. I have the code mostly complete; it generates and authenticates keys amazingly well, and I'm really happy with how it works. The problem I now face is getting the collection or key to delete automatically after 3 days.
The reason I want them deleted after 3 days is that the keys aren't lifetime keys. The software is free, but in order to use it you must have a key, and that key should expire after a certain amount of time (in this case, 3 days), after which the user must go back and get another one.
Please note that I can't use individual documents: first, I've already set everything up to use collections, and second, it needs to store multiple documents rather than just one.
I've already tried TTL on the collection but it doesn't seem to be working.
What's the best way to do this? Keep in mind the collection name is the key itself, so it can't carry a deletion date inside it (a date that other code scans so the collection is deleted once that date is reached).
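For reference, a TTL index in MongoDB expires individual documents based on a date field, not whole collections, which may be why TTL "on the collection" appears to do nothing. A minimal PyMongo sketch of the usual per-document pattern (connection string and names are placeholders):

```python
# Documents whose createdAt is older than 3 days are removed automatically
# by MongoDB's TTL monitor (it runs roughly once a minute).
import datetime
from pymongo import MongoClient

keys = MongoClient("mongodb://localhost:27017")["licensing"]["keys"]

keys.create_index("createdAt", expireAfterSeconds=3 * 24 * 60 * 60)
keys.insert_one({
    "key": "ABCD-1234",                       # the generated key string
    "createdAt": datetime.datetime.utcnow(),  # must be a real BSON date
})
```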
A friend of mine has a small business and I offered to help her. For me it would be a great case study to learn a few things along the way, as I'm a beginner in Python and SQL.
At the moment she uses Excel to do her administration, but she is not very skilled at it, and her accountant often comes back with errors in her sheet. Hence, I want to help her out by creating a simple and straightforward tool in Python (let's call it ERP for future reference), connected to a DB, plus a manual 3-monthly output (I need to give the accountant a call about his preference) which she can email to her accountant.
I've broken it down into 4 phases for myself, and despite some things I found online, I'm a bit fearful that I'll invest a lot of time and get an unstable result.
Phase 1. Host an SQLite DB online and make sure the ERP can read, update, and back up the DB.
Phase 2. Write the ERP with a simple CRM and an "order" DB (1 "order" would be a customer for 30-60 minutes).
Phase 3. A timesheet/calendar (for making appointments), plus a rule: if my friend had a customer, it automatically adds a 1.5-hour "clean room" entry to that day. (The timesheet is an IRS thing; she needs to prove how many hours she worked.)
Phase 4. Live testing.
The ERP would just be a GUI connected to the DB.
With 4 tabs, "CRM", "Appointments/Orders", "Calendar (optional)",
So my questions per phase are:
Phase 1
SQLite is free (vs MSSQL) and normally hosted locally. I want to host it externally to avoid losing the DB if, e.g., the HDD crashes or the PC is formatted by someone else (my friend needs help from someone who might not be aware of the DB). Dropbox apparently doesn't work. GitHub should be possible. Is GitHub safe and stable for this? Or would it be better to host it on a website, with a login to reach the DB? That seems more complex, but once it works I could reuse the website method for more programs/tools. Unfortunately I can't find much on it, other than that it should be possible.
According to this article, https://www.geeksforgeeks.org/how-to-create-a-backup-of-a-sqlite-database-using-python/, a backup can be created easily. However, I would like to do it:
At an interval, e.g. weekly: at startup, Python checks whether the week number (2022_33) already exists or whether the last backup was more than 7 days ago. Any idea how I can achieve this? (See the sketch after these questions.)
Automatically, so my friend doesn't have to remember to do it or press a button (user-friendly).
Would this be possible? If so, does anyone have additional documentation to achieve this?
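For illustration, a minimal sketch of the weekly-backup idea, using sqlite3's online backup API; file and folder names are assumptions:

```python
# Call once at program start: back up the DB into backups/<year>_<week>.db
# unless a backup for the current ISO week already exists.
import datetime
import pathlib
import sqlite3

DB_FILE = "erp.db"
BACKUP_DIR = pathlib.Path("backups")

def backup_if_due() -> None:
    BACKUP_DIR.mkdir(exist_ok=True)
    year, week, _ = datetime.date.today().isocalendar()
    target = BACKUP_DIR / f"{year}_{week:02d}.db"  # e.g. backups/2022_33.db
    if target.exists():
        return  # this week's backup is already there
    src = sqlite3.connect(DB_FILE)
    dst = sqlite3.connect(target)
    with dst:
        src.backup(dst)  # sqlite3's online backup API
    dst.close()
    src.close()

backup_if_due()  # runs automatically at startup; no button to press
```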
Phase 2
Pretty sure I can achieve this by myself / read up where needed. Most common things can be found online, have been asked before, or I've done them before :).
Phase 3
I have no clue here. Although it is optional (it would also be a great addition when turning this into a mobile app), I would love to learn how to do it. I did find how to create a calendar (view dates only), but not one where you can "plan" appointments, similar to, e.g., Outlook and scheduling a Teams meeting. I would assume this could be a DB in SQLite?
Where I would be totally lost is how to write a function that checks IF an appointment exists on a day, THEN sets "Clean_room" = 1, ELSE "Clean_room" = 0, and automatically adds an entry of "90 minutes" for "Clean_room" in the "Timesheet" DB (see the sketch below).
Any help on this would be very appreciated; most likely I've used the wrong search terms to find a piece of code / article that covers this "option", as I'm more than sure it has been done before.
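A hedged sketch of that rule under an assumed schema (table and column names are illustrative only): for every day with at least one appointment, ensure the timesheet has one 90-minute "Clean_room" entry:

```python
# Insert a Clean_room entry for each appointment day that lacks one.
import sqlite3

def add_cleaning_entries(db_path: str = "erp.db") -> None:
    con = sqlite3.connect(db_path)
    with con:  # commits on success
        con.execute("""
            INSERT INTO timesheet (day, activity, minutes)
            SELECT DISTINCT a.day, 'Clean_room', 90
            FROM appointments AS a
            WHERE NOT EXISTS (
                SELECT 1 FROM timesheet AS t
                WHERE t.day = a.day AND t.activity = 'Clean_room'
            )
        """)
    con.close()
```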
Phase 4
As with many things, it feels far, far, away now...
Many thanks in advance for taking the time to read this extended "question".
The SQLite database can be compiled to WebAssembly (WASM) and served using static website hosting. SQLite stores data in a single file, so you can avoid downloading the whole thing by making clever use of HTTP range requests, where the client asks for a specific byte range and the server replies with a 206 Partial Content response carrying just those bytes. See the MDN docs on HTTP range requests.
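For illustration, what a range request looks like from Python; the URL is a placeholder:

```python
# Fetch only the first 4096 bytes of a remotely hosted SQLite file.
import requests

resp = requests.get(
    "https://example.com/data.sqlite",       # hypothetical static host
    headers={"Range": "bytes=0-4095"},
)
print(resp.status_code)   # 206 if the server supports ranges, 200 otherwise
print(len(resp.content))  # 4096 bytes when the range was honored
```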
Be careful: SQLite served this way will be usable only for reading data, since SQLite is just a file on a storage unit such as a hard disk or memory.
I don't see why you want to use SQLite exactly, knowing it is only a local database; I recommend using another SQL database for your application. The process is much the same, and in particular the SQL won't change for almost all queries; see The Most Popular SQL Databases for reference. I mostly use MySQL or PostgreSQL for SQL databases; check them out.
I am working on a Data Quality Monitoring project, which is new to me.
I started with data profiling to analyse my data and get a global view of it.
Next, I thought about defining some data quality rules, but I'm a little confused about how to implement them.
I'd appreciate it if you could guide me a little, as I'm totally new to this.
This is quite an ambiguous question, but I'll try to offer a few tips on how to start. Since you are new to data quality and already want implementation hints, let's start from there.
Purpose: a data quality monitoring system wants to a) recognize errors and b) trigger the next step that handles them.
First, build a data quality rule for your data set. A rule can operate at the attribute, record, table, or cross-table level. Let's start with an attribute-level rule. Implement a rule that recognizes that an attribute's content does not contain '@'. Run it against email attributes and create an error record for each row whose email attribute lacks '@'. The error record should have these attributes:
ErrorInstanceID; ErrorName; ErrorCategory; ErrorRule; ErrorLevel; ErrorReaction; ErrorScript; SourceSystem; SourceTable; SourceRecord; SourceAttribute; ErrorDate;
"asd2321sa1"; "Email Format Invalid"; "AttributeError"; "Does not contain #"; "Warning|Alert"; "Request new email at next login"; "ScriptID x"; "Excel1"; "Sheet1"; "RowID=34"; "Column=Email"; "1.1.2022"
MONITORING SYSTEM
You need to make the above scripts configurable so that you can easily change systems, tables, and columns as well as rules. When run on top of data sets, they will all populate error records into the same structures, resulting in consistent, historical storage of all errors. You should be able to build reports about existing errors in specific systems, trends of errors appearing or getting fixed, and so on.
Next, you need to start building a full-scale data quality metadata repository with a proper data model, and design suitable historical versioning for the above information. You need to store information such as which rules were run and when, and which systems and tables they checked, both to detect which systems have been included in monitoring and to recognize when systems are not monitored with the correct rules. In practice, this is quality monitoring for the data quality monitoring system itself. You should have statistics on which systems are monitored with which rules, when the rules were last run, and aggregates of inspected tables, records, and errors.
Typically, it's more important to focus on errors that need immediate attention and "alert" an end user to go fix the issue, or that trigger a new workflow or flag in the source system. For example, invalid emails might be categorized as mere warnings and remain aggregate statistics: we have 2,134,223 invalid emails; nobody cares. However, it might be more important to recognize the invalid email of a person who has requested his bills as digital invoices to his email. Alert. That kind of error (Invalid Email AND Email Invoicing) should trigger an alert and set a flag in the CRM for end users to try to get the email fixed. There need not be any stored error records for this case, but this kind of rule should be run on top of all systems that store customer contact and billing preferences.
For a technical person, I can recommend this book; it goes deeper into the technical and logical issues of data quality assessment and monitoring systems, and it includes a small metadata model for data quality metadata structures: https://www.amazon.com/Data-Quality-Assessment-Arkady-Maydanchik/dp/0977140024/
I am working on a web application for downloading resources of an unimportant type. It's written in Python using the Flask web framework, and I use SQLAlchemy for the DB.
It has a user authentication system and you can download the resources only while logged in.
What I am trying to do is a download history chart for every resource and every user. To elaborate, each user could see two charts of their download activity on their profile page, for the last 7 days and the last year respectively. Each resource would also have a similar pair of charts, but they would instead visualize how many times the resource itself was downloaded in the time periods.
Here is an example screenshot of the charts
(Don't have enough reputation to embed images)
http://dl.dropbox.com/u/5011799/Selection_049.png
The problem is, I can't seem to figure out the best way to store the downloads in a database. I found 2 ways that are relatively easy to implement and should work:
1) I could store the download count for each day of the last week in separate fields, and every 24 hours drop the first one and shift the rest left by one. This, however, seems like a hacky way to do it.
2) I could also create a separate table for downloads, and every time a user downloads a resource, insert a row with the datetime, the user_id of the downloader, and the resource_id of the downloaded resource. This would allow me to do some nice querying of time periods, etc. The problem with this configuration could be the row count in the table. I have no idea how heavily the website is going to be used, but if I do the math with 1000 downloads/day, I end up with over 360k rows in just the first year, and I don't know how fast that would perform. I know I could just archive old entries if performance became a huge problem.
I would like to know whether the 2nd option would be fast enough for a web app and what configuration you would use.
Thanks in advance.
I recommend the second approach, with periodic aggregation to improve performance.
Storing counts by day will force you to SELECT the existing count so that you can either add to it with an UPDATE statement or know that you need to INSERT a new record. That's two trips to the database on every download. And if things get out of whack, there's really no easy way to determine what happened or what the correct numbers ought to be. (You're not saving information about the individual events.) That's probably not a significant concern for a simple download count, but if this were sensitive information it might matter.
The second approach simply requires a single INSERT for each download, and because each event is stored separately, it's easy to troubleshoot. And, as you point out, you can slice this data any way you like.
As for performance, 360,000 rows is trivial for a modern RDBMS on contemporary hardware, but you do want to make sure you have an index on date, username/resource name or any other columns that will be used to select data.
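For illustration, option 2 as a SQLAlchemy model sketch; table and column names are assumptions:

```python
# One row per download event, indexed on the columns the charts filter by.
from sqlalchemy import Column, DateTime, Integer, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Download(Base):
    __tablename__ = "downloads"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, index=True)         # FK to your users table
    resource_id = Column(Integer, index=True)     # FK to your resources table
    downloaded_at = Column(DateTime, index=True)  # the charts slice on this

engine = create_engine("sqlite:///downloads.db")
Base.metadata.create_all(engine)
```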
Still, you might have more volume than you expect, or maybe your DB is iffy (I'm not familiar with SQLAlchemy). To reduce your row count you could create a weekly batch process (yeah, I know, batch ain't dead despite what some people say) during non-peak hours to create summary records by week.
It would probably be easiest to create your summary records in a different table that is simply keyed by week and year, or start/end dates, depending on how you want to use it. After you've generated the weekly summary for a period, you can archive or delete the daily detail records for that period.
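A hedged sketch of that weekly roll-up as plain SQL run from Python; the summary table and its columns are assumptions:

```python
# Aggregate detail rows for one period into a summary table, then delete
# the detail rows for that period. Run during non-peak hours.
import sqlite3

def summarize_period(db_path: str, start: str, end: str) -> None:
    # start/end as 'YYYY-MM-DD' strings, end exclusive
    con = sqlite3.connect(db_path)
    with con:
        con.execute("""
            INSERT INTO download_summary
                (resource_id, period_start, period_end, downloads)
            SELECT resource_id, ?, ?, COUNT(*)
            FROM downloads
            WHERE downloaded_at >= ? AND downloaded_at < ?
            GROUP BY resource_id
        """, (start, end, start, end))
        con.execute(
            "DELETE FROM downloads WHERE downloaded_at >= ? AND downloaded_at < ?",
            (start, end),
        )
    con.close()
```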
I apologize if the question is a little vague, but I'm trying to find the simplest way to do this.
I have a small group of people for whom I have written a Python script. The script summarizes articles mined from a website (each unique by an ID number) when the user runs it with some parameters. Each user might choose to "claim" one or more articles, which means they will be working on them, so any future run of the script should omit "claimed" articles from its summary.
I need a way to have a globally accessible file, which my script accesses and checks its output against.
I also need to have a way for the user to add multiple id numbers to this global file.
I understand that a rudimentary database might be the best way to go, but is there a simpler way to read and edit files remotely in Python? I can host this file on my personal webspace, but I'm not sure of the simplest way to read and edit it, since I'm relatively new to Python.
The number of users is small and constant so it does not have to be very robust, just needs to work.
Language: Python
Thanks!
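For illustration only, one rudimentary shape this could take: read the claims file over HTTP from the webspace (the URL is a placeholder). Note that adding IDs remotely would still need some small server-side endpoint or upload mechanism, which is where a simple database usually ends up being easier:

```python
# Read the shared "claimed ids" file and filter article ids against it.
import requests

CLAIMS_URL = "https://example.com/claimed_ids.txt"  # hypothetical location

def fetch_claimed_ids() -> set:
    text = requests.get(CLAIMS_URL, timeout=5).text
    return {line.strip() for line in text.splitlines() if line.strip()}

def unclaimed(article_ids) -> list:
    claimed = fetch_claimed_ids()
    return [a for a in map(str, article_ids) if a not in claimed]

print(unclaimed([101, 102, 103]))  # ids not yet claimed by anyone
```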