Diffstat (limited to 'docs/dev'):

    -rw-r--r--  docs/dev/background_jobs.md                 | 62
    -rw-r--r--  docs/dev/quality_assurance_on_csv_files.md  | 52

    2 files changed, 114 insertions, 0 deletions
diff --git a/docs/dev/background_jobs.md b/docs/dev/background_jobs.md
new file mode 100644
index 0000000..1a41636
--- /dev/null
+++ b/docs/dev/background_jobs.md
@@ -0,0 +1,62 @@

# Background Jobs

We run background jobs for long-running processes, e.g. quality-assurance
checks across multiple huge files, inserting huge datasets into databases,
etc. The system needs to keep track of the progress of these jobs and
communicate their state to the user whenever the user requests it.

This document details some thoughts on how to handle these jobs, especially
under failure conditions.

We currently use Redis[^redis] to keep track of the state of the background
processes.

Every background job started will have a Redis[^redis] key with the prefix
`gn-uploader:jobs`.

## Users

Currently (2024-10-23T13:29UTC-05:00), we do not track the user that started
a job. Moving forward, we will track this information.

We could have keys of the form `gn-uploader:jobs:<user-id>:<job-id>`.

Another option is to track any particular user's jobs with a key of the form
`gn-uploader:users:<user-id>:jobs`, and in that case have the job keys take
the form `gn-uploader:jobs:<job-id>`. I (@fredmanglis) favour this option over
having the user's ID in the job keys directly, since it provides a way to
interact with **ALL** the jobs without indirecting through each specific user.
This is a useful ability to have, especially for system-administration tasks.

## Multiprocessing Within Jobs

Some jobs, e.g. quality-assurance jobs, can run multiple threads/processes
themselves. This brings up a problem because Redis[^redis] does not allow
parallel access to a key, especially for writing.

We also do not want to create bottlenecks by having multiple
threads/processes write to the same key.

The design I have currently come up with, which might work, is as follows:

- Just before the point where multiple threads/processes are started, build a
  list of new keys, each of which will collect the output of a single
  thread/process.
- Record these keys in the parent job's Redis data.
- Start the threads/processes; each does whatever work it needs to, pushing
  its output to its own key within Redis (a sketch follows this list).

The new keys for the child threads/processes could build on the same naming
theme as the parent job's key.
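A minimal sketch of this design using the redis-py client; the child-key
pattern `gn-uploader:jobs:<job-id>:children:<child-id>` and all function
names here are hypothetical illustrations, not existing code:

```python
# Sketch only: assumes redis-py is available; the key pattern
# `gn-uploader:jobs:<job-id>:children:<child-id>` is a hypothetical
# extension of the parent key's naming theme.
from multiprocessing import Process
from uuid import uuid4

from redis import Redis


def child_worker(redis_url: str, child_key: str, chunk: list) -> None:
    """Each worker writes only to its own key, avoiding write contention."""
    conn = Redis.from_url(redis_url, decode_responses=True)
    for item in chunk:
        conn.rpush(child_key, f"processed {item}")


def run_job_children(redis_url: str, job_id: str, chunks: list) -> list[str]:
    """Fan work out to child processes, one Redis key per child."""
    conn = Redis.from_url(redis_url, decode_responses=True)
    parent_key = f"gn-uploader:jobs:{job_id}"
    # Build the child keys up-front and record them in the parent's data,
    # so that a later status request can find every child's output.
    child_keys = [f"{parent_key}:children:{uuid4()}" for _ in chunks]
    conn.hset(parent_key, mapping={"status": "running",
                                   "children": ",".join(child_keys)})
    processes = [Process(target=child_worker, args=(redis_url, key, chunk))
                 for key, chunk in zip(child_keys, chunks)]
    for proc in processes:
        proc.start()
    for proc in processes:
        proc.join()
    conn.hset(parent_key, "status", "completed")
    return child_keys
```

Because each child owns its key, no two writers ever touch the same key, and
the parent key remains the single place to discover a job's children.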
## Fetching Jobs Status

Different jobs could have different requirements for handling/processing
their outputs, and those of any children they might spawn. The system will
need to provide a way to pass in the correct function/code to process the
outputs at the point where the job status is requested.

This implies that we need to track the type of each job in order to be able
to select the correct code for processing its output.

## Links

[^redis]: https://redis.io/

diff --git a/docs/dev/quality_assurance_on_csv_files.md b/docs/dev/quality_assurance_on_csv_files.md
new file mode 100644
index 0000000..02d63c9
--- /dev/null
+++ b/docs/dev/quality_assurance_on_csv_files.md
@@ -0,0 +1,52 @@

# Quality Assurance/Control on CSV Files

## Abbreviations

- CSV files: Character-separated-values files — data files structured in a
  table-like format, with a specific character chosen as the column/field
  separator. The comma (,) is the most common field separator, but it is
  possible to encounter files with other characters separating the values.

## General Pattern

A general pattern has emerged when performing quality assurance on the data
in CSV files — the sketch below shows the general pattern:

```python
def qc_function(filepath, separator=","):
    """Run the QC checks on one file, returning the errors found."""
    # The perform_qc_on* functions are placeholders for the actual,
    # file-type-specific checks.
    errors = []
    with open(filepath, encoding="utf-8") as csvfile:
        headers = next(csvfile).strip().split(separator)
        errors += perform_qc_on_headings(headers)
        for line in csvfile:
            fields = line.strip().split(separator)
            errors += perform_qc_on_first_column(fields[0])
            for field in fields[1:]:
                errors += perform_qc_on_field(field)
    return errors
```

We want to list the errors found in each file, so it makes sense for the
`perform_qc_on*` functions in the sketch above to return the list of errors
they find.

The actual quality assurance done on the headers, the first column of the
data rows, and the remaining fields can differ from one type of file to the
next, but the structure remains relatively unchanged.

This implies we could make use of a higher-order function that contains the
general structure, with the actual QC steps passed in as functions that are
called by the higher-order structuring function. This gives something like:

```python
def qc_function(filepath, headers_qc, first_column_qc, data_qc,
                separator=",", comment_char="#"):
    """Generic QC runner: the actual checks are passed in as functions."""
    errors = []
    seen_header = False
    with open(filepath, encoding="utf-8") as csvfile:
        for line in csvfile:
            if line.startswith(comment_char):
                continue  # skip comment lines
            fields = line.strip().split(separator)
            if not seen_header:
                # The first non-comment line is the header line.
                errors += headers_qc(fields)
                seen_header = True
            else:
                # Data line: QC the first column, then the remaining fields.
                errors += first_column_qc(fields[0])
                for field in fields[1:]:
                    errors += data_qc(field)
    return errors
```

## Improvements

- Read the file in a separate generator function (see the first sketch below)
- Parallelize QC if many files are present (see the second sketch below)
- Add logging/output to keep the user updated (how do we do this correctly?)
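The first improvement could look something like the following sketch;
`read_csv_rows` is a hypothetical name, and the separator/comment defaults
are illustrative:

```python
from collections.abc import Iterator


def read_csv_rows(filepath: str, separator: str = ",",
                  comment_char: str = "#") -> Iterator[list[str]]:
    """Yield parsed, non-comment rows one at a time, keeping memory use
    flat even for huge files."""
    with open(filepath, encoding="utf-8") as csvfile:
        for line in csvfile:
            if line.startswith(comment_char):
                continue
            yield line.strip().split(separator)
```

`qc_function` would then iterate over `read_csv_rows(filepath)` instead of
opening the file itself, separating parsing from checking and making each
easier to test on its own.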
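Parallelising across files could build on the standard library's
`concurrent.futures`; this sketch assumes `qc_function` and the passed-in QC
functions are defined at module level so they can be pickled across
processes:

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial


def qc_all_files(filepaths, headers_qc, first_column_qc, data_qc):
    """Run QC on each file in its own process; map each file to its errors."""
    runner = partial(qc_function,
                     headers_qc=headers_qc,
                     first_column_qc=first_column_qc,
                     data_qc=data_qc)
    with ProcessPoolExecutor() as executor:
        return dict(zip(filepaths, executor.map(runner, filepaths)))
```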