- Use a much faster way of parsing the strains file
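The commit does not say what the speed-up was; as a purely illustrative guess, one common pattern for this kind of change is to read the strains file once into a set so that later membership checks are O(1) instead of re-scanning the file. The file name, delimiter and column layout below are assumptions.

```python
# Illustrative sketch only: parse the strains file a single time into a set.
from typing import Set

def parse_strains(filepath: str = "strains.csv") -> Set[str]:
    """Collect strain names from a delimited strains file (layout assumed)."""
    strains = set()
    with open(filepath, encoding="utf-8") as infile:
        next(infile, None)  # skip the header row, assumed to be present
        for line in infile:
            fields = line.strip().split(",")  # delimiter assumed
            if fields and fields[0]:
                strains.add(fields[0])
    return strains
```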
Enable the queuing of file-parsing jobs, since the files could be very large
and take a long time to parse and to present results.
* etc/default_config.py: Add default config for redis server
* manifest.scm: Add redis, and rq as dependencies
* qc_app/__init__.py
* qc_app/jobs.py: module to hold utilities for management of the jobs
* qc_app/parse.py: Enqueue the job - extract the file-parsing code into a
  callable function (see the sketch after this file list)
* qc_app/templates/base.html: Enable addition of extra meta tags
* qc_app/templates/job_progress.html: template to display job progress
* qc_app/templates/no_such_job.html: template to indicate when a job
id is invalid
* quality_control/parsing.py: Add the total size parsed so far
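A minimal sketch of the queuing flow described above, assuming a Redis server (as configured in etc/default_config.py) and an rq worker; the queue name, function names and status values are placeholders, not the module's actual API.

```python
# Hypothetical sketch of queuing a long-running file parse with redis/rq.
from redis import Redis
from rq import Queue
from rq.job import Job
from rq.exceptions import NoSuchJobError

redis_conn = Redis()  # connection details would come from the default config
queue = Queue("qc_app_jobs", connection=redis_conn)  # queue name assumed

def parse_file(filepath: str) -> dict:
    """Stand-in for the file-parsing function extracted into qc_app/parse.py."""
    ...

def enqueue_parse_job(filepath: str) -> str:
    """Queue the parse and return the job id used by the progress page."""
    job = queue.enqueue(parse_file, filepath)
    return job.get_id()

def job_status(job_id: str) -> str:
    """Look up a job; an unknown id would map to the no_such_job template."""
    try:
        return Job.fetch(job_id, connection=redis_conn).get_status()
    except NoSuchJobError:
        return "no-such-job"
```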
Advance the seek position once we have yielded an error, to avoid an infinite
loop in certain conditions where the `parse_errors` function would otherwise
resume parsing the file at the same position every time it encounters an
error.
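A sketch of the idea under stated assumptions: the parser is a generator that can resume from a byte offset, and `check_line` is an assumed per-line validation helper. The key point is that the offset yielded alongside an error already points past the offending line.

```python
# Hypothetical sketch: remember the position *after* the offending line, so a
# caller that resumes from `seek_pos` skips the bad line instead of looping.
def parse_errors(filepath: str, seek_pos: int = 0):
    # newline="" keeps original line endings so the byte counts stay accurate
    with open(filepath, encoding="utf-8", newline="") as infile:
        infile.seek(seek_pos)
        for line in infile:
            seek_pos += len(line.encode("utf-8"))  # advance past this line
            error = check_line(line)  # assumed validation helper
            if error:
                yield {"error": error, "seek_pos": seek_pos}
```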
To avoid processing all the items in an iterable, the `take` function is
added in this commit. It realises a limited number of items (specified at
call time) from the given iterable.
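A minimal version of such a helper (the exact signature in the repository may differ):

```python
from itertools import islice
from typing import Iterable, List, TypeVar

T = TypeVar("T")

def take(iterable: Iterable[T], num: int) -> List[T]:
    """Realise at most `num` items from `iterable`, leaving the rest unconsumed."""
    return list(islice(iterable, num))
```

Called as, say, `take(parse_errors(filepath), 10)`, it would surface the first ten issues without parsing the rest of a large file.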
Build a function to collect all the parsing errors into a "sequence"
of dict objects containing the issues found.
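One way such a collector could look; the dict keys and the `check_line` helper are assumptions, not the actual schema.

```python
# Hypothetical sketch: walk the file and collect every issue as a dict.
from typing import Sequence

def collect_errors(filepath: str) -> Sequence[dict]:
    """Collect all parsing issues found in `filepath` into a tuple of dicts."""
    errors = []
    with open(filepath, encoding="utf-8") as infile:
        for line_number, line in enumerate(infile, start=1):
            message = check_line(line)  # assumed per-line validation helper
            if message:
                errors.append({
                    "filename": filepath,
                    "line": line_number,
                    "message": message,
                })
    return tuple(errors)
```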
Derive a "correct" sample file from an existing sample file with errors, for
testing with large files.
Fix an issue caught by the tests.
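A rough illustration of how such a fixture could be derived, assuming the same per-line validation helper as above; the actual derivation may have been done differently.

```python
# Hypothetical sketch: copy only the lines the validator does not flag.
def derive_correct_file(bad_path: str, good_path: str) -> None:
    """Write a clean copy of `bad_path` to `good_path`, dropping flagged lines."""
    with open(bad_path, encoding="utf-8") as src, \
         open(good_path, "w", encoding="utf-8") as dst:
        for line in src:
            if not check_line(line):  # assumed validation helper
                dst.write(line)
```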
* Implement the remaining file-parsing tests and some helper functions
  needed to ensure the tests work (a sketch of one such helper follows).
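One plausible shape for such a test helper; the name, file layout and delimiter are assumptions.

```python
# Hypothetical test helper: write a small throw-away sample file for a test.
from pathlib import Path
from typing import List

def write_sample_file(directory: Path, lines: List[str],
                      name: str = "sample.tsv") -> str:
    """Create a sample file under `directory` and return its path as a string."""
    filepath = directory / name
    filepath.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return str(filepath)
```

A test could then build its input with `write_sample_file(tmp_path, [...])` using pytest's `tmp_path` fixture.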
* Improve tests that ensure parsing fails when the file has errors (see the sketch after this list)
* Add strains.csv file
* Implement minimum viable functionality that passes the implemented tests
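A self-contained illustration, assuming the "fails on errors" contract means the parser raises an exception on a malformed row; `parse_file` and `ParseError` are placeholders, not the module's actual API.

```python
# Hypothetical sketch pairing a minimum-viable parser with a failure test.
import pytest

class ParseError(Exception):
    """Placeholder for whatever error the real parser raises."""

def parse_file(filepath: str) -> list:
    """Minimum viable parser: every data row must be tab-separated numbers."""
    rows = []
    with open(filepath, encoding="utf-8") as infile:
        next(infile, None)  # skip the header row, assumed to be present
        for number, line in enumerate(infile, start=2):
            try:
                rows.append([float(field)
                             for field in line.strip().split("\t")[1:]])
            except ValueError as exc:
                raise ParseError(f"line {number}: {exc}") from exc
    return rows

def test_parsing_fails_when_file_has_errors(tmp_path):
    bad_file = tmp_path / "bad_sample.tsv"
    bad_file.write_text("id\tBXD1\nrow1\tnot-a-number\n", encoding="utf-8")
    with pytest.raises(ParseError):
        parse_file(str(bad_file))
```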
Add dummy failing tests and a stub for the parsing of the files
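The starting point might have looked roughly like this: a stub plus a test that is expected to fail until the stub is implemented. The names and the sample contents are placeholders.

```python
# Hypothetical sketch of the initial stub and one deliberately failing test.
def parse_file(filepath: str):
    """Stub: real parsing logic arrives in later commits."""
    raise NotImplementedError("file parsing is not implemented yet")

def test_parse_file_returns_parsed_rows(tmp_path):
    # Dummy failing test: the stub raises NotImplementedError, so this test
    # fails until parse_file gains a real implementation.
    sample = tmp_path / "strains.csv"
    sample.write_text("Strain\nBXD1\n", encoding="utf-8")
    assert parse_file(str(sample)) is not None
```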