* Use a much faster method of parsing the strains file
|
* Use SQLite to save the jobs' metadata and enable UI updates of the
parsing progress for large files
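A minimal sketch of what keeping job metadata in SQLite could look like so the UI can poll for progress; the table layout and the `jobs.db` path are illustrative assumptions, not the schema this commit actually uses.

    import sqlite3

    def init_jobs_db(db_path="jobs.db"):  # path is an assumption
        """Create a (hypothetical) jobs table if it does not exist."""
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS jobs ("
            "job_id TEXT PRIMARY KEY, filename TEXT, "
            "status TEXT, progress REAL)")
        conn.commit()
        return conn

    def update_progress(conn, job_id, progress):
        """Record how far the parse has gone, for the UI to poll."""
        conn.execute("UPDATE jobs SET status = 'parsing', progress = ? "
                     "WHERE job_id = ?", (progress, job_id))
        conn.commit()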
|
* Make the 'worker' functions independent of the application context by
passing all the details they need as arguments.
* Enable the display of parsing results.
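The gist of a context-free worker, sketched below with made-up parameter names: instead of reaching for Flask's `current_app` inside the worker, the caller extracts whatever the worker needs and passes it in as plain arguments.

    # Before: the worker had to run inside an application context, e.g.
    #     filepath = os.path.join(current_app.config["UPLOAD_FOLDER"], name)
    # After: everything arrives as arguments, so any rq worker process can run it.
    def parse_file_worker(job_id: str, filepath: str, filetype: str, db_path: str):
        """Parse `filepath` as `filetype`, recording progress for `job_id` in `db_path`."""
        ...  # open the file, parse it, and update the job record as it goes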
|
While the application is developed with GNU Guix, the end user might
not be using it; this commit therefore provides a way to install the
application with the usual Python package-management tools.
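For installation outside Guix, a conventional packaging file is usually all that is needed. The sketch below assumes a setuptools-based setup.py; the package name, version, and dependency list are guesses.

    # setup.py -- illustrative only
    from setuptools import setup, find_packages

    setup(
        name="qc_app",
        version="0.1.0",
        packages=find_packages(),
        install_requires=["Flask", "redis", "rq"],
    )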
|
* Create and push the application context for the worker functions
* Fix the update of meta fields
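Pushing an application context around the worker body is what lets `current_app`-dependent code keep working outside a request. A minimal, self-contained sketch; the real code would build the app from the project's factory rather than construct a bare Flask instance.

    from flask import Flask, current_app

    def run_with_context(job_id, do_work):
        """Create an app and push its context around the actual work."""
        app = Flask(__name__)  # stand-in for the project's application factory
        with app.app_context():
            current_app.logger.info("starting job %s", job_id)
            do_work(job_id)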
|
Enable the queuing of file-parsing jobs, since the files can be very
large and take a long time to parse and present results (see the sketch
after the file list below).
* etc/default_config.py: Add default config for redis server
* manifest.scm: Add redis and rq as dependencies
* qc_app/__init__.py
* qc_app/jobs.py: module to hold utilities for management of the jobs
* qc_app/parse.py: Enqueue the job; extract the file-parsing code into
  a callable function
* qc_app/templates/base.html: Enable addition of extra meta tags
* qc_app/templates/job_progress.html: template to display job progress
* qc_app/templates/no_such_job.html: template to indicate when a job
id is invalid
* quality_control/parsing.py: Add the total size parsed so far
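The sketch promised above: how enqueuing a parse with rq and Redis typically looks. The function names are placeholders for whatever qc_app/parse.py and qc_app/jobs.py actually define.

    from redis import Redis
    from rq import Queue

    def parse_file(filepath, filetype):
        """Stand-in for the file-parsing code extracted into a callable function."""
        ...

    def enqueue_parse(filepath, filetype):
        """Hand the long-running parse to an rq worker and return the job id."""
        queue = Queue(connection=Redis())  # server details come from the default config
        job = queue.enqueue(parse_file, filepath, filetype)
        return job.get_id()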
|
* qc_app/entry.py: Pass the filetype onward to the parsing endpoint
* qc_app/parse.py: Call the function(s) necessary to parse a file
* quality_control/errors.py: Fix argument passing to the superclass
  (see the sketch below)
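A generic example of the kind of superclass-argument fix the errors.py bullet describes; the class name and fields are made up.

    class ParseError(Exception):  # illustrative name
        def __init__(self, message, line_number):
            # Forward the arguments to Exception.__init__ so that str(exc)
            # and exc.args carry the details, instead of passing nothing
            # (or the wrong values) to the superclass.
            super().__init__(message, line_number)
            self.message = message
            self.line_number = line_number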
|
Advance the seek position once we have yielded an error, to avoid
causing an infinite loop in certain conditions, where the `parse_errors`
function would otherwise resume reading the file from the same position
each time it encounters an error.
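A sketch of the fix, assuming a generator that re-seeks the file between errors: record the position of the next line before yielding, so resuming never lands on the same bad line again. `check_line` is a hypothetical per-line validator.

    def parse_errors(filepath, check_line):
        """Yield one error per bad line, always advancing past the line first."""
        with open(filepath, encoding="utf-8") as infile:
            position = 0
            while True:
                infile.seek(position)
                line = infile.readline()
                if not line:              # end of file
                    break
                position = infile.tell()  # advance the resume point *before* yielding
                error = check_line(line)
                if error is not None:
                    yield error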
|
To avoid processing all the items in an iterable, this commit adds the
`take` function. It realises only a limited number of items (specified
at call time) from the given iterable.
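A minimal version of `take`, assuming it should realise at most `n` items and return them as a list:

    from itertools import islice

    def take(iterable, n):
        """Realise at most `n` items from `iterable` without touching the rest."""
        return list(islice(iterable, n))

For example, take(range(10**9), 5) returns [0, 1, 2, 3, 4] without iterating over the rest of the range.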
|
Add template(s) for the index page and some basic styling to get
started with.
|
Ignore the Flask instance directory if it is present in the
repository; its presence is mostly a development convenience.
|
Build a function to collect all the parsing errors into a "sequence"
of dict objects containing the issues found.
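A sketch of such a collector, assuming each error can be reduced to a line number and a message; the dict keys are guesses at what the templates might display.

    def collect_errors(errors):
        """Turn (line_number, message) pairs into plain dicts for the templates."""
        return tuple(
            {"line": line_number, "message": message}
            for line_number, message in errors)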
|
Add a basic scaffolding for the web interface to the quality-control
application.
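Such scaffolding usually amounts to an application factory; a minimal sketch in which the config import string and the blueprint wiring are assumptions.

    from flask import Flask

    def create_app(overrides=None):
        """Application factory for the quality-control web interface."""
        app = Flask(__name__, instance_relative_config=True)
        app.config.from_object("etc.default_config")  # import path assumed
        if overrides:
            app.config.update(overrides)
        # Blueprints for the upload/entry and parsing pages get registered here.
        return app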
|
Derive a "correct" sample file from an existing sample file with
errors, for testing with large files.
Fix an issue caught by the tests.
|
* Implement the remaining file-parsing tests and some helper functions
  needed to ensure the tests work.
|
* Improve the tests that ensure parsing fails when the file has errors
  (see the sketch below)
* Add the strains.csv file
* Implement the minimum viable functionality that passes the
  implemented tests
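The sketch promised above: a test asserting that parsing a known-bad file fails. The function, exception, and path names are placeholders for whatever quality_control.parsing actually exposes.

    import pytest

    from quality_control.parsing import parse_file, ParseError  # names assumed

    def test_parse_fails_for_file_with_errors():
        """Parsing a sample file that contains errors should raise."""
        with pytest.raises(ParseError):
            parse_file("tests/test_data/sample_with_errors.csv")  # path assumed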
|
Add dummy failing tests and a stub for the parsing of the files
|
Change the exception name to be more descriptive.
|
Add some sample files to be used for testing that the parsing works as
expected.
|
Without the `tests/__init__.py` file, the tests directory was not
considered a package, and therefore running:
    $ pytest
would fail with import errors. This commit fixes that.
|
Add a minimum viable implementation that passes the tests for the
function that checks for the validity of the headers
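A sketch of a minimal header check, assuming the rule is simply that every column after the first must name a known strain:

    def valid_header(strains, headers):
        """True if the header row has data columns and all of them are known strains."""
        return (len(headers) > 1
                and all(header in strains for header in headers[1:]))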
|
* Add tests to check for validity of the headers
* Add stubs for the tests
|
* Implement minimum viable versions of the `valid_value` functions for
  the average and standard-error fields.
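A sketch of what minimum-viable `valid_value` checks could look like, under the assumption that at this stage a value is valid as long as it parses as a number; the real rules may well be stricter.

    def average_valid_value(value: str):
        """Return the parsed average, or None if the cell is not numeric."""
        try:
            return float(value)
        except ValueError:
            return None

    def standard_error_valid_value(value: str):
        """In this sketch, standard errors follow the same minimal rule."""
        return average_valid_value(value)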