* Use the generic worker script as the interface for launching external
  processes.
* Add a generic worker script, whose purpose is:
  - to launch the specific worker script
  - to capture both the stdout and stderr streams and put them on redis
  In this way, we can launch both redis-aware and redis-unaware workers
  and capture their outputs or errors for later processing (a minimal
  sketch of the idea follows below).
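  The following is a minimal sketch of that pattern, assuming the
  redis-py client; the function name, hash field names and command-line
  handling are illustrative, not the project's actual scripts/worker.py.

      # Sketch: run a command and store its captured stdout/stderr in a
      # Redis hash keyed by the job id. Names are assumptions.
      import sys
      import subprocess
      from redis import Redis

      def run_worker(job_id, command, redis_url="redis://localhost:6379"):
          conn = Redis.from_url(redis_url)
          with subprocess.Popen(command,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                text=True) as process:
              stdout, stderr = process.communicate()
          conn.hset(name=job_id, mapping={
              "stdout": stdout,
              "stderr": stderr,
              "exit_code": process.returncode})
          return process.returncode

      if __name__ == "__main__":
          # e.g. python worker.py <job-id> python validate_file.py <file>
          sys.exit(run_worker(sys.argv[1], sys.argv[2:]))

  Because the parent process captures both streams, even a worker that
  knows nothing about redis (it simply prints) still has its output and
  errors available for later processing.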
* As preparation for building a new generic worker script, this commit
  renames the file validation script from 'worker.py' to
  'validate_file.py', so that the name better reflects what the script
  does.
* Provide the user with a confirmation stage where they can verify all
  the data before it is inserted into the database.
* Enable the user to create a new dataset should the need arise.
  A few extra fixes were made, such as:
  - Provide a list of average methods to choose from
  - Provide input elements for some expected fields
  - Add a new confirmation step before doing the actual data update
* Rather than using a redirect, which exposed the study id as a GET
  parameter, this commit adds an auxiliary step that allows the user to
  choose whether to continue with the new study or go back and select an
  existing study.
* Implement the UI that enables selection from existing datasets, and
  start implementing the UI that enables creation of a new dataset.
* Enable the creation of the new study, and redirect appropriately with
  the new study id.
* - Build code to populate the "Group" and "Tissue" dropdown lists
  - Enable redirects with POST data (code 307) when there is an input
    error, so that the user can fix their errors (see the sketch below)
  - Move the hidden fields to a macro to reduce repetition
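  A brief sketch of the 307-redirect idea, assuming a Flask app; the
  endpoint and field names here are invented for illustration. Unlike
  the default 302, a 307 redirect tells the browser to repeat the
  request with the same method and body, so the submitted form data is
  still available on the page the user lands on.

      from flask import Flask, flash, redirect, request, url_for

      app = Flask(__name__)
      app.secret_key = "dev-only"  # needed for flash(); placeholder value

      @app.route("/dataset/select", methods=["GET", "POST"])
      def select_dataset():
          # With a 307 redirect the POSTed fields are still present in
          # request.form and can be used to refill the form inputs.
          return "select dataset form"

      @app.route("/dataset/create", methods=["POST"])
      def create_dataset():
          if not request.form.get("dataset_name"):
              flash("Provide a name for the new dataset", "alert-error")
              # 307 preserves the request method and body, unlike 302
              return redirect(url_for("select_dataset"), code=307)
          return "dataset created"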
* Implement the select study UI.
* Rather than pinning a specific commit in the development guix.scm,
  this commit has the system automatically fetch the latest commit and
  use that for building the package object.
* Use the builtin mimetypes module, which gives better results.
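  For reference, a small example of the builtin module; the filenames
  are only examples.

      import mimetypes

      # guess_type() infers the content type from the file name's suffix
      content_type, encoding = mimetypes.guess_type("upload.zip")
      print(content_type, encoding)               # application/zip None
      print(mimetypes.guess_type("data.tar.gz"))  # ('application/x-tar', 'gzip')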
* The filetype determines the queries to be run to update the database;
  therefore, this commit adds filetype information.
* The GeneChipId value is required for the data being inserted, so this
  commit provides the UI to enable selection of the chip.
* As part of updating the database with the new data, there is a need to
  select the appropriate dataset that the data belongs to; this commit
  provides the UI to assist the user in doing that.
* The number of columns in each content line should be equal to the
  number of columns in the header line.
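  An illustrative version of that check (not the project's exact code),
  assuming tab-separated content:

      def invalid_lines(lines, separator="\t"):
          """Yield an error message for every content line whose column
          count differs from the header's."""
          header_columns = len(lines[0].split(separator))
          for line_number, line in enumerate(lines[1:], start=2):
              columns = len(line.split(separator))
              if columns != header_columns:
                  yield (f"line {line_number}: expected {header_columns} "
                         f"columns, found {columns}")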
* - Ensure errors respond with status code 400
  - Ensure error messages are displayed for any invalid zip file that is
    uploaded
* Ensure error messages are displayed if a request is made to the
  '/parse/parse' endpoint with invalid or missing data.
* - Test upload with missing or invalid data (see the sketch below)
  - Test triggering the parsing of the file
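  A hedged sketch of what such tests could look like with Flask's test
  client; the `client` fixture, the endpoint path and the form field
  name are assumptions, not the project's actual names.

      import io

      def test_upload_with_missing_file(client):
          response = client.post("/upload", data={})
          assert response.status_code == 400

      def test_upload_with_invalid_zip(client):
          response = client.post(
              "/upload",
              data={"qc_text_file": (io.BytesIO(b"not a zip"), "upload.zip")},
              content_type="multipart/form-data")
          assert response.status_code == 400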
* Enable the user to abort the background parsing of the file.
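  One way such an abort can be signalled between the web app and the
  background worker, sketched with a Redis flag; the hash field name is
  an assumption.

      from redis import Redis

      def request_abort(redis_conn: Redis, job_id: str) -> None:
          """Called from the web app when the user clicks 'Abort'."""
          redis_conn.hset(job_id, "user_aborted", "1")

      def abort_requested(redis_conn: Redis, job_id: str) -> bool:
          """Polled by the worker between chunks of parsing work."""
          return redis_conn.hget(job_id, "user_aborted") == b"1"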
* Enable the progress status page to show all the errors found at any
  point during the processing of the file.
* This reverts commit 960c1a5b831d8761a3e1716f86ded4cc5b67eea0.
  After meeting with Arthur, it was confirmed that the CSV file should
  not have values in exponential notation.
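  A small illustration of the rule being enforced (an example check, not
  necessarily the project's implementation): values such as "1.2e-3"
  should be rejected.

      import re

      # matches numbers written in exponential/scientific notation
      EXPONENTIAL = re.compile(r"^[-+]?\d+(\.\d+)?[eE][-+]?\d+$")

      def has_exponential_values(fields):
          return any(EXPONENTIAL.match(field.strip()) for field in fields)

      assert has_exponential_values(["12.3", "1.2e-3"])
      assert not has_exponential_values(["12.3", "0.0012"])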
* The CLI scripts use "standard-error", so update the web version to fit
  in with that.
* Implement code to handle errors in the processing of files.
* - README.org: document how to run the scripts manually
  - manifest.scm: remove python-rq as a dependency
  - qc_app/jobs.py: rework job launching and processing
  - qc_app/parse.py: use the reworked job processing
  - qc_app/templates/job_progress.html: display progress correctly
  - qc_app/templates/parse_results.html: display the final results
  - scripts/worker.py: new worker script
* Since progress indication is not part of the parsing, this commit
  extracts the progress indication into functions with well-defined
  input arguments that hide the progress-indication logic from the
  parsing function (see the sketch below).
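  A sketch of the separation described above, with illustrative names:
  the parser reports progress through a callable it is given, and does
  not know where the progress information ends up.

      def parse_file(lines, report_progress):
          """`report_progress` is any callable taking the number of lines
          processed so far; the parser knows nothing about Redis."""
          for processed, line in enumerate(lines, start=1):
              # ... parse `line` here ...
              report_progress(processed)

      def redis_progress_reporter(redis_conn, job_id):
          """Build a reporter that records progress in a Redis hash."""
          def report(lines_processed):
              redis_conn.hset(job_id, "lines_processed", lines_processed)
          return report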