|
Enable the user to create a new dataset should the need arise.
A few extra fixes were done, such as:
- Provide a list of average methods to choose from
- Provide input elements for some expected fields
- Add a new confirmation step before doing the actual data update
|
|
Rather than using a redirect, which exposed the study id as a GET
parameter, this commit adds an auxiliary step that lets the user
choose whether to continue with the new study or go back and select
an existing study.
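
A minimal sketch of the idea, assuming Flask (which qc_app uses) and
hypothetical route, template and field names: the study id travels in
the POST body as a hidden form field instead of the query string, and
the intermediate page offers both choices.

# Hypothetical sketch; the actual qc_app routes and templates differ.
from flask import Flask, render_template_string, request

app = Flask(__name__)

CONFIRM_PAGE = """
<form method="POST" action="{{ next_url }}">
  <!-- the study id is carried in the POST body, not exposed in the URL -->
  <input type="hidden" name="study_id" value="{{ study_id }}" />
  <button type="submit" name="action" value="continue">Continue with the new study</button>
  <button type="submit" name="action" value="back">Go back and select an existing study</button>
</form>
"""

@app.route("/study/confirm", methods=["POST"])
def confirm_study():
    """Auxiliary step shown after a new study is created."""
    return render_template_string(
        CONFIRM_PAGE, study_id=request.form["study_id"], next_url="/study/continue")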
|
|
- Implement the UI enabling selection from existing datasets
- Start implementing the UI that enables creation of a new dataset
|
|
Enable the creation of the new study, and redirect appropriately with
the new study id.
|
|
|
|
|
|
- Build code to populate the "Group" and "Tissue" dropdown lists
- Redirect with the POST data preserved (code 307) when there are input
errors, so the user can fix them (see the sketch below)
- Move hidden fields to a macro to reduce repetition
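
For reference, a 307 redirect makes the browser repeat the request with
the same method and body, so the POSTed values are still available when
the form page re-renders. A rough sketch, assuming Flask and
hypothetical endpoint and field names:

# Illustrative only; the endpoint and field names are not qc_app's actual ones.
from flask import Flask, flash, redirect, request, url_for

app = Flask(__name__)
app.secret_key = "dev"  # flash() needs a secret key

@app.route("/select-group", methods=["GET", "POST"])
def select_group():
    return "form for selecting the Group and Tissue"

@app.route("/populate", methods=["POST"])
def populate():
    if not request.form.get("group"):
        flash("Please select a group")
        # code=307 preserves the POST method and body across the redirect,
        # so the previous page can re-render with the user's input intact.
        return redirect(url_for("select_group"), code=307)
    return "data accepted"

The hidden-fields macro mentioned above would be an ordinary Jinja
{% macro %} that renders the shared hidden inputs once instead of
repeating them in every form.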
|
|
Implement the select study UI
|
|
|
|
|
|
|
|
The filetype determines the queries to be run to update the database;
this commit therefore adds the filetype information.
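
One hedged way to picture the role of the filetype: a small mapping
from filetype to the update statement to run. The filetype keys,
table and column names below are placeholders, not the project's
actual queries.

# Placeholder queries; the real statements live in the database-update code.
QUERIES = {
    "average": "INSERT INTO data_table (id, strain_id, value) VALUES (%s, %s, %s)",
    "standard-error": "INSERT INTO se_table (data_id, strain_id, error) VALUES (%s, %s, %s)",
}

def query_for(filetype: str) -> str:
    """Select the update statement based on the uploaded file's type."""
    try:
        return QUERIES[filetype]
    except KeyError as exc:
        raise ValueError(f"Unknown filetype: {filetype}") from exc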
|
|
The GeneChipId value is required for the data being inserted, so this
commit provides the UI to enable selection of the chip.
|
|
As part of updating the database with the new data, there is a need to
select the appropriate dataset that the data belongs to, and this
commit provides the UI to help the user do that.
|
|
The number of columns in each contents line should be equal to the
number of columns in the header line.
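
A minimal check of that invariant, assuming a tab-separated file; this
is a sketch rather than the project's actual parsing code:

# Sketch only; the real parser handles more cases than this.
def check_column_counts(filepath: str, separator: str = "\t"):
    """Yield an error for every content line whose column count differs from the header's."""
    with open(filepath, encoding="utf-8") as infile:
        header_columns = len(next(infile).rstrip("\n").split(separator))
        for line_number, line in enumerate(infile, start=2):
            columns = len(line.rstrip("\n").split(separator))
            if columns != header_columns:
                yield (f"Line {line_number}: expected {header_columns} columns, "
                       f"found {columns}")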
|
|
|
|
- Ensure errors respond with status code 400
- Ensure error messages are displayed for any invalid zip file that is
uploaded.
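
A hedged sketch of the zip validation, using only the standard-library
zipfile module; the endpoint, form-field name and messages are
assumptions:

# Sketch only; endpoint, field name and messages are not qc_app's actual ones.
import zipfile
from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    uploaded = request.files.get("qc_text_file")
    if uploaded is None:
        return "Invalid request: no file was uploaded", 400
    if not zipfile.is_zipfile(uploaded.stream):
        # Any upload that is not a readable zip archive is rejected up front.
        return "Invalid file: expected a zip archive", 400
    uploaded.stream.seek(0)  # rewind after the validity check before reading the contents
    return "file accepted"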
|
|
* Ensure error messages are displayed if a request is made to the
'/parse/parse' endpoint with invalid or missing data.
|
|
|
|
|
|
|
|
|
|
Enable the user to abort the background parsing of the file.
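
One way to support aborting, sketched under the assumption that the
job metadata sits in a small SQLite table the worker can poll between
chunks; the table, column and status names are hypothetical:

# Hypothetical sketch; the real job store and status values may differ.
import sqlite3

def job_aborted(dbpath: str, job_id: str) -> bool:
    """Return True if the user has asked for this job to be aborted."""
    with sqlite3.connect(dbpath) as conn:
        row = conn.execute(
            "SELECT status FROM jobs WHERE job_id = ?", (job_id,)).fetchone()
    return bool(row) and row[0] == "aborted"

def parse_in_chunks(dbpath: str, job_id: str, chunks, process_chunk):
    """Process each chunk, checking for an abort request in between."""
    for chunk in chunks:
        if job_aborted(dbpath, job_id):
            # Stop cleanly between chunks rather than killing the worker process.
            return "aborted"
        process_chunk(chunk)
    return "completed"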
|
|
|
|
Enable the progress status page to show all the errors found at any
point during the processing of the file.
|
|
|
|
The CLI scripts use "standard-error", so update the web version to be
consistent with that.
|
|
|
|
Implement code to handle errors in the processing of files.
|
|
- README.org: document how to run scripts manually
- manifest.scm: remove python-rq as a dependency
- qc_app/jobs.py: rework job launching and processing
- qc_app/parse.py: use reworked job processing
- qc_app/templates/job_progress.html: display progress correctly
- qc_app/templates/parse_results.html: display final results
- scripts/worker.py: new worker script
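
With python-rq removed, a plausible shape for the job launching is to
record a job id and start scripts/worker.py as a separate process; the
arguments passed below are assumptions:

# Sketch of launching the background worker without rq; arguments are illustrative.
import subprocess
import sys
import uuid

def launch_worker(filepath: str, filetype: str) -> str:
    """Start scripts/worker.py in its own process and return the new job's id."""
    job_id = str(uuid.uuid4())
    subprocess.Popen(  # fire and forget: the web request returns immediately
        [sys.executable, "scripts/worker.py", job_id, filepath, filetype])
    return job_id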
|
|
- Use a much faster way of parsing the strains file
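
A hedged illustration of the kind of speed-up meant here: read the
strains file into a set once, so later membership checks are
constant-time instead of rescanning the file. The file layout assumed
below (one strain name per line) is a simplification.

# Sketch only; assumes one strain name per line, which may not match the real file.
def load_strains(strains_file: str) -> frozenset:
    """Read the strain names once into a set so membership checks are O(1)."""
    with open(strains_file, encoding="utf-8") as infile:
        return frozenset(line.strip() for line in infile if line.strip())

def unknown_strains(header_names, strains_file: str):
    """Return the header names that do not appear in the strains file."""
    known = load_strains(strains_file)
    return [name for name in header_names if name not in known]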
|
|
* Use SQLite to save the job metadata and enable UI updates of the
progress for large files
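
A rough sketch of keeping job metadata in SQLite so the progress page
can poll it; the schema is an assumption, not the actual one in
qc_app/jobs.py:

# Illustrative job-metadata store; the real schema may differ.
import sqlite3

SCHEMA = """CREATE TABLE IF NOT EXISTS jobs (
    job_id TEXT PRIMARY KEY,
    status TEXT,
    progress REAL,
    message TEXT)"""

def init_db(dbpath: str):
    with sqlite3.connect(dbpath) as conn:
        conn.execute(SCHEMA)

def update_progress(dbpath: str, job_id: str, progress: float, message: str = ""):
    """Called by the worker; the progress page polls the same row."""
    with sqlite3.connect(dbpath) as conn:
        conn.execute(
            "INSERT INTO jobs (job_id, status, progress, message) "
            "VALUES (?, 'running', ?, ?) "
            "ON CONFLICT(job_id) DO UPDATE SET "
            "progress = excluded.progress, message = excluded.message",
            (job_id, progress, message))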
|
|
* Make the 'worker' functions free from needing the application
context by passing all the details they need as arguments.
* Enable the display of parsing results.
|
|
* Create and push the application context for the worker functions
* Fix the update of meta fields
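
For reference, the Flask pattern behind this commit is pushing an
application context around the worker code so that current_app (and
anything bound to it) resolves; a minimal sketch:

# Minimal illustration of pushing an application context around worker code.
from flask import Flask, current_app

app = Flask(__name__)
app.config["UPLOAD_FOLDER"] = "/tmp/uploads"  # illustrative configuration value

def worker_task():
    # current_app only resolves while an application context is active.
    return current_app.config["UPLOAD_FOLDER"]

def run_worker():
    with app.app_context():  # push the context for the duration of the job
        return worker_task()

print(run_worker())  # -> /tmp/uploads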
|
|
Enable the queuing of file-parsing jobs, since the files could be
really large and take a long time to parse and present results (see
the sketch after this list).
* etc/default_config.py: Add default config for redis server
* manifest.scm: Add redis and rq as dependencies
* qc_app/__init__.py
* qc_app/jobs.py: module to hold utilities for management of the jobs
* qc_app/parse.py: Enqueue the job; extract the file-parsing code into
a callable function
* qc_app/templates/base.html: Enable addition of extra meta tags
* qc_app/templates/job_progress.html: template to display job progress
* qc_app/templates/no_such_job.html: template to indicate when a job
id is invalid
* quality_control/parsing.py: Add the total size parsed so far
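
The rq pattern introduced here looks roughly like the following; the
Redis connection details and the parsing function are placeholders:

# Sketch of the rq-based queuing; connection details and names are illustrative.
from redis import Redis
from rq import Queue

def parse_file(filepath: str, filetype: str):
    """Long-running parse, executed by an rq worker rather than in the web request.
    In practice this lives in an importable module so the worker can load it."""
    ...

redis_conn = Redis(host="localhost", port=6379)
queue = Queue(connection=redis_conn)

# enqueue() returns a Job whose id the progress page can poll.
job = queue.enqueue(parse_file, "/tmp/upload.zip", "average")
print(job.get_id())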
|
|
* qc_app/entry.py: Pass filetype onward to parsing endpoint
* qc_app/parse.py: Call the function(s) necessary to parse a file
* quality_control/errors.py: Fix argument passing to super class
|
|
|
|
Add template(s) for the index page and some basic styling to get
started with.
|
|
Add basic scaffolding for the web interface to the quality-control
application.
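
The scaffolding amounts to a standard Flask application factory plus an
index route; a minimal sketch with illustrative names (the real qc_app
layout may differ):

# Minimal sketch; module, config and template handling are illustrative.
from flask import Flask

def create_app():
    app = Flask(__name__)
    app.config["SECRET_KEY"] = "dev"  # placeholder; real config would come from a settings file

    @app.route("/")
    def index():
        return "<h1>Quality Control</h1>"  # stands in for the index template

    return app

if __name__ == "__main__":
    create_app().run(debug=True)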
|