* wqflask/tests/unit/wqflask/api/test_correlation.py: Use a proper
database connection instead of the db connection attached to "g.db".
* wqflask/tests/unit/wqflask/snp_browser/test_snp_browser.py: Ditto.
* wqflask/wqflask/api/correlation.py: Ditto.
* wqflask/wqflask/snp_browser/snp_browser.py: Ditto.
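
For illustration, a minimal sketch of the pattern these files now follow; the `database_connection` helper and the example query are assumptions, not code from this commit:

    from wqflask.database import database_connection  # assumed helper

    def get_species_id(conn, group_name):
        # The connection is passed in explicitly rather than read
        # from the Flask request global "g.db".
        with conn.cursor() as cursor:
            cursor.execute(
                "SELECT SpeciesId FROM InbredSet WHERE Name=%s",
                (group_name,))
            result = cursor.fetchone()
            return result[0] if result else None

    # The caller owns the connection's lifetime:
    # with database_connection() as conn:
    #     species_id = get_species_id(conn, "BXD")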
|
* wqflask/wqflask/snp_browser/snp_browser.py: Remove "getLogger".
|
If the GN2_SETTINGS environment variable is, for some reason, not set,
and the application ever actually tries to get a connection to the
database, then use the default settings/configuration file.
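
A sketch of the fallback; the default path here is illustrative:

    import os

    DEFAULT_SETTINGS = "etc/default_settings.py"  # illustrative path

    def settings_file():
        # Use GN2_SETTINGS when present; otherwise fall back to the
        # default settings/configuration file.
        return os.environ.get("GN2_SETTINGS", DEFAULT_SETTINGS)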
|
Also made a large number of other fixes that proved necessary during
testing
|
Also store parents/type metadata from source genofiles
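
A sketch of pulling that metadata out of a source genofile, assuming the usual "@key: value" header lines at the top of a .geno file:

    def read_geno_metadata(path):
        # Collect headers such as @type, @mat and @pat from the top
        # of a .geno file; stop at the first data row.
        metadata = {}
        with open(path) as stream:
            for line in stream:
                line = line.strip()
                if line.startswith("@") and ":" in line:
                    key, _, value = line[1:].partition(":")
                    metadata[key.strip()] = value.strip()
                elif line and not line.startswith("#"):
                    break
        return metadata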
|
- I was mixing up source/target genofiles previously; the JSON file is for the source genofiles
- references to the app context are removed in favor of just taking input as arguments or environment variables
- Updated example commands
|
generate_new_genofiles function
|
- Removed some unused code
- Strip marker genotype to avoid newline character at end
- Convert zip to list for marker genotypes
- Add typing to group_samples
- Rename strain_genofile to source_genofile
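
A condensed sketch of the middle three fixes, with the tab-separated field layout assumed:

    from typing import List, Tuple

    def parse_marker_row(line: str,
                         samples: List[str]) -> Tuple[str, list]:
        # strip() drops the trailing newline that otherwise ends up
        # in the last genotype; list() materializes zip() so the
        # genotypes can be indexed and reused.
        fields = line.strip().split("\t")
        marker, genotypes = fields[0], fields[1:]
        return marker, list(zip(samples, genotypes))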
|
gen_ind_genofiles.py is a command-line script that generates genotype files
for groups of individuals/samples, taking a source .geno or .json file and
a target 'dummy' .geno file as input.
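
A hypothetical skeleton of the script's interface; the real argument names may differ:

    import argparse

    def parse_args():
        parser = argparse.ArgumentParser(
            description="Generate genotype files for groups of "
                        "individuals/samples.")
        parser.add_argument("source",
                            help="source .geno or .json genofile")
        parser.add_argument("target",
                            help="target 'dummy' .geno file")
        return parser.parse_args()

    if __name__ == "__main__":
        args = parse_args()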
|
* wqflask/wqflask/metadata_edits.py: Import "extract_invalid_csv_headers"
and "get_allowable_sampledata_headers".
(display_phenotype_metadata): Pass the allowable headers to the
template.
(update_phenotype): If a user uploads data with a column header that's
not in the db, don't upload the file, and send a warning message.
* wqflask/wqflask/templates/edit_phenotype.html: List the allowable
headers in the template.
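
A standalone sketch of the same guard; the two imported helpers' signatures are not shown in this log, so the logic is inlined here:

    import csv
    import io
    from flask import flash

    def csv_headers_valid(csv_text, allowable_headers):
        # Reject the upload when any column header is not in the
        # allowable set, and warn the user.
        headers = next(csv.reader(io.StringIO(csv_text)), [])
        invalid = [h for h in headers if h not in allowable_headers]
        if invalid:
            flash("File not uploaded; invalid header(s): "
                  + ", ".join(invalid), "warning")
            return False
        return True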
|
This is a WIP.
|
Using "database_connection" within a context-manager makes sure that
the SQL connection that is created is closed.
* wqflask/wqflask/metadata_edits.py (display_probeset_metadata):
Connect to the db within a context-manager.
(update_phenotype): Ditto.
(update_probeset): Ditto.
(approve_data): Ditto.
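
A minimal sketch of such a helper, assuming the MySQLdb driver:

    import contextlib
    import MySQLdb  # assumed driver

    @contextlib.contextmanager
    def database_connection(host, user, password, db_name):
        # Yield a connection and guarantee it is closed, even when
        # the body raises.
        conn = MySQLdb.connect(host=host, user=user,
                               passwd=password, db=db_name)
        try:
            yield conn
        finally:
            conn.close()

    # with database_connection("localhost", "gn2", "pass", "db_webqtl") as conn:
    #     ...  # run queries; the connection closes on exit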
|
* wqflask/wqflask/metadata_edits.py (approve_data): Explicitly close a
connection after it is used.
|
* wqflask/wqflask/metadata_edits.py (approve_data): Update query
strings.
|
* wqflask/wqflask/metadata_edits.py: Replace imports that start with
`db.traits` with `db.sample_data`.
|
* wqflask/wqflask/metadata_edits (update_phenotype): The logic for
generating the csv_diff and removing insignificant values from the edits
was moved to gn3; use those functions instead of the manual way.
|
Remove the `wqflask.utility.tools` and retrieve the `SQL_URI` setting
directly from the environment or the settings file. This breaks the
circular imports and makes the `wqflask.database` module standalone.
Move the `parse_db_url` function to the `wqflask.database` module,
where it more sensibly belongs.
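
A sketch of what `parse_db_url` can look like with only the standard library:

    from urllib.parse import urlparse

    def parse_db_url(sql_uri):
        # e.g. "mysql://user:pass@host/db_webqtl" ->
        #      ("host", "user", "pass", "db_webqtl")
        parsed = urlparse(sql_uri)
        return (parsed.hostname, parsed.username,
                parsed.password, parsed.path[1:])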
|
Use the `with` context manager with database connections and cursors
to ensure that they are closed once they are no longer needed.
Where it was not feasible to use the `with` context manager without a
huge refactor/rewrite, the cursors and connections are closed manually.
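
Both patterns, sketched; this reuses the `database_connection` helper sketched above, and assumes MySQLdb cursors, which also support the with-statement and close on exit:

    def fetch_species_names(conn):
        # Preferred: the cursor closes automatically.
        with conn.cursor() as cursor:
            cursor.execute("SELECT Name FROM Species")
            return [row[0] for row in cursor.fetchall()]

    def fetch_species_names_manual(conn):
        # Fallback where `with` is not feasible: close manually.
        cursor = conn.cursor()
        try:
            cursor.execute("SELECT Name FROM Species")
            return [row[0] for row in cursor.fetchall()]
        finally:
            cursor.close()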
|
This commit gets rid of the multi-step partial correlations process
replacing it with a single-step process.
Summary of changes:
* wqflask/wqflask/collect.py: Add function to format the trait details
in a format that is usable for the partial correlations system.
* wqflask/wqflask/database.py: Provide function to create a connection
to the database
* wqflask/wqflask/partial_correlations_views.py: Rework the code to
enable the one-step process for the partial correlations
computations
* wqflask/wqflask/static/new/javascript/partial_correlations.js: Get
rid of code that supported the multi-step process
* wqflask/wqflask/templates/collections/view.html: Remove inconsistent
UI elements. Attach traits info in a form usable for the partial
correlations
* wqflask/wqflask/templates/partial_correlations.html: Delete the HTML
template
* wqflask/wqflask/templates/partial_correlations/pcorrs_error.html:
Provide an HTML template to display errors in the partial
correlations computation process
* wqflask/wqflask/templates/partial_correlations/pcorrs_poll_results.html:
UI template to give the user feedback while the computations
continue in the background
* wqflask/wqflask/templates/partial_correlations/pcorrs_results_presentation.html:
UI template to present the results of a successful computation
* wqflask/wqflask/templates/partial_correlations/pcorrs_select_operations.html:
UI template to trigger the partial correlations computations
* wqflask/wqflask/templates/tool_buttons.html: Add the partial
correlations button to the template to ensure a consistent look and
feel
|
* Unlink the cache file on JSONDecodeError
* Fix to avoid caching empty dicts
* Fix the check for null dicts
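
A sketch of the three fixes together, with the cache-file handling assumed:

    import json
    import os

    def read_cached_results(path):
        # Drop cache files that fail to parse, and treat empty or
        # null dicts as a cache miss.
        try:
            with open(path) as cache_file:
                results = json.load(cache_file)
        except json.JSONDecodeError:
            os.unlink(path)
            return None
        return results or None

    def write_cached_results(path, results):
        # Avoid caching empty dicts in the first place.
        if results:
            with open(path, "w") as cache_file:
                json.dump(results, cache_file)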
|
Hyphens in fulltext searches were causing problems, but a recent commit
I made to fix the issue apparently had some side effects for other types
of searches, in addition to making such searches somewhat slower.
Apparently the issue wasn't just the hyphens, but also the text to
either side of the hyphen being shorter than the minimum word length
(which is either 2 or 3 for us, I can't remember). To try and address
this, I do a regular expression check for a pattern with text of <3
length to either side of a hyphen, and when that's the case I add quotes
around the search term plus an asterisk, which seems to be necessary to
get it to not treat the hyphen as a delimiter and to correctly detect
the search term.
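
A sketch of the check; the exact pattern is an assumption:

    import re

    def quote_short_hyphenated(term):
        # 'PD-1' -> '"PD-1*"': quoting plus an asterisk keeps MySQL's
        # fulltext search from treating the hyphen as a delimiter or
        # dropping the short tokens around it.
        if re.search(r"\b\w{1,2}-\w{1,2}\b", term):
            return '"{}*"'.format(term)
        return term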
|
This CSS is overwritten by CSS from trait_list.css and show_trait.css
|
ProbeSet queries previously weren't dealing with aliases correctly,
because they did a MATCH/AGAINST against the ProbeSet.alias field, but
that field usually contains a list of gene symbols separated by
semi-colons (so it wouldn't detect the alias unless there was only a
single alias).
To fix this, I added some LIKE conditions, searching for the possible
variations. This is a little awkward, because I needed to make sure to
avoid a situation where, for example, an alias like 'LPD-1' matches a
search for 'PD-1'. I don't think the way it currently works is
efficient, but I don't know of any good alternative without changing the
way we store aliases in the database.
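
A sketch of the variations, using 'PD-1' as the example and assuming the aliases are separated by "; ":

    # Anchoring each pattern against the separators prevents e.g.
    # 'LPD-1' from matching a search for 'PD-1'.
    ALIAS_CLAUSE = ("ProbeSet.alias = %(sym)s"
                    " OR ProbeSet.alias LIKE %(head)s"
                    " OR ProbeSet.alias LIKE %(tail)s"
                    " OR ProbeSet.alias LIKE %(mid)s")

    def alias_params(symbol):
        return {"sym": symbol,
                "head": symbol + "; %",
                "tail": "%; " + symbol,
                "mid": "%; " + symbol + "; %"}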