Fix bugs with setting up the selected traits used to filter the search
results.
|
|
Consistently encode all values for the top-level keys stored in Redis to avoid
issues with JSON encoding/decoding.
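A minimal sketch of the convention, assuming a hash-style layout and redis-py; the key and field names here are illustrative, not the actual GN3 ones:

```python
import json
import redis

conn = redis.Redis()  # assumes a local Redis instance

def save_value(top_level_key: str, field: str, value) -> None:
    # Always JSON-encode on write, even for plain strings, so that reads
    # can unconditionally json.loads without guessing the stored type.
    conn.hset(top_level_key, field, json.dumps(value))

def load_value(top_level_key: str, field: str):
    raw = conn.hget(top_level_key, field)
    return json.loads(raw) if raw is not None else None
```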
|
|
We need to search through the available phenotype traits in the database when
linking traits to user groups. Unfortunately, the Xapian search indexes do
not (and should not) include the internal identifiers we use to disambiguate
the traits.
At the same time, we do not want the search results to present the user with
traits that have already been linked to a user group.
The script in this commit, together with the modified queries for fetching the
phenotype data, forms a "hack" of sorts around the way the search works,
ensuring we do not present the user with "non-actionable" (already linked)
traits in the search results.
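A rough sketch of the filtering idea, assuming hypothetical data shapes (the actual script and queries in this commit are more involved):

```python
def remove_linked(search_results: list, linked_trait_names: set) -> list:
    """Drop traits that some user group has already linked, so that only
    "actionable" traits are shown to the user."""
    return [trait for trait in search_results
            if trait["name"] not in linked_trait_names]
```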
|
|
Remove the deprecated function and fix a myriad of bugs that arise from its
removal.
Issue: https://issues.genenetwork.org/issues/bugfix_coupling_current_app_and_db_utils
|
|
There is a need to run external scripts with the same configuration as the
application, but without coupling the scripts to the application.
In this case, we provide the needed configuration directly on the CLI and
modify the existing `gn3.db_utils.database_connection` function so that it can
work either coupled to the app or standalone.
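A rough sketch of the dual-mode idea, not the exact GN3 signature; MySQLdb and the `SQL_URI` configuration key are assumptions here:

```python
from urllib.parse import urlparse
import MySQLdb

def database_connection(sql_uri: str = None):
    """Connect using an explicit URI (scripts) or the app's config (server)."""
    if sql_uri is None:  # coupled mode: running inside the Flask app
        from flask import current_app
        sql_uri = current_app.config["SQL_URI"]  # assumed config key
    parsed = urlparse(sql_uri)
    return MySQLdb.connect(
        db=parsed.path.strip("/"), user=parsed.username,
        passwd=parsed.password or "", host=parsed.hostname)
```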
|
|
* scripts/index-genenetwork (worker_queue): Set default number of workers to 1
if the number of CPUs cannot be determined.
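The fallback in isolation: `os.cpu_count()` returns None when the count cannot be determined, so default to a single worker.

```python
import os

number_of_workers = os.cpu_count() or 1  # fall back to one worker
```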
|
|
* scripts/index-genenetwork: Import Callable, Generator, Iterable and List
from typing. Type hint all functions.
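Purely illustrative, not code from the script: a small pair of functions type-hinted with those imports.

```python
from typing import Callable, Generator, Iterable, List

def batch(items: Iterable[str], size: int) -> Generator[List[str], None, None]:
    """Yield successive lists of at most `size` items."""
    chunk: List[str] = []
    for item in items:
        chunk.append(item)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def apply_to_batches(items: Iterable[str], size: int,
                     handler: Callable[[List[str]], None]) -> None:
    """Run `handler` on each batch of items."""
    for chunk in batch(items, size):
        handler(chunk)
```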
|
|
* scripts/index-genenetwork: New file.
* setup.py (install_requires): Add click, pymonad and xapian-bindings.
(scripts): Add scripts/index-genenetwork.
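A sketch of the corresponding setup.py entries; version pins and the rest of the file are omitted, and the exact lists in GN3 may differ:

```python
from setuptools import setup

setup(
    name="genenetwork3",
    install_requires=[
        "click",
        "pymonad",
        "xapian-bindings",
    ],
    scripts=["scripts/index-genenetwork"],
)
```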
|
|
* README.md: Update mypy's invocation.
* scripts/argparse_actions.py: New file; implement a custom FileCheck action
for argparse.
* scripts/sample_correlations.py: New file; implement a new script to run
sample correlations in an external process.
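A hypothetical sketch of such an action (the real FileCheck in scripts/argparse_actions.py may differ): reject paths that do not exist.

```python
import argparse
import os

class FileCheck(argparse.Action):
    """Check that the file passed for this argument actually exists."""
    def __call__(self, parser, namespace, values, option_string=None):
        if not os.path.exists(values):
            parser.error(f"The file '{values}' does not exist.")
        setattr(namespace, self.dest, values)

parser = argparse.ArgumentParser()
parser.add_argument("traits_file", action=FileCheck)  # illustrative argument
```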
|
|
To reduce the chances of the system failing because the external process is
launched with the wrong parameters, add a parsing stage that converts the
method from the UI into a form acceptable to the CLI script.
* gn3/commands.py: Parse the method from the UI.
* scripts/partial_correlations.py: Simplify the acceptable methods.
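One way to do that parsing step (illustrative; the UI labels and CLI forms shown here are assumptions, and gn3/commands.py may structure this differently):

```python
def parse_method(ui_method: str) -> str:
    """Convert the method string sent by the UI into the CLI script's form."""
    mapping = {
        "pearson's r": "pearsons",      # assumed UI label -> CLI form
        "spearman's rho": "spearmans",
    }
    return mapping[ui_method.strip().lower()]
```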
|
|
- Have "Pearson's r" and "Spearman's rho" as the only valid choices for the
partial correlations
|
|
Use the new external script to run the partial correlations for both cases,
i.e.
- against an entire dataset, or
- against selected traits
|
|
* Add a new script to compute the partial correlations against:
  - a select list of traits, or
  - an entire dataset
  depending on the specified subcommand. This new script is meant to supersede
  the `scripts/partial_correlations.py` script.
* Fix the check for errors.
* Reorganise the order of arguments for the
  `partial_correlations_with_target_traits` function: move the `method`
  argument before the `target_trait_names` argument so that the common
  arguments in the partial correlation computation functions share the same
  order.
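A hypothetical outline of the subcommand layout; the subcommand and argument names here are illustrative, not necessarily those used in the new script:

```python
import click

@click.group()
def pcorrs():
    """Compute partial correlations."""

@pcorrs.command("against-db")
@click.argument("method")
def against_db(method):
    """Run the computation against an entire dataset."""

@pcorrs.command("with-target-traits")
@click.argument("method")
@click.argument("target_trait_names", nargs=-1)
def with_target_traits(method, target_trait_names):
    """Run the computation against selected traits; note that `method`
    comes before `target_trait_names`, mirroring the function signature."""
```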
|
|
* Extract the common error checking code into a separate function
* Rename the function to make its use clearer
|
|
Merge https://github.com/zsloan/genenetwork3 into feature/add_rqtl_pairscan
|
|
Now it checks for pairscan first, just in case `interval` ends up being
passed (which is an irrelevant parameter for pairscan).
Also added a couple more verbose prints.
|
|
We need the list of markers/pseudomarkers and their positions.
|
|
Set the step-size to 10 cM for pair-scan.
|
|
(pairscan) is used
- For pairscan default to using step 20 (subject to change, but some
step is required during calc.genoprob to make it run fast enough)
- Added some new verbose prints
|
|
Use the `with` context manager to open database connections, so as to ensure
that those connections are closed once the call completes. This hopefully
avoids the 'too many connections' error.
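A sketch of the pattern, wrapping the connection in contextlib.closing so it is closed even when the query raises; the table and column names are illustrative:

```python
from contextlib import closing
from gn3.db_utils import database_connection

def fetch_trait_names(sql_uri: str) -> list:
    # closing() guarantees conn.close() runs on exit, which is what keeps
    # MySQL from accumulating connections until it refuses new ones.
    with closing(database_connection(sql_uri)) as conn:
        with conn.cursor() as cursor:
            cursor.execute("SELECT Name FROM PublishXRef LIMIT 10")
            return [row[0] for row in cursor.fetchall()]
```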
|
|
Run the partial correlations code in an external Python process, decoupling it
from the server and making it asynchronous.
Summary of changes:
* gn3/api/correlation.py:
  - Remove response processing code
  - Queue partial corrs processing
  - Create new endpoint to get results
* gn3/commands.py:
  - Compose the pcorrs command to be run in an external process
  - Enable running of subprocess commands with list args
* gn3/responses/__init__.py: New module indicator file
* gn3/responses/pcorrs_responses.py: Hold the response processing code
  extracted from the ~gn3/api/correlation.py~ file
* scripts/partial_correlations.py: CLI script to process the pcorrs
* sheepdog/worker.py:
  - Add the *genenetwork3* path at the beginning of the ~sys.path~ list to
    override any GN3 in the site-packages
  - Add any environment variables to be set for the command to be run
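A very rough sketch of the queue/run split described above; the Redis key names and the worker loop are assumptions, and the real GN3 code is more involved:

```python
import json
import subprocess
from uuid import uuid4
import redis

def queue_pcorrs_command(conn: redis.Redis, cmd: list) -> str:
    """Store the command (kept as a list of args) and enqueue its id."""
    cmd_id = str(uuid4())
    conn.hset(cmd_id, mapping={"cmd": json.dumps(cmd), "status": "queued"})
    conn.rpush("GN3::job-queue", cmd_id)  # illustrative queue name
    return cmd_id

def run_queued_command(conn: redis.Redis, cmd_id: str, env: dict) -> None:
    """Roughly what a worker like sheepdog does with a queued entry."""
    cmd = json.loads(conn.hget(cmd_id, "cmd"))
    result = subprocess.run(cmd, capture_output=True, text=True,
                            env=env, check=False)
    conn.hset(cmd_id, mapping={"status": "done", "result": result.stdout})
```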
|
|
Quote the shell variables to prevent globbing and word splitting.
Deactivate this check for the specific lines that require intentional word
splitting
|
|
The rqtl_wrapper script was throwing an error when only a single
categorical covariate was used. This is apparently because "covars[,name]"
throws an error in such a situation. Using just "covars" in such a
situation prevents the error. So I just added an if statement checking
the number of covariates. There might be some better way to deal with
this in R, but this is the best I could come up with.
|
|
* Add r as a GN3 input
* Calculate powers from user input
* Fix merge conflict
|
|
* Add biweight reimplementation with pingouin
* Delete biweight scripts and tests
* Add python-pingouin to the guix file
* Delete biweight paths
* mypy fix: pingouin missing imports
* PEP 8 formatting and pylint fixes
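A sketch of what the pingouin-based reimplementation amounts to (the GN3 wrapper differs in detail); pingouin.corr exposes the biweight midcorrelation via method="bicor":

```python
import pingouin

def compute_biweight(x, y) -> tuple:
    """Return (correlation coefficient, p-value) for the two vectors."""
    results = pingouin.corr(x, y, method="bicor")
    return (results["r"].iloc[0], results["p-val"].iloc[0])
```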
|
|