The `fix_strains` function works on the trait data, not the basic trait
info. This commit fixes the arguments passed to the function, as well as some
bugs in the function itself.
|
|
* Add a new script to compute the partial correlations against:
  - a select list of traits, or
  - an entire dataset
  depending on the specified subcommand (see the sketch after this list). This
  new script is meant to supersede the `scripts/partial_correlations.py` script.
* Fix the check for errors
* Reorganise the order of arguments for the
  `partial_correlations_with_target_traits` function: move the `method`
  argument before the `target_trait_names` argument so that the common
  arguments in the partial-correlation computation functions share the same
  order.
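A rough sketch of what such a subcommand-driven CLI could look like (argparse-based; the subcommand and argument names here are illustrative assumptions, not the actual script's interface):

```python
import argparse

def parse_cli_args():
    """Parse the CLI arguments for the two (hypothetical) subcommands."""
    parser = argparse.ArgumentParser(
        description="Compute partial correlations")
    subparsers = parser.add_subparsers(dest="subcommand", required=True)

    against_traits = subparsers.add_parser(
        "against-traits", help="compute against a select list of traits")
    against_traits.add_argument("primary_trait")
    against_traits.add_argument("control_traits", help="comma-separated list")
    against_traits.add_argument("method")
    against_traits.add_argument("target_traits", help="comma-separated list")

    against_db = subparsers.add_parser(
        "against-db", help="compute against an entire dataset")
    against_db.add_argument("primary_trait")
    against_db.add_argument("control_traits", help="comma-separated list")
    against_db.add_argument("method")
    against_db.add_argument("target_database")

    return parser.parse_args()

if __name__ == "__main__":
    args = parse_cli_args()
    print(f"running subcommand: {args.subcommand}")
```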
|
|
Rework the code to process the traits in a single iteration to improve
performance.
|
|
Return generator objects rather than pre-computed tuples to reduce the number
of iterations needed to process the data, and thus improve the performance of
the system somewhat.
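For illustration only (not the repository's code), the difference between the two shapes:

```python
# A generator lets the caller consume results lazily in its own loop, instead
# of first materialising a tuple that is then iterated over a second time.
def trait_values_eager(traits):
    # builds the whole tuple up front; the caller then loops over it again
    return tuple(trait["value"] for trait in traits)

def trait_values_lazy(traits):
    # yields items one at a time; the data is walked only once, by the caller
    return (trait["value"] for trait in traits)

traits = [{"value": val} for val in (1.5, 2.0, 3.25)]
total = sum(trait_values_lazy(traits))  # consumed in a single pass
```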
|
|
Enable the endpoint to actually compute partial correlations with selected
target traits rather than against an entire dataset.
Fix some issues caused by a recent refactor that broke partial correlations
against a dataset.
|
|
Compute partial correlations against a selection of traits rather than against
an entire dataset.
|
|
* Extract the common error checking code into a separate function
* Rename the function to make its use clearer
|
|
Remove an unnecessary looping construct to speed up the partial correlations
somewhat.
|
|
* Remove a module that is no longer in use
|
|
See: <https://ci.genenetwork.org/jobs/genenetwork3-pylint/126>
* gn3/computations/rqtl.py: Run `black gn3/computations/rqtl.py`. Also,
manually fix other pylint issues.
|
|
position in pair-scan results + return only the sorted top 500 results
|
|
it's needed to store the proximal/distal markers for each position
|
|
Dict and List are used for the pair-scan figure and the table showing the results, respectively
|
|
in pairscan results + renamed `process_rqtl_output` to `process_rqtl_mapping` to distinguish between that and pairscan
|
|
* compute zscore function
* test case for computing zscore
* function to compute pca
* generate scree plot data
* generate new pca trait data from zscores and eigen_vec
* remove redundant functions
* generate factor loading table data
* generate pca temp dataset dict
* variable naming and error fixes
* unit test for processing factor loadings
* minor fixes for generating temp pca dataset
* pass datetime as argument to generate_pca temp dataset function
* add unittest for caching pca datasets
* cache temp datasets
* ignore missing imports for sklearn
* mypy fixes
* pylint fixes
* refactor tests for pca
* remove unused imports
* fix for generating pca traits vals
* mypy and code refactoring
* pep8 formatting and add docstrings
* remove comments /pep8 formatting
* sort eigenvectors based on eigenvalues
* minor fix for zscores
* fix for rounding variance ratios
* refactor tests
* rename module to pca
* rename datasets to traits
* fix failing tests
* fix caching function
* fix return of x and y coordinates for the scree plot
* expand exception scope
* fix for deprecated numpy.matrix function
* fix for failing tests
* pep8 fixes
* remove comments
* fix merge conflict
* pylint fixes
* rename module name to test_pca
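A minimal sketch of the pipeline the list above describes: z-score the trait data, run PCA, and derive scree-plot coordinates from the explained-variance ratios. It assumes numpy, scipy and scikit-learn, and the function names are illustrative rather than the actual gn3 implementation.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def compute_zscores(trait_values: np.ndarray) -> np.ndarray:
    """Standardise each trait (row) to mean 0 and unit variance."""
    return stats.zscore(trait_values, axis=1)

def compute_pca(zscores: np.ndarray):
    """Fit PCA on the z-scored data (samples as rows) and return model + scores."""
    pca = PCA()
    scores = pca.fit_transform(zscores.T)
    return pca, scores

def scree_plot_data(pca: PCA):
    """(x, y) coordinates for a scree plot: component index vs % variance."""
    variance = np.round(pca.explained_variance_ratio_ * 100, 2)
    return list(range(1, len(variance) + 1)), variance.tolist()

# 5 traits measured across 20 samples, with random values for the example
trait_values = np.random.default_rng(0).normal(size=(5, 20))
pca, scores = compute_pca(compute_zscores(trait_values))
x_coords, y_coords = scree_plot_data(pca)
```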
|
|
Fix some issues caught by tests, due to the changes introducing the hand-off of
the partial correlations computations to an external process.
Fix some issues due to the changes that introduce context managers for
database connections.
Update some tests to take the above two changes into consideration.
|
|
correlations
In the original version of the if statement*, I believe it was
interpreted as `if a_val and (b_val is not None)`. This caused
values of 0 for `a_val` (the primary trait's values) to be evaluated as
False.
I changed it to compare both `a_val` and `b_val` to None. This seems to have
fixed the issue.
* `if (a_val and b_val is not None)`
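The pitfall in isolation, using the same `a_val`/`b_val` names as above:

```python
# A value of 0 is falsy, so the old check drops valid (0, x) sample pairs.
a_val, b_val = 0, 2.5

old_check = bool(a_val and b_val is not None)        # the original condition
new_check = a_val is not None and b_val is not None  # the fixed condition

assert old_check is False  # the bug: the pair (0, 2.5) is treated as missing
assert new_check is True   # the fix keeps it
```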
|
|
Context managers should be preferred when allocating resources.
* gn3/computations/wgcna.py (stream_cmd_output): Call Popen with a context
manager.
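A minimal sketch of the pattern, assuming a plain command runner rather than the actual `stream_cmd_output` signature:

```python
import subprocess

def stream_output_sketch(cmd):
    # The context manager waits for the process and closes its pipes on exit,
    # even if the streaming loop raises.
    with subprocess.Popen(cmd, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT) as process:
        for line in iter(process.stdout.readline, b""):
            print(line.decode("utf-8"), end="")

stream_output_sketch(["echo", "hello"])
```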
|
|
When the encoding is not specified explicitly, the system default encoding is
used. This is not recommended.
* gn3/computations/ctl.py (call_ctl_script),
gn3/computations/gemma.py (generate_pheno_txt_file),
gn3/computations/parsers.py (parse_genofile),
gn3/computations/partial_correlations.py (partial_correlations_fast),
gn3/computations/rqtl.py (process_rqtl_output, process_perm_output),
gn3/computations/wgcna.py (dump_wgcna_data, call_wgcna_script),
gn3/fs_helpers.py (jsonfile_to_dict): Explicitly specify UTF-8 to be the file
encoding.
* tests/unit/computations/test_gemma.py (TestGemma.test_generate_pheno_txt_file),
tests/unit/computations/test_wgcna.py (TestWgcna.test_create_json_file): Test
for the call to open with the encoding='utf-8' argument.
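The shape of the change and its test, sketched with hypothetical names (`read_json_text` and the test below are illustrations, not the functions listed above):

```python
from unittest import mock

def read_json_text(json_file):
    # Explicit encoding instead of relying on the platform default.
    with open(json_file, encoding="utf-8") as _file:
        return _file.read()

def test_opens_file_with_utf8_encoding():
    with mock.patch("builtins.open",
                    mock.mock_open(read_data='{"a": 1}')) as mocked_open:
        read_json_text("data.json")
        mocked_open.assert_called_once_with("data.json", encoding="utf-8")
```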
|
|
* Use `with` in place of plain `open`
* Use f-strings in place of `str.format()`
* Remove string interpolation from queries: provide data as query parameters
instead
* Other minor fixes
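A sketch of the query change; the table, column and function names are illustrative, not the actual gn3 code:

```python
def fetch_trait_name(conn, trait_id):
    # Provide the value as a query parameter rather than interpolating it into
    # the SQL string (which is error-prone and injection-prone).
    with conn.cursor() as cursor:
        cursor.execute("SELECT Name FROM ProbeSet WHERE Id = %s", (trait_id,))
        row = cursor.fetchone()
    return f"trait name: {row[0]}" if row else f"no trait with id {trait_id}"
```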
|
|
Test that the partial correlations endpoint handles a mix of existing and
non-existing control traits gracefully and issues a warning to the user.
Summary of changes:
* gn3/computations/partial_correlations.py: Issue a warning for all
non-existing control traits
* gn3/db/partial_correlations.py: Update the queries to use explicit
`INNER JOIN`s instead of comma-separated lists of tables (see the sketch after
this list)
* tests/integration/conftest.py: Add `db_conn` fixture to provide a database
connection to the tests. This will probably be changed in the future to
connect to a temporary database for tests.
* tests/integration/test_partial_correlations.py: Add test to check for
correct behaviour with a mix of existing and non-existing control traits
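The join rewrite, illustrated with example table names rather than the exact gn3 queries:

```python
# Before: implicit comma-join, with the join condition buried in WHERE.
COMMA_JOIN_QUERY = """
    SELECT ProbeSet.Name, ProbeSetXRef.DataId
    FROM ProbeSet, ProbeSetXRef
    WHERE ProbeSet.Id = ProbeSetXRef.ProbeSetId
    AND ProbeSet.Name = %s"""

# After: explicit INNER JOIN with an ON clause; WHERE only filters.
INNER_JOIN_QUERY = """
    SELECT ProbeSet.Name, ProbeSetXRef.DataId
    FROM ProbeSet
    INNER JOIN ProbeSetXRef ON ProbeSet.Id = ProbeSetXRef.ProbeSetId
    WHERE ProbeSet.Name = %s"""
```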
|
|
Test that if the endpoint is queried and not a single one of the control
traits exists in the database, then the endpoint will respond with a
404 (not-found) status code.
Summary of changes:
* gn3/computations/partial_correlations.py: Check whether any control trait is
found. If none is found, return a "not-found" message.
* gn3/db/partial_correlations.py: Fix a bug in the Geno query.
* tests/integration/test_partial_correlations.py: Add a test for non-existing
control traits. Rename the function to make it clearer what it is testing
for. Remove obsolete comments.
|
|
Test that the partial correlations endpoint responds with an appropriate
"not-found" message and the corresponding 404 status code in the case where a
request is made and the primary trait requested does not exist in the
database.
Summary of the changes in each file:
* gn3/api/correlation.py: generalise the building of the response (sketched
below)
* gn3/computations/partial_correlations.py: return with a "not-found" if the
primary trait does not exist in the database
* gn3/db/partial_correlations.py: Fix a number of bugs that led to exceptions
in the case that the primary trait did not exist
* pytest.ini: register a `slow` pytest marker
* tests/integration/test_partial_correlations.py: Add a new test to check for
an appropriate 404 response in case of a primary trait that does not exist
in the database.
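One way the generalised response building could look; a sketch, not the actual gn3/api/correlation.py code:

```python
from flask import jsonify

STATUS_CODES = {"success": 200, "not-found": 404, "error": 500}

def build_response(result):
    # The computation returns a dict carrying a "status" key; the endpoint
    # maps that status onto an HTTP status code in one place.
    return jsonify(result), STATUS_CODES.get(result.get("status"), 200)

# e.g. in the endpoint, when the primary trait does not exist:
# return build_response({
#     "status": "not-found",
#     "message": f"The primary trait '{trait_name}' was not found."})
```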
|
|
Add property tests using pytest and hypothesis to test that the expected
properties hold for the
`gn3.computations.partial_correlations.dictify_by_samples`
function.
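The shape of such a property test; the `dictify` function below is a simplified stand-in so the example is self-contained, not the real `dictify_by_samples`:

```python
from hypothesis import given, strategies as st

def dictify(samples, values):
    # Stand-in: pair sample names with their values.
    return dict(zip(samples, values))

@given(st.lists(st.text(min_size=1), unique=True),
       st.lists(st.floats(allow_nan=False)))
def test_keys_come_from_the_sample_names(samples, values):
    result = dictify(samples, values)
    assert set(result) <= set(samples)
    assert len(result) == min(len(samples), len(values))
```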
|
|
Do all the work in a single iteration to avoid unnecessary iterations that
hamper performance.
|
|
Web servers are long-running processes, and Python is not very good at
cleaning up after itself, especially in forked processes; this leads to memory
errors in the web server after a while.
This commit removes the use of multiprocessing to avoid such failures.
|
|
This commit refactors the code to make it possible to use multiprocessing to
speed up the computation of the partial correlations.
The major refactor is to move the `__compute_trait_info__` function to the
top level of the module, and to provide it with all the necessary context via
the new arguments.
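Why the function must live at the top level: `multiprocessing` pickles the callable it ships to worker processes, and only module-level functions pickle cleanly, so the extra context has to be passed in explicitly. A sketch with illustrative names:

```python
from functools import partial
from multiprocessing import Pool

def compute_trait_info(context, trait):
    # Module-level, so it is picklable; the context arrives as an argument
    # instead of being captured from an enclosing scope.
    return (trait, context["method"])

def compute_all(traits, context):
    with Pool(processes=4) as pool:
        return pool.map(partial(compute_trait_info, context), traits)

if __name__ == "__main__":
    print(compute_all(["trait1", "trait2"], {"method": "pearson"}))
```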
|
|
In Python 3, when slicing,
`seq[:min(some_val, len(seq))] == seq[:some_val]`
because Python will just return a copy of the entire sequence if `some_val`
happens to be greater than the length of the sequence.
This commit removes the unnecessary call to `min()`.
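The equivalence, demonstrated:

```python
seq = [10, 20, 30]
some_val = 100
# Slicing past the end of a sequence is safe, so the min() guard adds nothing.
assert seq[:min(some_val, len(seq))] == seq[:some_val] == [10, 20, 30]
```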