position in pair-scan results + return only the sorted top 500 results
|
|
it's needed to store the proximal/distal markers for each position
|
|
Dict and List, used respectively for the pair-scan figure and the table showing the results
|
|
in pairscan results + renamed process_rqtl_output to process_rqtl_mapping to distinguish between that and pairscan
|
|
* compute zscore function
* test case for computing zscore
* function to compute pca
* generate scree plot data
* generate new pca trait data from zscores and eigen_vec
* remove redundant functions
* generate factor loading table data
* generate pca temp dataset dict
* variable naming and error fixes
* unit test for processing factor loadings
* minor fixes for generating temp pca dataset
* pass datetime as argument to generate_pca temp dataset function
* add unittest for caching pca datasets
* cache temp datasets
* ignore missing imports for sklearn
* mypy fixes
* pylint fixes
* refactor tests for pca
* remove unused imports
* fix for generating pca traits vals
* mypy and code refactoring
* pep8 formatting and add docstrings
* remove comments /pep8 formatting
* sort eigen vectors based on eigen values
* minor fix for zscores
* fix for rounding variance ratios
* refactor tests
* rename module to pca
* rename datasets to traits
* fix failing tests
* fix caching function
* fix returning x and y coordinates for the scree plot
* expand exception scope
* fix for deprecated numpy.matrix function
* fix for failing tests
* pep8 fixes
* remove comments
* fix merge conflict
* pylint fixes
* rename module name to test_pca
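The z-score and PCA steps listed above could look roughly like the following sketch; the function names, array shapes and returned keys are assumptions for illustration, not the actual gn3.computations.pca API:

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA

    def compute_zscores(trait_data: np.ndarray) -> np.ndarray:
        """Z-score each trait (row) so no trait dominates the PCA by scale."""
        return stats.zscore(trait_data, axis=1)

    def compute_pca(zscores: np.ndarray) -> dict:
        """Run PCA on the z-scored traits and return the pieces needed for
        the scree plot and the factor-loadings table."""
        pca = PCA()
        pca_traits = pca.fit_transform(zscores.T)  # new PCA trait values
        return {
            "pca_traits": pca_traits,
            "loadings": pca.components_,  # eigenvectors, sorted by eigenvalue
            "scree_x": [f"PC{i + 1}" for i in range(pca.n_components_)],
            "scree_y": np.round(pca.explained_variance_ratio_ * 100, 2),
        }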
|
|
Fix some issues caught by tests due to the changes that hand off the partial
correlations computations to an external process.
Fix some issues due to the changes that introduce context managers for
database connections.
Update some tests to take the above two changes into consideration.
|
|
correlations
In the original version of the if statement* I believe it was
interpreted as "if a_val and (b_val is not None)". This caused
values of 0 for a_val (the primary trait's values) to be evaluated as
False.
I changed it to compare both a_val and b_val to None. This seems to have
fixed the issue.
* if (a_val and b_val is not None)
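A minimal, standalone illustration of the truthiness pitfall described above (not the actual gn3 code):

    a_val, b_val = 0, 2.5  # a primary-trait value of 0 is perfectly valid data

    # Buggy form: `a_val and b_val is not None` short-circuits on a_val == 0,
    # so the pair is silently dropped.
    if a_val and b_val is not None:
        print("kept by the buggy check")

    # Fixed form: compare both values to None explicitly.
    if a_val is not None and b_val is not None:
        print("kept by the fixed check")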
|
|
Context managers should be preferred when allocating resources.
* gn3/computations/wgcna.py (stream_cmd_output): Call Popen with context
manager.
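The general pattern (a sketch, not the exact stream_cmd_output code):

    import subprocess

    # Running Popen as a context manager ensures the pipes are closed and the
    # child process is waited on, even if streaming the output raises.
    with subprocess.Popen(["echo", "hello"],
                          stdout=subprocess.PIPE, text=True) as process:
        assert process.stdout is not None
        for line in process.stdout:
            print(line.rstrip())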
|
|
When the encoding is not specified explicitly, the system default encoding is
used. This is not recommended.
* gn3/computations/ctl.py (call_ctl_script),
gn3/computations/gemma.py (generate_pheno_txt_file),
gn3/computations/parsers.py (parse_genofile),
gn3/computations/partial_correlations.py (partial_correlations_fast),
gn3/computations/rqtl.py (process_rqtl_output, process_perm_output),
gn3/computations/wgcna.py (dump_wgcna_data, call_wgcna_script),
gn3/fs_helpers.py (jsonfile_to_dict): Explicitly specify UTF-8 to be the file
encoding.
* tests/unit/computations/test_gemma.py (TestGemma.test_generate_pheno_txt_file),
tests/unit/computations/test_wgcna.py (TestWgcna.test_create_json_file): Test
for call to open with encoding='utf-8' argument.
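The pattern being enforced, and a test for it, look roughly like this (the helper name and path are illustrative, not the real gn3 functions):

    import json
    from unittest import mock

    def jsonfile_to_dict_sketch(json_file: str) -> dict:
        """Open the file with an explicit encoding rather than the system default."""
        with open(json_file, "r", encoding="utf-8") as stream:
            return json.load(stream)

    def test_open_called_with_utf8_encoding():
        with mock.patch("builtins.open",
                        mock.mock_open(read_data='{"a": 1}')) as mocked_open:
            jsonfile_to_dict_sketch("data.json")
            mocked_open.assert_called_once_with("data.json", "r", encoding="utf-8")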
|
|
* Use `with` in place of plain `open`
* Use f-strings in place of `str.format()`
* Remove string interpolation from queries - provide data as query parameters
* Other minor fixes
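The query change is the usual parameterisation pattern; the table and column names below are made up for illustration:

    def fetch_trait_values(cursor, trait_name: str):
        """Fetch values for one trait, passing data as query parameters."""
        # Fragile: f"SELECT Value FROM TraitData WHERE Name = '{trait_name}'"
        # interpolates the value into the SQL string, inviting quoting bugs
        # and SQL injection.
        cursor.execute(
            "SELECT Value FROM TraitData WHERE Name = %s",  # %s: MySQL-driver style
            (trait_name,))
        return cursor.fetchall()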
|
|
Test that the partial correlations endpoint handles a mix of existing and
non-existing control traits gracefully and issues a warning to the user.
Summary of changes:
* gn3/computations/partial_correlations.py: Issue a warning for all
non-existing control traits
* gn3/db/partial_correlations.py: update queries - use `INNER JOIN` for tables
instead of comma-separated list of tables
* tests/integration/conftest.py: Add `db_conn` fixture to provide a database
connection to the tests. This will probably be changed in the future to
connect to a temporary database for tests.
* tests/integration/test_partial_correlations.py: Add test to check for
correct behaviour with a mix of existing and non-existing control traits
|
|
Test that if the endpoint is queried and not a single one of the control
traits exists in the database, then the endpoint will respond with a
404 (not-found) status code.
Summary of changes:
* gn3/computations/partial_correlations.py: Check whether any control trait is
found. If none is found, return "not-found" message.
* gn3/db/partial_correlations.py: Fix bug in Geno query.
* tests/integration/test_partial_correlations.py: Add test for non-existing
control traits. Rename function to make it clearer what it is testing
for. Remove obsoleted comments.
|
|
Test that the partial correlations endpoint responds with an appropriate
"not-found" message and the corresponding 404 status code in the case where a
request is made and the requested primary trait does not exist in the
database.
Summary of the changes in each file:
* gn3/api/correlation.py: generalise the building of the response
* gn3/computations/partial_correlations.py: return with a "not-found" if the
primary trait does not exist in the database
* gn3/db/partial_correlations.py: Fix a number of bugs that led to exceptions
in the case that the primary trait did not exist
* pytest.ini: register a `slow` pytest marker
* tests/integration/test_partial_correlations.py: Add a new test to check for
an appropriate 404 response in case of a primary trait that does not exist
in the database.
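One way to generalise the response building, sketched with made-up status names (not the actual gn3/api/correlation.py code):

    from flask import jsonify

    def build_response(result: dict):
        """Turn a computation result into a (body, HTTP status) pair so every
        outcome ("not-found", errors, success) is handled in one place."""
        status_codes = {"success": 200, "not-found": 404, "error": 500}
        return jsonify(result), status_codes.get(result.get("status"), 200)

    # e.g. build_response({"status": "not-found", "message": "no such trait"})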
|
|
Add property tests using pytest and hypothesis to test that the expected
properties hold for the
`gn3.computations.partial_correlations.dictify_by_samples`
function.
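Such property tests follow the usual hypothesis shape; the function below is a simplified stand-in, since the real dictify_by_samples signature lives in gn3.computations.partial_correlations:

    from hypothesis import given, strategies as st

    def dictify(samples: tuple, values: tuple) -> dict:
        """Stand-in: pair each sample name with its value."""
        return dict(zip(samples, values))

    @given(st.lists(st.text(min_size=1), unique=True, min_size=1).map(tuple))
    def test_every_sample_becomes_a_key(samples):
        values = tuple(range(len(samples)))
        assert set(dictify(samples, values).keys()) == set(samples)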
|
|
Do all the work in a single iteration to avoid unnecessary iterations that
hamper performance.
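The idea in miniature (illustrative only):

    values = [2.0, None, 3.5, 1.0, None]

    # Two passes over the same data:
    total = sum(val for val in values if val is not None)
    count = sum(1 for val in values if val is not None)

    # The same results from a single pass:
    total, count = 0.0, 0
    for val in values:
        if val is not None:
            total += val
            count += 1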
|
|
Web servers are long-running processes, and Python is not very good at
cleaning up after itself, especially in forked processes; this leads to memory
errors in the web server after a while.
This commit removes the use of multiprocessing to avoid such failures.
|
|
This commit refactors the code to make it possible to use multiprocessing to
speed up the computation of the partial correlations.
The major refactor is to move the `__compute_trait_info__` function to the
top level of the module, and provide all the other necessary context to it via
the new arguments.
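Moving the worker to module level is what allows multiprocessing to pickle it; the shape of the refactor is roughly as below, with an illustrative body and context (not the real __compute_trait_info__):

    from functools import partial
    from multiprocessing import Pool

    def compute_trait_info(context: dict, trait_name: str) -> dict:
        """Module-level worker: only top-level functions can be pickled and
        sent to worker processes. The body here is a placeholder."""
        return {"trait": trait_name, "samples": len(context["samples"])}

    def compute_all(traits, context):
        # Bind the shared context once, then map over the traits in parallel.
        worker = partial(compute_trait_info, context)
        with Pool() as pool:
            return pool.map(worker, traits)

    if __name__ == "__main__":
        print(compute_all(["T1", "T2"], {"samples": ["s1", "s2", "s3"]}))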
|
|
In Python 3, when slicing,
seq[:min(some_val, len(seq))] == seq[:some_val]
because Python 3 just returns a copy of the entire sequence if `some_val`
happens to be greater than the length of the sequence.
This commit removes the unnecessary call to `min()`.
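For instance:

    seq = [1, 2, 3]
    # Slicing past the end simply copies the whole sequence, so the min()
    # guard adds nothing.
    assert seq[:10] == [1, 2, 3]
    assert seq[:min(10, len(seq))] == seq[:10]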
|
|
The function is a generator function, since it uses a `yield` statement, and
thus returns a generator object that contains a tuple object. This fixes
that. We also remove a duplicate import.
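The gotcha being fixed: any function containing `yield` returns a generator object, even where a plain tuple was intended. For example:

    def pair_with_yield(x, y):
        yield (x, y)  # the `yield` makes this a generator function

    def pair_with_return(x, y):
        return (x, y)

    print(pair_with_yield(1, 2))        # a generator object, not a tuple
    print(next(pair_with_yield(1, 2)))  # (1, 2) -- the tuple is inside it
    print(pair_with_return(1, 2))       # (1, 2)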
|
|
* Use the correct case for the keys in order to retrieve the correct values.
|
|
* The string had the f-string syntax to format the values to be inserted into
the string, but was missing the 'f' before the opening quotes to signify to
Python that this was an f-string. This commit fixes that.
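That is, the classic:

    name = "BXD"
    print("Group: {name}")   # missing 'f': prints the literal text "Group: {name}"
    print(f"Group: {name}")  # f-string: prints "Group: BXD"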
|
|
* Remove all key-value pairs whose value is None.
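In other words, something equivalent to:

    data = {"trait": "T1", "symbol": None, "mean": 7.3, "description": None}
    cleaned = {key: val for key, val in data.items() if val is not None}
    # {'trait': 'T1', 'mean': 7.3}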
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Replace unoptimised function with one optimised to give better performance.
The optimisation done here is to fetch multiple items/traits from the
database per query, rather than the original form, which fetched a single
item/trait from the database per query.
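The batching amounts to replacing a query-per-trait loop with a single IN (...) query, along these lines (table and column names are placeholders, not the real schema):

    def fetch_traits_batched(cursor, trait_names):
        """Fetch many traits in one round-trip instead of one query each."""
        placeholders = ", ".join(["%s"] * len(trait_names))
        cursor.execute(
            f"SELECT Name, Value FROM TraitData WHERE Name IN ({placeholders})",
            tuple(trait_names))
        return cursor.fetchall()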
|
|
Issue: https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
Comment:
https://github.com/genenetwork/genenetwork3/pull/67#issuecomment-1000828159
* Convert NaN values to None to avoid possible bugs with the string replace
method used before.
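For example:

    import math

    def nan_to_none(value):
        """Return None for NaN so the value serialises as null downstream."""
        return None if isinstance(value, float) and math.isnan(value) else value

    assert nan_to_none(float("nan")) is None
    assert nan_to_none(0.73) == 0.73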
|
|
Issue:
* Function
`gn3.computations.partial_correlations_optimised.partial_correlations_entry`
is a copy of the
`gn3.computations.partial_correlation.partial_correlations_entry`
function that is optimised for better performance.
The optimised function is intended to replace the unoptimised one, but it is
included in this commit for comparison purposes, and to maintain some
historical context for doing it this way.
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* In an attempt to optimise the performance of the partial correlations
feature, this commit reworks some database access functions to fetch
multiple items from the database, per query, unlike their original forms
which would fetch a single item per query.
This reduces the number of queries to the database and should, hopefully,
improve the responsiveness of the partial correlations feature.
|
|
Add type hint for the normalize_values function
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Update the sorting algorithm for literature and tissue correlations so that
it sorts the results by the correlation value first and then by the p-value.
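That is a composite sort key; the direction and the use of the absolute correlation below are assumptions, shown only to illustrate the shape:

    # (trait, correlation coefficient, p-value) -- illustrative data
    results = [("T1", 0.91, 0.03), ("T2", 0.91, 0.001), ("T3", -0.95, 0.0004)]

    # Strongest correlations first; ties broken by the smaller p-value.
    ordered = sorted(results, key=lambda row: (-abs(row[1]), row[2]))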
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Return the correlation method used
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* There is a lot of data that is not necessary in the final result. This
commit removes that data, retaining only data relevant for the display.
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* The dataset type is relevant for the display of the data; therefore, this
commit presents the dataset type as part of the results.
|
|
Issue: https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi