Age | Commit message | Author |
|
There was a subtle bug where "csvdiff" generated an error about "different
column headings", caused by something akin to diffing "a, b \n, ..."
against "a, b\n, ...".
* gn3/csvcmp.py (csv_diff): Clean csv texts before any diffing.
* tests/unit/test_csvcmp.py (test_csv_diff_same_columns): Modify test case to
capture aforementioned bug.
|
|
* gn3/csvcmp.py (clean_csv_text): New function.
* tests/unit/test_csvcmp.py: Import "clean_csv_text".
(test_clean_csv_text): Test case for the above.
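A minimal sketch of what such a cleaning step can look like (the exact
normalisation done in gn3/csvcmp.py may differ); the idea is that stray
whitespace around delimiters and at line ends should not register as a
difference between two csv texts:

    import re

    def clean_csv_text(csv_text: str) -> str:
        """Normalise a csv text so that cosmetic whitespace does not
        show up as a difference when two texts are diffed."""
        # Collapse spaces around the comma delimiter, then drop
        # trailing whitespace before every line break.
        cleaned = re.sub(r"[ \t]*,[ \t]*", ",", csv_text)
        cleaned = re.sub(r"[ \t]+(\r?\n)", r"\1", cleaned)
        return cleaned.strip()

    assert clean_csv_text("a, b \n1, 2") == clean_csv_text("a, b\n1, 2")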
|
|
* compute zscore function
* test case for computing zscore
* function to compute pca (see the sketch after this list)
* generate scree plot data
* generate new pca trait data from zscores and eigen_vec
* remove redundant functions
* generate factor loading table data
* generate pca temp dataset dict
* variable naming and error fixes
* unit test for processing factor loadings
* minor fixes for generating temp pca dataset
* pass datetime as argument to generate_pca temp dataset function
* add unittest for caching pca datasets
* cache temp datasets
* ignore missing imports for sklearn
* mypy fixes
* pylint fixes
* refactor tests for pca
* remove unused imports
* fix for generating pca traits vals
* mypy and code refactoring
* pep8 formatting and add docstrings
* remove comments /pep8 formatting
* sort eigen vectors based on eigen values
* minor fix for zscores
* fix for rounding variance ratios
* refactor tests
* rename module to pca
* rename datasets to traits
* fix failing tests
* fix caching function
* fix returning x and y coordinates for scree plot
* expand exception scope
* fix for deprecated numpy.matrix function
* fix for failing tests
* pep8 fixes
* remove comments
* fix merge conflict
* pylint fixes
* rename module name to test_pca
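As a rough sketch of the core steps listed above (z-score the trait values,
run PCA, and expose the explained-variance ratios as scree-plot coordinates),
assuming scipy and sklearn as hinted at by the "ignore missing imports for
sklearn" entry; the function names and return shapes here are illustrative
only, not the actual gn3 code:

    import numpy as np
    from scipy import stats
    from sklearn.decomposition import PCA

    def compute_zscores(trait_values: np.ndarray) -> np.ndarray:
        """Z-score each trait (row-wise) so traits are comparable."""
        return stats.zscore(trait_values, axis=1)

    def compute_pca(zscores: np.ndarray):
        """Run PCA on the z-scored trait matrix (samples as rows)."""
        pca = PCA()
        components = pca.fit_transform(zscores.T)  # samples x components
        return pca, components

    def scree_plot_data(pca: PCA) -> dict:
        """x/y coordinates for a scree plot: component index against
        percentage of variance explained."""
        ratios = np.round(pca.explained_variance_ratio_ * 100, 2)
        return {"x_coord": list(range(1, len(ratios) + 1)),
                "y_coord": ratios.tolist()}

    # toy example: 3 traits measured on 5 samples
    traits = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
                       [2.0, 1.0, 4.0, 3.0, 5.0],
                       [5.0, 4.0, 3.0, 2.0, 1.0]])
    pca, pcs = compute_pca(compute_zscores(traits))
    print(scree_plot_data(pca))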
|
|
|
|
* gn3/csvcmp.py (extract_invalid_csv_headers): New function.
* tests/unit/test_csvcmp.py: Import "extract_invalid_csv_headers".
(test_extract_invalid_csv_headers_with_some_wrong_headers): Test case for the
above.
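A rough sketch of how such a check might work, assuming the allowed headers
come from get_allowable_sampledata_headers (see the commit further down); the
signature and example header names are illustrative:

    from typing import List

    def extract_invalid_csv_headers(allowed_headers: List[str],
                                    csv_text: str) -> List[str]:
        """Return the headers in the csv text that are not allowed."""
        headers = [header.strip() for header in
                   csv_text.splitlines()[0].split(",")]
        return [header for header in headers
                if header not in allowed_headers]

    print(extract_invalid_csv_headers(
        ["Strain Name", "Value", "SE"],
        "Strain Name,Value,SE,Sex\nBXD1,18,x,F"))  # ['Sex']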
|
|
* gn3/csvcmp.py: Import "Any" and "List".
(get_allowable_sampledata_headers): New function.
* tests/unit/test_csvcmp.py: Import "get_allowable_sampledata_headers".
(test_get_allowable_csv_headers): Test case for the above.
|
|
|
|
* gn3/db/sample_data.py (__extract_actions): During updates, make sure that
the strain name is part of the returned string when extracting "actions".
* tests/unit/db/test_sample_data.py: Add test cases for the above.
|
|
|
|
* gn3/db/sample_data.py (get_sample_data_ids): Re-use "delete_sample_data" and
"insert_sample_data" when updating data; and also add logic for updating
modified data.
* tests/unit/db/test_sample_data.py: Add tests for the above.
|
|
* gn3/db/sample_data.py (__extract_actions): An update on a vector of data can
contain inserts, deletes and updates. This function extracts those actions
during an update.
* tests/unit/db/test_sample_data.py (test_extract_actions): Add test-case for
the above.
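As a loose illustration of the idea (not the actual gn3 implementation), an
update to a row can be split into deletions, insertions and in-place
modifications by comparing the original and the updated row field by field;
the "x" empty-cell marker below is an assumption carried over from fill_csv:

    from collections import namedtuple

    Action = namedtuple("Action", ["action", "header", "value"])

    def extract_actions(header: str, original: str, updated: str) -> list:
        """Split a row update into delete/insert/modify actions.
        'x' marks an empty cell, as in the padded sample-data csv."""
        actions = []
        for column, old, new in zip(header.split(","),
                                    original.split(","),
                                    updated.split(",")):
            if old == new:
                continue
            if new == "x":               # value removed
                actions.append(Action("delete", column, old))
            elif old == "x":             # value added
                actions.append(Action("insert", column, new))
            else:                        # value changed in place
                actions.append(Action("modify", column, new))
        return actions

    print(extract_actions("Strain Name,Value,SE",
                          "BXD1,18,x", "BXD1,19,0.7"))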
|
|
* tests/unit/db/test_sample_data.py (test_insert_sample_data): Test inserting
data.
(test_delete_sample_data): Test deleting data.
|
|
* gn3/csvcmp.py (fill_csv): Update this function to allow empty lists to be
filled with the default value(set in the args).
* tests/unit/test_csvcmp.py (test_fill_csv): Update test to capture above.
|
|
Most of these functions were moved to their own module.
|
|
* gn3/csvcmp.py (extract_strain_name): New function.
* gn3/db/sample_data.py (delete_sample_data): Use the aforementioned function.
(insert_sample_data): Ditto.
* tests/unit/test_csvcmp.py: Test cases for the above.
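A small sketch of the kind of helper this refers to; the "Strain Name" column
label and the signature are assumptions for the example:

    def extract_strain_name(csv_header: str, data: str) -> str:
        """Pull the strain-name field out of a csv data row, using the
        position of 'Strain Name' in the header."""
        columns = [column.strip() for column in csv_header.split(",")]
        values = [value.strip() for value in data.split(",")]
        return values[columns.index("Strain Name")]

    print(extract_strain_name("Strain Name,Value,SE", "BXD1,18,x"))  # BXD1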
|
|
* gn3/csvcmp.py (csv_diff): If the diff is empty, don't add an extra "Column"
key to the dictionary.
* tests/unit/test_csvcmp.py (test_csv_diff_only_column_change): Add a
test case for the above.
|
|
* tests/unit/test_csvcmp.py (test_csv_diff): Delete it.
(test_csv_diff_same_columns): Test csv_diff against csv texts with the same
columns.
(test_csv_diff_different_columns): Test csv_diff against csv texts with
differing columns.
|
|
* gn3/csvcmp.py (fill_csv): Given a csv text with uneven or incomplete fields
whose length is less than the width, fill them with a value that defaults to
"x".
* tests/unit/test_csvcmp.py (test_fill_csv): Test cases for the above.
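A minimal sketch of the behaviour described (including the later update that
allows empty rows to be padded as well); the argument names are illustrative:

    def fill_csv(csv_text: str, width: int, value: str = "x") -> str:
        """Pad every csv row (including empty ones) out to `width`
        fields, filling the missing fields with `value`."""
        filled_rows = []
        for row in csv_text.strip().split("\n"):
            fields = [field.strip() for field in row.split(",")
                      if field.strip()]
            fields += [value] * (width - len(fields))
            filled_rows.append(",".join(fields))
        return "\n".join(filled_rows)

    print(fill_csv("a,b,c\n1,2\n\n3", width=3))
    # a,b,c
    # 1,2,x
    # x,x,x
    # 3,x,x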
|
|
gn3/csvcmp.py: New file
(create_dirs_if_not_exists): From a list of dirs, create them if they don't
exist.
(remove_insignificant_edits): Given a dict with a "Modification" key, remove
edits with "delta < ε".
(csv_diff): Generate a csv_diff using the "csvdiff" tool packaged in guix.
tests/unit/test_csvcmp.py: Add some tests for "gn3/csvcmp.py"
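As a rough sketch of the remove_insignificant_edits idea, assuming the diff
dict carries modified rows under a "Modifications" key with
"Original"/"Current" pairs (the exact structure produced by the csvdiff tool
may differ):

    def remove_insignificant_edits(diff_data: dict,
                                   epsilon: float = 0.001) -> dict:
        """Drop modifications whose numeric change is smaller than epsilon."""
        significant = []
        for edit in diff_data.get("Modifications", []):
            keep = False
            for old, new in zip(edit["Original"].split(","),
                                edit["Current"].split(",")):
                try:
                    if abs(float(new) - float(old)) > epsilon:
                        keep = True
                except ValueError:     # non-numeric fields always count
                    if old != new:
                        keep = True
            if keep:
                significant.append(edit)
        diff_data["Modifications"] = significant
        return diff_data

    print(remove_insignificant_edits(
        {"Modifications": [
            {"Original": "BXD1,18.0003", "Current": "BXD1,18.0001"},
            {"Original": "BXD2,18", "Current": "BXD2,21"}]}))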
|
|
|
* Fix some issues caught by tests, due to the changes introducing the hand-off
of the partial correlations computations to an external process.
* Fix some issues due to the changes that introduce context managers for
database connections.
* Update some tests to take the above two changes into consideration.
|
|
When the encoding is not specified explicitly, the system default encoding is
used. This is not recommended.
* gn3/computations/ctl.py (call_ctl_script),
gn3/computations/gemma.py (generate_pheno_txt_file),
gn3/computations/parsers.py (parse_genofile),
gn3/computations/partial_correlations.py (partial_correlations_fast),
gn3/computations/rqtl.py (process_rqtl_output, process_perm_output),
gn3/computations/wgcna.py (dump_wgcna_data, call_wgcna_script),
gn3/fs_helpers.py (jsonfile_to_dict): Explicitly specify UTF-8 to be the file
encoding.
* tests/unit/computations/test_gemma.py (TestGemma.test_generate_pheno_txt_file),
tests/unit/computations/test_wgcna.py (TestWgcna.test_create_json_file): Test
for call to open with encoding='utf-8' argument.
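In general terms the change amounts to the pattern below; the test side can
verify the encoding keyword by mocking the built-in open (a sketch, not the
exact gn3 test code):

    import json
    from unittest import mock

    def jsonfile_to_dict(json_file: str) -> dict:
        """Read a JSON file with an explicit encoding instead of
        relying on the platform default."""
        with open(json_file, encoding="utf-8") as _file:
            return json.load(_file)

    # Test-side check that open() was called with encoding="utf-8".
    with mock.patch("builtins.open",
                    mock.mock_open(read_data='{"a": 1}')) as mocked_open:
        assert jsonfile_to_dict("metadata.json") == {"a": 1}
        mocked_open.assert_called_with("metadata.json", encoding="utf-8")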
|
|
|
|
* Use `with` in place of plain `open`
* Use f-strings in place of `str.format()`
* Remove string interpolation from queries - provide data as query parameters
* other minor fixes
|
|
* tests/unit/db/test_genotypes2.py: New file
|
|
Related to commit 75dcfe295af57b16428c586cc11dbaa827a5feba
This commit removes the related test that was checking for the wrong thing.
|
|
Use pytest's `mark` feature to explicitly categorise the tests and run them
per category
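A minimal illustration of that approach; the marker names are assumptions,
since the log does not show them:

    import pytest

    @pytest.mark.unit_test
    def test_fill_csv_pads_short_rows():
        assert True  # placeholder body

    # Markers are declared in pytest.ini (or setup.cfg) so that
    # `pytest -m unit_test` runs only this category:
    #
    #   [pytest]
    #   markers =
    #       unit_test: fast, isolated tests
    #       integration_test: tests that need a database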
|
|
Add property tests using pytest and hypothesis to test that the expected
properties hold for the
`gn3.computations.partial_correlations.dictify_by_samples`
function.
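A sketch of what such a property test can look like; the stand-in function and
the property asserted here are illustrative, not the actual gn3 test:

    from hypothesis import given, strategies as st

    def dictify_by_samples(samples, values):
        """Toy stand-in for the real function: pair each sample name
        with its value in a dict."""
        return {sample: value for sample, value in zip(samples, values)}

    @given(st.lists(st.text(min_size=1), unique=True, min_size=1),
           st.lists(st.floats(allow_nan=False)))
    def test_every_sample_appears_at_most_once(samples, values):
        result = dictify_by_samples(samples, values)
        # Property: no sample is duplicated and every key comes from
        # the input samples.
        assert len(result) <= len(samples)
        assert set(result).issubset(set(samples))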
|
|
|
|
Issue: https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
|
|
* Fix linting errors like:
- Unused variables
- Undeclared variable errors (mostly caused by typos and wrong names)
- Missing documentation strings for functions
etc.
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Add some tests for the query builders to ensure that the queries are built
up correctly.
|
|
Notes:
https://github.com/genenetwork/genenetwork3/pull/56#issuecomment-973798918
* As mentioned in the notes, rather than rounding to an arbitrary number of
decimal places, it is a much better practice to use approximate comparisons
of floats for the tests.
|
|
Notes:
https://github.com/genenetwork/genenetwork3/pull/56#issuecomment-973798918
* From the notes above, assert_allclose is a better function for figuring
out what failed, unlike allclose, which simply returns a True/False value.
This commit restores the use of the assert_allclose function, and disables
the linter error raised because there is no use of the `self` keyword.
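The practical difference is illustrated below: allclose only yields a boolean,
while assert_allclose raises an AssertionError that reports which elements
differ and by how much:

    import numpy as np
    from numpy.testing import assert_allclose

    computed = np.array([0.9200, 0.4999])
    expected = np.array([0.9201, 0.5000])

    # Boolean only -- a failing assert tells you nothing about *where*:
    assert np.allclose(computed, expected, atol=1e-3)

    # Raises an AssertionError with a mismatch report when it fails:
    assert_allclose(computed, expected, atol=1e-3)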
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Fix some obvious linting errors and remove obsolete code
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* Replace the code that was in the process of being migrated from R in
GeneNetwork1 with calls to pingouin functions that achieve the same thing.
Since the functions in this case are computing correlations and partial
correlations, rather than having home-rolled functions to do that, this
commit makes use of the tried and tested pingouin functions.
This avoids complicating our code with edge-case checks, and leverages the
performance optimisations done in pingouin.
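In rough terms the replacement looks like the call below (the column names are
made up for the example); pingouin.partial_corr returns a dataframe whose "r"
column holds the partial correlation coefficient:

    import pandas as pd
    import pingouin as pg

    data = pd.DataFrame({
        "primary":   [12.1, 14.3, 15.0, 13.7, 16.2, 15.5],
        "target":    [11.8, 13.9, 15.2, 13.1, 16.8, 15.0],
        "covariate": [1.0,  1.2,  1.1,  0.9,  1.4,  1.3],
    })

    # Partial correlation of primary vs target, controlling for covariate.
    result = pg.partial_corr(data=data, x="primary", y="target",
                             covar="covariate")
    print(result["r"].iloc[0])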
|
|
* gn3/computations/partial_correlations.py: Remove rounding. Fix computation
of the remaining covariates.
* tests/unit/computations/partial_correlations_test_data/pcor_rec_blackbox_test.txt:
Reduce the number of covariates to between one (1) and three (3).
* tests/unit/computations/test_partial_correlations.py: Fix some minor bugs.
It turns out that the computational complexity increases exponentially with
the number of covariates. Therefore, to get a somewhat sensible test time
while retaining a large-ish number of tests, this commit reduces the number
of covariates to between 1 and 3.
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* When the z value is a Sequence of sequences of values, each of the internal
sequences should form a column of its own, and not a row, as it was
originally set up to do.
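Concretely, for a z given as a sequence of sequences, each inner sequence
becomes one column of the covariate matrix rather than one row; with numpy
that amounts to a transpose:

    import numpy as np

    z = [[1.0, 2.0, 3.0],   # first covariate, one value per sample
         [4.0, 5.0, 6.0]]   # second covariate

    covariates = np.array(z).T   # shape (samples, covariates)
    print(covariates.shape)      # (3, 2)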
|
|
partial-correlations
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/partial-correlations.gmi
* To improve the usefulness of already existing code, provide the parser
function as an argument to the `parse_input_line` function.
This was found to be useful when writing code to compare the `pcor.test`
function in GN1 and the `pingouin.partial_corr` function.
The format of the data generated when getting results for the `pcor.test`
function shared a lot with that of the `pcor.rec` function, but it was
different in a few places, and the differences were non-trivial, needing
different parsing processes.
In such a case, it was found necessary to just pass in the function to do
the actual parsing, rather than create code with the same form as the
existing one, save for the function being called.
|
|
correlations2.compute_correlation computes the Pearson correlation
coefficient. Outsource this computation to scipy.stats.pearsonr. When the
inputs are constant, the Pearson correlation coefficient does not exist and is
represented by NaN. Update the tests to reflect this.
* gn3/computations/correlations2.py: Remove import of sqrt from math.
(compute_correlation): Reimplement using scipy.stats.pearsonr.
* tests/unit/computations/test_correlation.py: Import math.
(TestCorrelation.test_compute_correlation): When inputs are constant, set
expected correlation coefficient to NaN.
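A much-simplified sketch of the idea (the real function also handles the
surrounding data handling):

    import math
    from scipy import stats

    def compute_correlation(x, y):
        """Pearson correlation coefficient of two equal-length vectors."""
        corr, _p_value = stats.pearsonr(x, y)
        return corr

    print(compute_correlation([1, 2, 3, 4], [1, 2, 3, 5]))       # close to 1
    # A constant input has no defined correlation: scipy warns and
    # returns nan instead of raising.
    print(math.isnan(compute_correlation([1, 1, 1, 1], [1, 2, 3, 4])))  # True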
|
|
Floating point numbers should only be compared approximately. Different
implementations of functions might produce slightly different results.
* tests/unit/computations/test_correlation.py: Import assert_almost_equal from
numpy.testing.
(TestCorrelation.test_compute_correlation): Compare floats using
assert_almost_equal instead of assertEqual.
* tests/unit/test_heatmaps.py: Import assert_allclose from numpy.testing.
(TestHeatmap.test_cluster_traits): Use assert_allclose instead of assertEqual.
|
|
|