* Add missing function and module docstrings
* Remove unused imports
* Fix import order
* Rework some code sections to fix issues
* Disable some pylint errors
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* Update the check to require at least two traits before trying to generate
the heatmap.
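As a minimal illustration of such a guard (the function name here is hypothetical, not gn3's actual code):

    def enough_traits_for_heatmap(traits_names):
        """A clustered heatmap needs at least two traits to compare."""
        return len(traits_names) >= 2

    assert enough_traits_for_heatmap(["trait-a", "trait-b"])
    assert not enough_traits_for_heatmap(["trait-a"])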
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* gn3/api/heatmaps.py: Serialize the figure to JSON
* gn3/heatmaps.py: Return the figure object
Serialize the Plotly figure into JSON and return that, so that the client can
use it to display the image.
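A rough sketch of this serialization step (illustrative, not the exact gn3 code): a Plotly figure can be dumped to JSON with Plotly's JSON encoder, and the client re-renders it with Plotly.js.

    import json

    import plotly.graph_objects as go
    from plotly.utils import PlotlyJSONEncoder

    def figure_to_json(figure):
        """Serialize a Plotly figure into a JSON string for the client."""
        return json.dumps(figure, cls=PlotlyJSONEncoder)

    fig = go.Figure(data=go.Heatmap(z=[[1, 2], [3, 4]]))
    print(figure_to_json(fig))  # the client feeds this JSON to Plotly.js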
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* gn3/api/heatmaps.py: Fix bugs in data parsing
* gn3/app.py: enable CORS
* gn3/settings.py: add flask-cors configurations
* guix.scm: Add flask-cors dependency
For easier testing of the heatmap generation feature, this commit activates
cross-origin resource sharing (CORS) for all "localhost" origins.
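A minimal sketch of what such a flask-cors setup could look like (an assumption for illustration, not gn3's exact settings):

    from flask import Flask
    from flask_cors import CORS

    app = Flask(__name__)

    # flask-cors accepts regex origins; this one matches
    # http(s)://localhost on any port.
    CORS(app, origins=[r"^https?://localhost(:\d+)?$"])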
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* gn3/api/heatmaps.py: Parse incoming data to build up correct trait names and
respond with only the computed heatmap data.
* gn3/heatmaps.py: Return only the computed data for heatmaps and clustering.
Since GN3 is supposed to handle only the data and db access, this commit
ensures that GN3 responds to the client with just the computed heatmap data
and does not try to generate the heatmaps themselves.
Generating the heatmaps is delegated to the UI clients, such as
GeneNetwork2.
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* To help demonstrate that the code produces the expected output, we return,
for now, the path to the generated HTML file that displays the interactive
heatmap.
At this point, this is mostly useful in the development environment. Moving
forward, we might have to stream the raw HTML instead, or, if we can get the
Kaleido library packaged for GNU Guix, stream the images' binary data.
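A sketch of the interim approach (file path and names are illustrative): write the interactive figure to a self-contained HTML file and return its path.

    import plotly.graph_objects as go

    def save_heatmap_html(figure, output_path):
        """Write the interactive figure to a self-contained HTML file."""
        figure.write_html(output_path)
        return output_path

    fig = go.Figure(data=go.Heatmap(z=[[1, 2], [3, 4]]))
    print(save_heatmap_html(fig, "/tmp/clustered_heatmap.html"))
    # With Kaleido packaged, raw image bytes could be streamed instead:
    #     png_bytes = fig.to_image(format="png")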
|
|
Issue:
https://github.com/genenetwork/gn-gemtext-threads/blob/main/topics/gn1-migration-to-gn2/clustering.gmi
* Integrate the heatmap generation code into the /api/heatmaps/clustered
endpoint.
The endpoint should take a JSON query of the form:
{"traits_names": [ ... ]}
where the "traits_names" value is a list of the full names of traits.
A sample query to the endpoint could be something like the following:
curl -i -X POST "http://localhost:8080/api/heatmaps/clustered" \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-d '{
"traits_names": [
"UCLA_BXDBXH_CARTILAGE_V2::ILM103710672",
"UCLA_BXDBXH_CARTILAGE_V2::ILM2260338",
"UCLA_BXDBXH_CARTILAGE_V2::ILM3140576",
"UCLA_BXDBXH_CARTILAGE_V2::ILM5670577",
"UCLA_BXDBXH_CARTILAGE_V2::ILM2070121",
"UCLA_BXDBXH_CARTILAGE_V2::ILM103990541",
"UCLA_BXDBXH_CARTILAGE_V2::ILM1190722",
"UCLA_BXDBXH_CARTILAGE_V2::ILM6590722",
"UCLA_BXDBXH_CARTILAGE_V2::ILM4200064",
"UCLA_BXDBXH_CARTILAGE_V2::ILM3140463"
]
}'
The endpoint should then respond with a JSON response containing the raw
binary string for the PNG format, and possibly another for the SVG format.
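For context, a rough sketch of what the receiving endpoint might look like; compute_heatmap below is a hypothetical stand-in for the actual heatmap/clustering code, not gn3's implementation.

    from flask import Blueprint, jsonify, request

    heatmaps = Blueprint("heatmaps", __name__)

    def compute_heatmap(traits_names):
        """Hypothetical stand-in for the real heatmap computation."""
        return {"traits": traits_names}

    @heatmaps.route("/clustered", methods=["POST"])
    def clustered_heatmaps():
        traits_names = request.get_json().get("traits_names", [])
        if len(traits_names) < 2:
            return jsonify(
                {"message": "Provide at least two trait names."}), 400
        return jsonify(compute_heatmap(traits_names))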
|
|
* Fix linting errors, without changing the behaviour of the code.
|
|
* use normal function for correlation + rename functions
* update test for sample correlation
* use normal function for tissue correlation + rename functions
|
|
generate_rqtl_cmd, and also made the code check whether the output file already exists (so caching works)
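An illustration of the caching idea (a sketch under assumptions; the helper name is made up): only re-run the command when its output file is missing.

    import os
    import subprocess

    def run_if_output_missing(cmd, output_file):
        """Skip re-running the command when its output file already exists."""
        if not os.path.isfile(output_file):
            subprocess.run(cmd, shell=True, check=True)
        return output_file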
|
|
instead of just the output filename
|
|
they don't have corresponding values
|
|
generate_rqtl_cmd, which returns the actual command and output path
|
|
* gn3/api/general.py (run_r_qtl): New function.
* gn3/settings.py: New variable.
|
|
Generally avoid naming things with a "utils" prefix/suffix, since it
encourages contributors to dump any new functions there; over time, as the
code grows, things get messy.
|
|
- add new api for gn2-gn3 sample r integration
- delete map for sample list to values
- add db util file
- add python mysqlclient dependency
- add db for fetching lit correlation results
- add unittests for db utils
- add tests for db_utils
- modify api for fetching lit correlation results
- refactor Mock Database Connector and unittests
- add sql url parser (see the sketch after this list)
- add SQL URI env variable
- refactor code for db utils
- modify return data for lit correlation
- refactor tissue correlation endpoint
- replace db_instance with conn
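A hedged sketch of reading and splitting the SQL URI mentioned in the list above; the SQL_URI variable name comes from the list, the rest is illustrative.

    import os
    from urllib.parse import urlparse

    def parse_db_url(sql_uri=None):
        """Split an SQL URI such as mysql://user:pass@host/db into its parts."""
        parsed = urlparse(sql_uri or os.environ.get("SQL_URI", ""))
        return (parsed.hostname, parsed.username,
                parsed.password, parsed.path.strip("/"))

    print(parse_db_url("mysql://gn3:secret@localhost/db_webqtl"))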
|
|
modify unittest and integration tests for datasets
|
|
* delete unwanted correlation stuff
* Refactor/clean up correlations (#4)
* initial commit for Refactor/clean-up-correlation
* add python scipy dependency
* initial commit for sample correlation
* initial commit for sample correlation endpoint
* initial commit for integration and unittest
* initial commit for registering correlation blueprint
* add and modify unittest and integration tests for correlation
* Add compute_all_sample_corr method for correlation (see the sketch after this list)
* add scipy to requirement txt file
* add tissue correlation for trait list
* add unittest for tissue correlation
* add lit correlation for trait list
* add unittests for lit correlation for trait list
* modify lit correlation for trait list
* add unittests for lit correlation for trait list
* add correlation method in dynamic url
* add file format for expected structure input while doing sample correlation
* modify input data structure -> add trait id
* update tests for sample r correlation
* add compute all lit correlation method
* add endpoint for computing lit_corr
* add unit and integration tests for computing lit corr
* add /api/correlation/tissue_corr/{corr_method} endpoint for tissue correlation
* add unittest and integration tests for tissue correlation
Co-authored-by: BonfaceKilz <bonfacemunyoki@gmail.com>
* update guix scm file
* fix pylint error for correlations api
Co-authored-by: BonfaceKilz <bonfacemunyoki@gmail.com>
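The sketch referenced in the list above: a minimal example (names illustrative, not gn3's actual function) of the kind of per-sample correlation these changes add, using the scipy dependency.

    from scipy import stats

    def compute_sample_corr(this_vals, target_vals, corr_method="pearson"):
        """Return (correlation coefficient, p-value) for two aligned value lists."""
        corr_fn = stats.spearmanr if corr_method == "spearman" else stats.pearsonr
        corr_coefficient, p_value = corr_fn(this_vals, target_vals)
        return corr_coefficient, p_value

    print(compute_sample_corr([1.2, 3.4, 5.6, 7.8], [2.1, 4.3, 6.5, 8.7]))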
|
|
* add file for correlation api
* register initial correlation api
* add correlation package
* add function for getting page data
* delete loading page api
* modify code for correlation
* add tests folder for correlations
* fix error in correlation api
* add tests for correlation
* add tests for correlation loading data
* add module for correlation computations
* modify api to return json when computing correlation
* add tests for computing correlation
* modify code for loading correlation data
* modify tests for correlation computation
* test loading correlation data using api endpoint
* add tests for asserting error in creating Correlation object
* add do correlation method
* add dummy tests for do_correlation method
* delete unused modules
* add tests for creating trait and dataset
* add integration test for correlation api
* add tests for correlation api
* edit do_correlation method
* modify integration tests for correlation api
* modify tests for show_corr_results
* add create dataset function
* pep8 formatting and fix return value for api
* add more test data for doing correlation
* modify tests for correlation
* pep8 formatting
* add getting formatted corr type method
* import json library
add process samples method for correlation
* fix issue with sample_vals key_error
* create utility module for correlation
* refactor endpoint for /corr_compute
* add test and mocks for compute_correlation function
* add compute correlation function and pep8 formatting
* move get genofile samplelist to utility module
* refactor code for CorrelationResults object
* pep8 formatting for module
* remove CorrelationResults from Api
* add base package
initialize data_set module with create_dataset, redis and Dataset_Getter
* set dataset_structure if redis is empty
* add callable for DatasetType
* add set_dataset_key method If name is not in the object's dataset dictionary
* add Dataset object and MrnaAssayDataSet
* add db_tools
* add mysql client
* add DatasetGroup object
* add species module
* get mapping method
* import helper functions and new dataset
* add connection to db before request
* add helper functions
* add logger module
* add get_group_samplelists module
* add logger for debug
* add code for adding sample_data
* pep8 formatting
* Add chunks module
* add correlation helper module
* add get_sample_r_and_p_values method
add get_header_fields function
* add generate corr json method
* add function to retrieve_trait_info
* remove comments and clean up code in show_corr_results
* remove comments and clean up code for data_set module
* pep8 formatting for helper_functions module
* pep8 formatting for trait module
* add module for species
* add Temp Dataset Object
* add Phenotype Dataset
* add Genotype Dataset
* add retrieve sample_sample_data method
* add webqtlUtil module
* add do lit correlation for all traits
* add webqtlCaseData:Settings not ported
* return the_trait for create trait method
* add correlation_test json data
* add tests for show corr results
* add dictfier package
* add tests for show_corr_results
* add assertion for trait_id
* refactor code for show_corr_results
* add test file for compute_corr integration tests
* add scipy dependency
* refactor show_corr_results object
add do lit correlation for trait_list
* add hmac module
* add bunch module:Dictionary using object notation
* add correlation functions
* add rpy2 dependency
* add hmac module
* add MrnaAssayTissueData object and get_symbol_values_pairs function
* add config module
* add get json_results method
* pep8 formatting remove comments
* add config file
* add db package
* refactor correlation computation module
* add do tissue correlation for trait list
* add do lit correlation for all traits
* add do tissue correlation for all traits
* add do_bicor for bicor method
* raise error when initial start vars is None
* add support for both form and json data for correlation input
* remove print statement and pep8 formatting
* add default settings file
* add tools module for locate_ignore_error
* refactor code remove comments for trait module
* Add new test data for computing correlation
* pep8 formatting and use pickle
* refactor function for filtering form/json data
* remove unused imports
* remove mock functions in correlation_utility module
* refactor tests for compute correlation and pep8 formatting
* add tests for show_correlation results
* modify tests for show_corr_results
* add json files for tests
* pep8 formatting for show_corr_results
* Todo:Lint base files
* pylint for integration tests
* add test module for test_corr_helpers
* Add test chunk module
* lint utility package
* refactoring and pep8 formatting
* implement simple metric for correlation
* add hmac utility file
* add correlation prefix
* fix merge conflict
* minor fixes for endpoints
* import:python-scipy,python-sqlalchemy from guix
* add python mysqlclient
* remove pkg-resources from requirements
* add python-rpy2 from guix
* refactor code for species module
* pep8 formatting and refactor code
* add tests for generating correlation results
* lint correlation functions
* fix failing tests for show_corr_results
* add new correlation test data fix errors
* fix issues related to getting group samplelists
* refactor integration tests for correlation
* add todo for refactoring_wanted_inputs
* replace custom Attribute setter with SimpleNamespace (see the sketch after this list)
* comparison of sample r correlation results between genenetwork2 and genenetwork3
* delete AttributeSetter
* test request for /api/correlation/compute_correlation took 18.55710196495056 Seconds
* refactor tests and show_correlation results
* remove unnecessary comments and print statements
* edit requirement txt file
* api/correlation took 114.29814600944519 seconds for 20000 correlation results
- corr-type: lit
- corr-method: pearson
- corr-dataset: HC_M2_0606_P
* capture SQL_URI and GENENETWORK FILES path
* pep8 formatting edit && remove print statements
* delete filter_input function
update test and data for correlation
* add docstring for required correlation_input
* /api/correlation took 12.905632972717285 seconds
- corr-method: pearson
- corr-type: lit
- dataset: HX_M2_0606_P
- trait_id: 1444666
- p_range: (lower -> -0.60, upper -> 0.74)
- corr_return_results: 100
* update integration and unittest for correlation
* add simple markdown docs for correlation
* update docs
* add tests and catch for invalid correlation_input
* minor fix for api
* Remove jupyter from deps
* guix.scm: Remove duplicate entry
* guix.scm: Add extra action items as comments
* Trim requirements.txt file
Co-authored-by: BonfaceKilz <me@bonfacemunyoki.com>
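The sketch referenced in the list above, showing the SimpleNamespace swap (the values are taken from examples elsewhere in this log, purely for illustration):

    from types import SimpleNamespace

    # Instead of a custom AttributeSetter class, build attribute-style
    # objects straight from a dict.
    trait = SimpleNamespace(**{"trait_id": "1444666",
                               "dataset": "HC_M2_0606_P"})
    print(trait.trait_id, trait.dataset)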
|
|
Reviewed-by: BonfaceKilz <me@bonfacemunyoki.com>