41 files changed, 1988 insertions, 207 deletions
diff --git a/docs/dev/background_jobs.md b/docs/dev/background_jobs.md
new file mode 100644
index 0000000..1a41636
--- /dev/null
+++ b/docs/dev/background_jobs.md
@@ -0,0 +1,62 @@
+# Background Jobs
+
+We run background jobs for long-running processes, e.g. quality-assurance
+checks across multiple huge files, inserting huge datasets into databases,
+etc. The system needs to keep track of the progress of these jobs and
+communicate the state to the user whenever the user requests it.
+
+This document details some thoughts on how to handle these jobs, especially
+under failure conditions.
+
+We currently use Redis[^redis] to keep track of the state of the background
+processes.
+
+Every background job started will have a Redis[^redis] key with the prefix
+`gn-uploader:jobs`.
+
+## Users
+
+Currently (2024-10-23T13:29UTC-05:00), we do not track the user that started
+the job. Moving forward, we will track this information.
+
+We could have the keys be something like
+`gn-uploader:jobs:<user-id>:<job-id>`.
+
+Another option is to track any particular user's jobs with a key of the form
+`gn-uploader:users:<user-id>:jobs` and, in that case, have the job keys take
+the form `gn-uploader:jobs:<job-id>`. I (@fredmanglis) favour this option over
+having the user's ID in the jobs keys directly, since it provides a way to
+interact with **ALL** the jobs without indirecting through each specific user.
+This is a useful ability to have, especially for system-administration tasks.
+
+## Multiprocessing Within Jobs
+
+Some jobs, e.g. quality-assurance jobs, can run multiple threads/processes
+themselves. This brings up a problem because Redis[^redis] does not allow
+parallel access to a key, especially for writing.
+
+We also do not want to create bottlenecks by writing to the same key from
+multiple threads/processes.
+
+The design I have currently come up with, which might work, is as follows:
+
+- At any point just before multiple threads/processes are started, a list of
+  new keys, each of which will collect the output from a single thread, will
+  be built.
+- These keys are recorded in the parent's redis key data.
+- The threads/processes are started and do whatever they need, pushing their
+  outputs to the appropriate keys within redis.
+
+The new keys for the child threads/processes could build on the same naming
+theme as the parent's key (see the sketch below).
+
+## Fetching Jobs Status
+
+Different jobs could have different requirements for handling/processing
+their outputs, and those of any children they might spawn. The system will
+need to provide a way to pass in the correct function/code to process the
+outputs at the point where the job status is requested.
+
+This implies that we need to track the type of job in order to be able to
+select the correct code for processing such output.
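+The sketch below (using redis-py) shows how the favoured key layout and the
+child-key bookkeeping could fit together. The key names and hash fields are
+illustrative only, not a settled schema:
+
+```python
+import uuid
+from redis import Redis
+
+PREFIX = "gn-uploader"
+
+def register_job(rconn: Redis, user_id: str, job_id: str, num_children: int = 0):
+    """Record a new job, plus output keys for any child workers it starts."""
+    job_key = f"{PREFIX}:jobs:{job_id}"
+    # Track the user's jobs in a separate set, so that administrative tasks
+    # can still scan all jobs under 'gn-uploader:jobs:*' without going
+    # through each user.
+    rconn.sadd(f"{PREFIX}:users:{user_id}:jobs", job_key)
+    # One output key per child thread/process, recorded in the parent's
+    # data, so that no two writers ever share a key.
+    child_keys = tuple(
+        f"{job_key}:children:{uuid.uuid4()}" for _ in range(num_children))
+    rconn.hset(job_key, mapping={
+        "status": "queued",
+        "child-keys": ",".join(child_keys)})
+    return job_key, child_keys
+```
+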
+## Links
+
+[^redis]: https://redis.io/
diff --git a/docs/dev/quality_assurance_on_csv_files.md b/docs/dev/quality_assurance_on_csv_files.md
new file mode 100644
index 0000000..02d63c9
--- /dev/null
+++ b/docs/dev/quality_assurance_on_csv_files.md
@@ -0,0 +1,52 @@
+# Quality Assurance/Control on CSV Files
+
+## Abbreviations
+
+- CSV files: Character-separated-values files — these are data files
+  structured in a table-like format, with a specific character chosen as the
+  column/field separator. The comma (,) is the most common field separator.
+  It is, however, possible to encounter files with other characters
+  separating the values.
+
+## General Pattern
+
+A general pattern has emerged when performing quality assurance on the data in
+CSV files — the pseudocode below shows the general pattern:
+
+```python
+def qc_function(filepath, …):
+    open(filepath, …)
+
+    headers = read_first_line(…)
+    perform_qc_on_headings(headers, …)
+
+    for each subsequent line in file:
+        perform_qc_on_first_column(line, …)
+
+        for each subsequent field in line:
+            perform_qc_on_field(field, …)
+```
+
+We want to list the errors found in each file, so it makes sense for the
+`perform_qc_on*` functions in the pseudocode above to return the list of
+errors found for each file.
+
+The actual quality assurance done on the headers, first column of data rows,
+and the fields can differ from one type of file to the next, but the structure
+remains relatively unchanged.
+
+This implies we could make use of a higher-order function that contains the
+general structure, with the actual QC steps passed in as functions that are
+called in the higher-order structuring function (a runnable sketch follows
+the improvements list below). This gives something like:
+
+```python
+def qc_function(filepath, headers_qc, first_column_qc, data_qc, …):
+    for line in file:
+        if line is a comment line:
+            skip line and continue iteration
+        if line is first non-comment line:
+            line is the header line
+            call headers_qc on fields in this line
+        if line is not first non-comment line:
+            line is data line
+            call first_column_qc on first field of line
+            call data_qc on each of the subsequent fields of the line
+
+    collect and return errors
+```
+
+## Improvements
+
+- Read the file in a separate generator function
+- Parallelize QC if many files are present
+- Add logging/output for user updates (how do we do this correctly?)
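+
+As a concrete rendering of the higher-order pattern above (a hedged sketch;
+the function and parameter names are illustrative, not settled API):
+
+```python
+from typing import Callable
+
+def run_qc_on_file(filepath: str,
+                   headers_qc: Callable[[list[str]], list[str]],
+                   first_column_qc: Callable[[str], list[str]],
+                   data_qc: Callable[[str], list[str]],
+                   separator: str = ",",
+                   comment_char: str = "#") -> list[str]:
+    """Apply the given QC functions to a CSV file, returning all errors found."""
+    errors: list[str] = []
+    seen_header = False
+    with open(filepath, "r", encoding="utf8") as infile:
+        for line in infile:
+            if line.startswith(comment_char):  # skip comment lines
+                continue
+            fields = [field.strip() for field in line.split(separator)]
+            if not seen_header:
+                # The first non-comment line is the header line.
+                errors += headers_qc(fields)
+                seen_header = True
+                continue
+            # Data line: QC the first column, then each subsequent field.
+            errors += first_column_qc(fields[0])
+            for field in fields[1:]:
+                errors += data_qc(field)
+    return errors
+```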
diff --git a/r_qtl/r_qtl2.py b/r_qtl/r_qtl2.py
index 9da4081..c6307f5 100644
--- a/r_qtl/r_qtl2.py
+++ b/r_qtl/r_qtl2.py
@@ -18,6 +18,10 @@ FILE_TYPES = (
     "geno", "founder_geno", "pheno", "covar",
     "phenocovar", "gmap", "pmap", "phenose")
 
+__CONTROL_FILE_ERROR_MESSAGE__ = (
+    "The zipped bundle that was provided does not contain a valid control file "
+    "in either JSON or YAML format.")
+
 
 def __special_file__(filename):
     """
@@ -72,6 +76,8 @@ def transpose_csv(
     def __read_by_line__(_path):
         with open(_path, "r", encoding="utf8") as infile:
             for line in infile:
+                if line.startswith("#"):
+                    continue
                 yield line
 
     transposed_data= (f"{linejoinerfn(items)}\n" for items in zip(*(
@@ -112,7 +118,7 @@ def __control_data_from_zipfile__(zfile: ZipFile) -> dict:
                        or filename.endswith(".json"))))
     num_files = len(files)
     if num_files == 0:
-        raise InvalidFormat("Expected a json or yaml control file.")
+        raise InvalidFormat(__CONTROL_FILE_ERROR_MESSAGE__)
 
     if num_files > 1:
         raise InvalidFormat("Found more than one possible control file.")
@@ -129,7 +135,6 @@
             else yaml.safe_load(zfile.read(files[0])))
     }
 
-
 def __control_data_from_dirpath__(dirpath: Path):
     """Load control data from a given directory path."""
     files = tuple(path for path in dirpath.iterdir()
@@ -137,7 +142,7 @@
                   and (path.suffix in (".yaml", ".json"))))
     num_files = len(files)
     if num_files == 0:
-        raise InvalidFormat("Expected a json or yaml control file.")
+        raise InvalidFormat(__CONTROL_FILE_ERROR_MESSAGE__)
 
     if num_files > 1:
         raise InvalidFormat("Found more than one possible control file.")
@@ -200,8 +205,8 @@ def control_data(control_src: Union[Path, ZipFile]) -> dict:
     if control_src.is_dir():
         return __cleanup__(__control_data_from_dirpath__(control_src))
     raise InvalidFormat(
-        "Expects either a zipfile.ZipFile object or a pathlib.Path object "
-        "pointing to a directory containing the R/qtl2 bundle.")
+        "Expects either a zipped bundle of files or a path-like object "
+        "pointing to the zipped R/qtl2 bundle.")
 
 
 def replace_na_strings(cdata, val):
@@ -549,3 +554,21 @@ def load_samples(zipfilepath: Union[str, Path],
             pass
 
     return tuple(samples)
+
+
+
+def read_text_file(filepath: Union[str, Path]) -> Iterator[str]:
+    """Read the raw text from a text file."""
+    with open(filepath, "r", encoding="utf8") as _file:
+        for line in _file:
+            yield line
+
+
+def read_csv_file(filepath: Union[str, Path],
+                  separator: str = ",",
+                  comment_char: str = "#") -> Iterator[tuple[str, ...]]:
+    """Read a file as a csv file."""
+    for line in read_text_file(filepath):
+        if line.startswith(comment_char):
+            continue
+        yield tuple(field.strip() for field in line.split(separator))
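
A quick illustration of the comment-aware reader added above (the file
contents here are hypothetical; `read_csv_file` is defined in
`r_qtl/r_qtl2.py` in this commit):

```python
from pathlib import Path
from r_qtl.r_qtl2 import read_csv_file

# Suppose data.csv contains:
#   # this comment line is skipped
#   id,geno1,geno2
#   BXD1,H,B
for row in read_csv_file(Path("data.csv")):
    print(row)  # ('id', 'geno1', 'geno2'), then ('BXD1', 'H', 'B')
```
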
diff --git a/r_qtl/r_qtl2_qc.py b/r_qtl/r_qtl2_qc.py
index 7b26b50..2d9e9a8 100644
--- a/r_qtl/r_qtl2_qc.py
+++ b/r_qtl/r_qtl2_qc.py
@@ -95,7 +95,7 @@ def missing_files(bundlesrc: Union[Path, ZipFile]) -> tuple[tuple[str, str], ...
         "pointing to a directory containing the R/qtl2 bundle.")
 
 
-def validate_bundle(zfile: ZipFile):
+def validate_bundle(zfile: Union[Path, ZipFile]):
     """Ensure the R/qtl2 bundle is valid."""
     missing = missing_files(zfile)
     if len(missing) > 0:
diff --git a/scripts/cli_parser.py b/scripts/cli_parser.py
index 308ee4b..d42ae66 100644
--- a/scripts/cli_parser.py
+++ b/scripts/cli_parser.py
@@ -19,6 +19,12 @@ def init_cli_parser(program: str, description: Optional[str] = None) -> Argument
         type=int,
         default=86400,
         help="How long to keep any redis keys around.")
+    parser.add_argument(
+        "--loglevel",
+        type=str,
+        default="INFO",
+        choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
+        help="The severity of events to track with the logger.")
     return parser
 
 def add_global_data_arguments(parser: ArgumentParser) -> ArgumentParser:
diff --git a/scripts/process_rqtl2_bundle.py b/scripts/process_rqtl2_bundle.py
index 20cfd3b..ade9862 100644
--- a/scripts/process_rqtl2_bundle.py
+++ b/scripts/process_rqtl2_bundle.py
@@ -2,6 +2,7 @@
 import sys
 import uuid
 import json
+import argparse
 import traceback
 from typing import Any
 from pathlib import Path
@@ -94,10 +95,11 @@ def process_bundle(dbconn: mdb.Connection,
             logger.info("Processing geno files.")
             genoexit = install_genotypes(
                 dbconn,
-                meta["speciesid"],
-                meta["populationid"],
-                meta["geno-dataset-id"],
-                Path(meta["rqtl2-bundle-file"]),
+                argparse.Namespace(
+                    speciesid=meta["speciesid"],
+                    populationid=meta["populationid"],
+                    datasetid=meta["geno-dataset-id"],
+                    rqtl2bundle=Path(meta["rqtl2-bundle-file"])),
                 logger)
             if genoexit != 0:
                 raise Exception("Processing 'geno' file failed.")
@@ -109,10 +111,11 @@ def process_bundle(dbconn: mdb.Connection,
         if has_pheno_file(thejob):
             phenoexit = install_pheno_files(
                 dbconn,
-                meta["speciesid"],
-                meta["platformid"],
-                meta["probe-dataset-id"],
-                Path(meta["rqtl2-bundle-file"]),
+                argparse.Namespace(
+                    speciesid=meta["speciesid"],
+                    platformid=meta["platformid"],
+                    # `install_pheno_files` reads `args.datasetid`, so use
+                    # that name here rather than `dataset_id`.
+                    datasetid=meta["probe-dataset-id"],
+                    rqtl2bundle=Path(meta["rqtl2-bundle-file"])),
                 logger)
             if phenoexit != 0:
                 raise Exception("Processing 'pheno' file failed.")
diff --git a/scripts/qc_on_rqtl2_bundle2.py b/scripts/qc_on_rqtl2_bundle2.py
new file mode 100644
index 0000000..7e5d253
--- /dev/null
+++ b/scripts/qc_on_rqtl2_bundle2.py
@@ -0,0 +1,346 @@
+"""Run Quality Control checks on R/qtl2 bundle."""
+import os
+import sys
+import json
+from time import sleep
+from pathlib import Path
+from zipfile import ZipFile
+from argparse import Namespace
+from datetime import timedelta
+import multiprocessing as mproc
+from functools import reduce, partial
+from logging import Logger, getLogger, StreamHandler
+from typing import Union, Sequence, Callable, Iterator
+
+import MySQLdb as mdb
+from redis import Redis
+
+from quality_control.errors import InvalidValue
+from quality_control.checks import decimal_points_error
+
+from uploader import jobs
+from uploader.db_utils import database_connection
+from uploader.check_connections import check_db, check_redis
+
+from r_qtl import r_qtl2 as rqtl2
+from r_qtl import r_qtl2_qc as rqc
+from r_qtl import exceptions as rqe
+from r_qtl import fileerrors as rqfe
+
+from scripts.process_rqtl2_bundle import parse_job
+from scripts.redis_logger import setup_redis_logger
+from scripts.cli_parser import init_cli_parser, add_global_data_arguments
+from scripts.rqtl2.bundleutils import build_line_joiner, build_line_splitter
+
+
+def check_for_missing_files(
+        rconn: Redis, fqjobid: str, extractpath: Path, logger: Logger) -> bool:
+    """Check that all files listed in the control file do actually exist."""
+    logger.info("Checking for missing files.")
+    missing = rqc.missing_files(extractpath)
+    # add_to_errors(rconn, fqjobid, "errors-generic", tuple(
+    #     rqfe.MissingFile(
+    #         mfile[0], mfile[1], (
+    #             f"File '{mfile[1]}' is listed in the control file under "
+    #             f"the '{mfile[0]}' key, but it does not actually exist in "
+    #             "the bundle."))
+    #     for mfile in missing))
+    if len(missing) > 0:
+        logger.error("Missing files in the bundle!")
+        return True
+    return False
+
+
+def open_file(file_: Path) -> Iterator:
+    """Open file and return one line at a time."""
+    with open(file_, "r", encoding="utf8") as infile:
+        for line in infile:
+            yield line
+
+
+def check_markers(
+        filename: str,
+        row: tuple[str, ...],
+        save_error: Callable[[rqfe.InvalidValue], rqfe.InvalidValue]
+) -> tuple[rqfe.InvalidValue, ...]:
+    """Check that the markers are okay"""
+    errors = tuple()
+    counts = {}
+    for marker in row:
+        counts = {**counts, marker: counts.get(marker, 0) + 1}
+        if marker is None or marker == "":
+            errors = errors + (save_error(rqfe.InvalidValue(
+                filename,
+                "markers",
+                "-",
+                marker,
+                "A marker MUST be a valid value.")),)
+
+    return errors + tuple(
+        save_error(rqfe.InvalidValue(
+            filename,
+            "markers",
+            key,
+            key,
+            f"Marker '{key}' was repeated {value} times"))
+        for key,value in counts.items() if value > 1)
+
+
+def check_geno_line(
+        filename: str,
+        headers: tuple[str, ...],
+        row: tuple[Union[str, None], ...],
+        cdata: dict,
+        save_error: Callable[[rqfe.InvalidValue], rqfe.InvalidValue]
+) -> tuple[rqfe.InvalidValue, ...]:
+    """Check that the geno line is correct."""
+    errors = tuple()
+    # Verify that line has same number of columns as headers
+    if len(headers) != len(row):
+        errors = errors + (save_error(rqfe.InvalidValue(
+            filename,
+            headers[0],
+            row[0],
+            row[0],
+            "Every line MUST have the same number of columns.")),)
+
+    # First column is the individuals/cases/samples
+    if not bool(row[0]):
+        errors = errors + (save_error(rqfe.InvalidValue(
+            filename,
+            headers[0],
+            row[0],
+            row[0],
+            "The sample/case MUST be a valid value.")),)
+
+    def __process_value__(val):
+        if val in cdata["na.strings"]:
+            return None
+        if val in cdata["alleles"]:
+            return cdata["genotypes"][val]
+
+    genocode = cdata.get("genotypes", {})
+    for coltitle, cellvalue in zip(headers[1:],row[1:]):
+        if (
+                bool(genocode) and
+                cellvalue is not None and
+                cellvalue not in genocode.keys()
+        ):
+            errors = errors + (save_error(rqfe.InvalidValue(
+                filename, row[0], coltitle, cellvalue,
+                f"Value '{cellvalue}' is invalid. Expected one of "
+                f"'{', '.join(genocode.keys())}'.")),)
+
+    return errors
+
+
+def push_file_error_to_redis(rconn: Redis, key: str, error: InvalidValue) -> InvalidValue:
+    """Push the file error to redis as a JSON string
+
+    Parameters
+    ----------
+    rconn: Connection to redis
+    key: The name of the list where we push the errors
+    error: The file error to save
+
+    Returns
+    -------
+    Returns the file error it saved
+    """
+    if bool(error):
+        rconn.rpush(key, json.dumps(error._asdict()))
+    return error
+
+
+def file_errors_and_details(
+        redisargs: dict[str, str],
+        file_: Path,
+        filetype: str,
+        cdata: dict,
+        linesplitterfn: Callable,
+        linejoinerfn: Callable,
+        headercheckers: tuple[Callable, ...],
+        bodycheckers: tuple[Callable, ...]
+) -> dict:
+    """Compute errors, and other file metadata."""
+    errors = tuple()
+    if cdata[f"{filetype}_transposed"]:
+        rqtl2.transpose_csv_with_rename(file_, linesplitterfn, linejoinerfn)
+
+    with Redis.from_url(redisargs["redisuri"], decode_responses=True) as rconn:
+        save_error_fn = partial(push_file_error_to_redis,
+                                rconn,
+                                error_list_name(filetype, file_.name))
+        for lineno, line in enumerate(open_file(file_), start=1):
+            row = linesplitterfn(line)
+            if lineno == 1:
+                headers = tuple(row)
+                errors = errors + reduce(
+                    lambda errs, fnct: errs + fnct(
+                        file_.name, row[1:], save_error_fn),
+                    headercheckers,
+                    tuple())
+                continue
+
+            errors = errors + reduce(
+                lambda errs, fnct: errs + fnct(
+                    file_.name, headers, row, cdata, save_error_fn),
+                bodycheckers,
+                tuple())
+
+        filedetails = {
+            "filename": file_.name,
+            "filesize": os.stat(file_).st_size,
+            "linecount": lineno
+        }
+        rconn.hset(redisargs["fqjobid"],
+                   f"file-details:{filetype}:{file_.name}",
+                   json.dumps(filedetails))
+    return {**filedetails, "errors": errors}
+
+
+def error_list_name(filetype: str, filename: str):
+    """Compute the name of the list where the errors will be pushed.
+
+    Parameters
+    ----------
+    filetype: The type of file. One of `r_qtl.r_qtl2.FILE_TYPES`
+    filename: The name of the file.
+    """
+    return f"errors:{filetype}:{filename}"
+
+
+def check_for_geno_errors(
+        redisargs: dict[str, str],
+        extractdir: Path,
+        cdata: dict,
+        linesplitterfn: Callable[[str], tuple[Union[str, None], ...]],
+        linejoinerfn: Callable[[tuple[Union[str, None], ...]], str],
+        logger: Logger
+) -> bool:
+    """Check for errors in genotype files."""
+    if "geno" in cdata or "founder_geno" in cdata:
+        genofiles = tuple(
+            extractdir.joinpath(fname) for fname in cdata.get("geno", []))
+        fgenofiles = tuple(
+            extractdir.joinpath(fname) for fname in cdata.get("founder_geno", []))
+        filetype_file_pairs = (
+            tuple(("geno", file_) for file_ in genofiles)
+            + tuple(("founder_geno", file_) for file_ in fgenofiles))
+        with Redis.from_url(redisargs["redisuri"], decode_responses=True) as rconn:
+            error_list_names = [
+                # Use each file's actual type, so that 'founder_geno' errors
+                # land in the same lists that are checked and expired below.
+                error_list_name(ftype, file_.name)
+                for ftype, file_ in filetype_file_pairs]
+            for list_name in error_list_names:
+                rconn.delete(list_name)
+            rconn.hset(
+                redisargs["fqjobid"],
+                "geno-errors-lists",
+                json.dumps(error_list_names))
+            processes = [
+                mproc.Process(target=file_errors_and_details,
+                              args=(
+                                  redisargs,
+                                  file_,
+                                  ftype,
+                                  cdata,
+                                  linesplitterfn,
+                                  linejoinerfn,
+                                  (check_markers,),
+                                  (check_geno_line,))
+                              )
+                for ftype, file_ in filetype_file_pairs
+            ]
+            for process in processes:
+                process.start()
+            # Set expiry for any created error lists
+            for key in error_list_names:
+                rconn.expire(name=key,
+                             time=timedelta(seconds=redisargs["redisexpiry"]))
+            # Wait for the children to finish before checking their results.
+            for process in processes:
+                process.join()
+
+            # TODO: Add the errors to redis
+            if any(rconn.llen(errlst) > 0 for errlst in error_list_names):
+                logger.error("At least one of the 'geno' files has (an) error(s).")
+                return True
+            logger.info("No error(s) found in any of the 'geno' files.")
+
+    else:
+        logger.info("No 'geno' files to check.")
+
+    return False
+
+
+# def check_for_pheno_errors(...):
+#     """Check for errors in phenotype files."""
+#     pass
+
+
+# def check_for_phenose_errors(...):
+#     """Check for errors in phenotype, standard-error files."""
+#     pass
+
+
+# def check_for_phenocovar_errors(...):
+#     """Check for errors in phenotype-covariates files."""
+#     pass
+
+
+def run_qc(rconn: Redis, args: Namespace, fqjobid: str, logger: Logger) -> int:
+    """Run quality control checks on R/qtl2 bundles."""
+    thejob = parse_job(rconn, args.redisprefix, args.jobid)
+    logger.debug("The job: %s", thejob)
+    jobmeta = thejob["job-metadata"]
+    inpath = Path(jobmeta["rqtl2-bundle-file"])
+    extractdir = inpath.parent.joinpath(f"{inpath.name}__extraction_dir")
+    with ZipFile(inpath, "r") as zfile:
+        rqtl2.extract(zfile, extractdir)
+
+    ### BEGIN: The quality control checks ###
+    cdata = rqtl2.control_data(extractdir)
+    splitter = build_line_splitter(cdata)
+    joiner = build_line_joiner(cdata)
+
+    redisargs = {
+        "fqjobid": fqjobid,
+        "redisuri": args.redisuri,
+        "redisexpiry": args.redisexpiry
+    }
+    check_for_missing_files(rconn, fqjobid, extractdir, logger)
+    # check_for_pheno_errors(...)
+    check_for_geno_errors(redisargs, extractdir, cdata, splitter, joiner, logger)
+    # check_for_phenose_errors(...)
+    # check_for_phenocovar_errors(...)
+    ### END: The quality control checks ###
+
+    def __fetch_errors__(rkey: str) -> tuple:
+        return tuple(json.loads(rconn.hget(fqjobid, rkey) or "[]"))
+
+    return (1 if any((
+        bool(__fetch_errors__(key))
+        for key in
+        ("errors-geno", "errors-pheno", "errors-phenos", "errors-phenocovar")))
+            else 0)
+
+
+if __name__ == "__main__":
+    def main():
+        """Enter R/qtl2 bundle QC runner."""
+        args = add_global_data_arguments(init_cli_parser(
+            "qc-on-rqtl2-bundle", "Run QC on R/qtl2 bundle.")).parse_args()
+        check_redis(args.redisuri)
+        check_db(args.databaseuri)
+
+        logger = getLogger("qc-on-rqtl2-bundle")
+        logger.addHandler(StreamHandler(stream=sys.stderr))
+        logger.setLevel("DEBUG")
+
+        fqjobid = jobs.job_key(args.redisprefix, args.jobid)
+        with Redis.from_url(args.redisuri, decode_responses=True) as rconn:
+            logger.addHandler(setup_redis_logger(
+                rconn, fqjobid, f"{fqjobid}:log-messages",
+                args.redisexpiry))
+
+            exitcode = run_qc(rconn, args, fqjobid, logger)
+            rconn.hset(
+                jobs.job_key(args.redisprefix, args.jobid), "exitcode", exitcode)
+            return exitcode
+
+    sys.exit(main())
diff --git a/scripts/redis_logger.py b/scripts/redis_logger.py
index 2ae682b..d3fde5f 100644
--- a/scripts/redis_logger.py
+++ b/scripts/redis_logger.py
@@ -1,5 +1,6 @@
 """Utilities to log to redis for our worker scripts."""
 import logging
+from typing import Optional
 
 from redis import Redis
 
@@ -26,6 +27,26 @@ class RedisLogger(logging.Handler):
         self.redisconnection.rpush(self.messageslistname, self.format(record))
         self.redisconnection.expire(self.messageslistname, self.expiry)
 
+class RedisMessageListHandler(logging.Handler):
+    """Send messages to specified redis list."""
+    def __init__(self,
+                 rconn: Redis,
+                 fullyqualifiedkey: str,
+                 loglevel: int = logging.NOTSET,
+                 expiry: Optional[int] = 86400):
+        super().__init__(loglevel)
+        self.redisconnection = rconn
+        self.fullyqualifiedkey = fullyqualifiedkey
+        self.expiry = expiry
+
+    def emit(self, record):
+        """Log out to specified `fullyqualifiedkey`."""
+        self.redisconnection.rpush(self.fullyqualifiedkey, self.format(record))
+        if bool(self.expiry):
+            self.redisconnection.expire(self.fullyqualifiedkey, self.expiry)
+        else:
+            self.redisconnection.persist(self.fullyqualifiedkey)
+
 def setup_redis_logger(rconn: Redis,
                        fullyqualifiedjobid: str,
                        job_messagelist: str,
diff --git a/scripts/rqtl2/bundleutils.py b/scripts/rqtl2/bundleutils.py
new file mode 100644
index 0000000..17faa7c
--- /dev/null
+++ b/scripts/rqtl2/bundleutils.py
@@ -0,0 +1,44 @@
+"""Common utilities to operate in R/qtl2 bundles."""
+from typing import Union, Callable
+
+def build_line_splitter(cdata: dict) -> Callable[[str], tuple[Union[str, None], ...]]:
+    """Build and return a function to use to split data in the files.
+
+    Parameters
+    ----------
+    cdata: A dict holding the control information included with the R/qtl2
+        bundle.
+
+    Returns
+    -------
+    A function that takes a string and returns a tuple of strings.
+    """
+    separator = cdata["sep"]
+    na_strings = cdata["na.strings"]
+    def __splitter__(line: str) -> tuple[Union[str, None], ...]:
+        return tuple(
+            item if item not in na_strings else None
+            for item in
+            (field.strip() for field in line.strip().split(separator)))
+    return __splitter__
+
+
+def build_line_joiner(cdata: dict) -> Callable[[tuple[Union[str, None], ...]], str]:
+    """Build and return a function to use to join data in the files.
+
+    Parameters
+    ----------
+    cdata: A dict holding the control information included with the R/qtl2
+        bundle.
+
+    Returns
+    -------
+    A function that takes a tuple of (possibly missing) string values and
+    returns a single joined string.
+    """
+    separator = cdata["sep"]
+    na_strings = cdata["na.strings"]
+    def __joiner__(row: tuple[Union[str, None], ...]) -> str:
+        return separator.join(
+            (na_strings[0] if item is None else item)
+            for item in row)
+    return __joiner__
diff --git a/scripts/rqtl2/cli_parser.py b/scripts/rqtl2/cli_parser.py
index bcc7a4f..9bb60a3 100644
--- a/scripts/rqtl2/cli_parser.py
+++ b/scripts/rqtl2/cli_parser.py
@@ -2,12 +2,22 @@
 from pathlib import Path
 from argparse import ArgumentParser
 
-def add_common_arguments(parser: ArgumentParser) -> ArgumentParser:
-    """Add common arguments to the CLI parser."""
-    parser.add_argument("datasetid",
-                        type=int,
-                        help="The dataset to which the data belongs.")
+def add_bundle_argument(parser: ArgumentParser) -> ArgumentParser:
+    """Add the `rqtl2bundle` argument."""
     parser.add_argument("rqtl2bundle",
                         type=Path,
                         help="Path to R/qtl2 bundle zip file.")
     return parser
+
+
+def add_datasetid_argument(parser: ArgumentParser) -> ArgumentParser:
+    """Add the `datasetid` argument."""
+    parser.add_argument("datasetid",
+                        type=int,
+                        help="The dataset to which the data belongs.")
+    return parser
+
+
+def add_common_arguments(parser: ArgumentParser) -> ArgumentParser:
+    """Add common arguments to the CLI parser."""
+    return add_bundle_argument(add_datasetid_argument(parser))
diff --git a/scripts/rqtl2/entry.py b/scripts/rqtl2/entry.py
index b7fb68e..bc4cd9f 100644
--- a/scripts/rqtl2/entry.py
+++ b/scripts/rqtl2/entry.py
@@ -1,5 +1,5 @@
 """Build common script-entry structure."""
-from logging import Logger
+import logging
 from typing import Callable
 from argparse import Namespace
 
@@ -12,12 +12,19 @@ from uploader.check_connections import check_db, check_redis
 
 from scripts.redis_logger import setup_redis_logger
 
-def build_main(args: Namespace,
-               run_fn: Callable[[Connection, Namespace], int],
-               logger: Logger,
-               loglevel: str = "INFO") -> Callable[[],int]:
+def build_main(
+        args: Namespace,
+        run_fn: Callable[[Connection, Namespace, logging.Logger], int],
+        loggername: str
+) -> Callable[[],int]:
     """Build a function to be used as an entry-point for scripts."""
     def main():
+        logging.basicConfig(
+            format=(
+                "%(asctime)s - %(levelname)s %(name)s: "
+                "(%(pathname)s: %(lineno)d) %(message)s"),
+            level=args.loglevel)
+        logger = logging.getLogger(loggername)
         check_db(args.databaseuri)
         check_redis(args.redisuri)
         if not args.rqtl2bundle.exists():
@@ -26,13 +33,12 @@
         with (Redis.from_url(args.redisuri, decode_responses=True) as rconn,
               database_connection(args.databaseuri) as dbconn):
-            fqjobid = jobs.job_key(jobs.jobsnamespace(), args.jobid)
+            fqjobid = jobs.job_key(args.redisprefix, args.jobid)
             logger.addHandler(setup_redis_logger(
                 rconn,
                 fqjobid,
                 f"{fqjobid}:log-messages",
                 args.redisexpiry))
-            logger.setLevel(loglevel)
-            return run_fn(dbconn, args)
+            return run_fn(dbconn, args, logger)
 
     return main
diff --git a/scripts/rqtl2/install_genotypes.py b/scripts/rqtl2/install_genotypes.py
index 6b89142..20a19da 100644
--- a/scripts/rqtl2/install_genotypes.py
+++ b/scripts/rqtl2/install_genotypes.py
@@ -1,11 +1,11 @@
 """Load genotypes from R/qtl2 bundle into the database."""
 import sys
+import argparse
 import traceback
-from pathlib import Path
 from zipfile import ZipFile
 from functools import reduce
 from typing import Iterator, Optional
-from logging import Logger, getLogger, StreamHandler
+from logging import Logger, getLogger
 
 import MySQLdb as mdb
 from MySQLdb.cursors import DictCursor
@@ -19,6 +19,8 @@
 from scripts.rqtl2.entry import build_main
 from scripts.rqtl2.cli_parser import add_common_arguments
 from scripts.cli_parser import init_cli_parser, add_global_data_arguments
 
+__MODULE__ = "scripts.rqtl2.install_genotypes"
+
 def insert_markers(
         dbconn: mdb.Connection,
         speciesid: int,
@@ -185,13 +187,12 @@ def cross_reference_genotypes(
 
 def install_genotypes(#pylint: disable=[too-many-arguments, too-many-locals]
         dbconn: mdb.Connection,
-        speciesid: int,
-        populationid: int,
-        datasetid: int,
-        rqtl2bundle: Path,
+        args: argparse.Namespace,
         logger: Logger = getLogger(__name__)
) -> int:
     """Load any existing genotypes into the database."""
+    (speciesid, populationid, datasetid, rqtl2bundle) = (
+        args.speciesid, args.populationid, args.datasetid, args.rqtl2bundle)
     count = 0
     with ZipFile(str(rqtl2bundle.absolute()), "r") as zfile:
         try:
@@ -253,15 +254,5 @@ if __name__ == "__main__":
 
         return parser.parse_args()
 
-    thelogger = getLogger("install_genotypes")
-    thelogger.addHandler(StreamHandler(stream=sys.stderr))
-    main = build_main(
-        cli_args(),
-        lambda dbconn, args: install_genotypes(dbconn,
-                                               args.speciesid,
-                                               args.populationid,
-                                               args.datasetid,
-                                               args.rqtl2bundle),
-        thelogger,
-        "INFO")
+    main = build_main(cli_args(), install_genotypes, __MODULE__)
     sys.exit(main())
diff --git a/scripts/rqtl2/install_phenos.py b/scripts/rqtl2/install_phenos.py
index b5cab8e..a6e9fb2 100644
--- a/scripts/rqtl2/install_phenos.py
+++ b/scripts/rqtl2/install_phenos.py
@@ -1,10 +1,10 @@
 """Load pheno from R/qtl2 bundle into the database."""
 import sys
+import argparse
 import traceback
-from pathlib import Path
 from zipfile import ZipFile
 from functools import reduce
-from logging import Logger, getLogger, StreamHandler
+from logging import Logger, getLogger
 
 import MySQLdb as mdb
 from MySQLdb.cursors import DictCursor
@@ -18,6 +18,8 @@ from r_qtl import r_qtl2_qc as rqc
 
 from functional_tools import take
 
+__MODULE__ = "scripts.rqtl2.install_phenos"
+
 def insert_probesets(dbconn: mdb.Connection,
                      platformid: int,
                      phenos: tuple[str, ...]) -> int:
@@ -95,12 +97,11 @@ def cross_reference_probeset_data(dbconn: mdb.Connection,
 
 def install_pheno_files(#pylint: disable=[too-many-arguments, too-many-locals]
         dbconn: mdb.Connection,
-        speciesid: int,
-        platformid: int,
-        datasetid: int,
-        rqtl2bundle: Path,
+        args: argparse.Namespace,
         logger: Logger = getLogger()) -> int:
     """Load data in `pheno` files and other related files into the database."""
+    (speciesid, platformid, datasetid, rqtl2bundle) = (
+        args.speciesid, args.platformid, args.datasetid, args.rqtl2bundle)
     with ZipFile(str(rqtl2bundle), "r") as zfile:
         try:
             rqc.validate_bundle(zfile)
@@ -155,16 +156,5 @@ if __name__ == "__main__":
 
         return parser.parse_args()
 
-    thelogger = getLogger("install_phenos")
-    thelogger.addHandler(StreamHandler(stream=sys.stderr))
-    main = build_main(
-        cli_args(),
-        lambda dbconn, args: install_pheno_files(dbconn,
-                                                 args.speciesid,
-                                                 args.platformid,
-                                                 args.datasetid,
-                                                 args.rqtl2bundle,
-                                                 thelogger),
-        thelogger,
-        "DEBUG")
+    main = build_main(cli_args(), install_pheno_files, __MODULE__)
     sys.exit(main())
diff --git a/scripts/rqtl2/phenotypes_qc.py b/scripts/rqtl2/phenotypes_qc.py
new file mode 100644
index 0000000..83828e4
--- /dev/null
+++ b/scripts/rqtl2/phenotypes_qc.py
@@ -0,0 +1,468 @@
+"""Run quality control on phenotypes-specific files in the bundle."""
+import sys
+import uuid
+import shutil
+import logging
+import tempfile
+import contextlib
+from pathlib import Path
+from logging import Logger
+from zipfile import ZipFile
+from argparse import Namespace
+import multiprocessing as mproc
+from functools import reduce, partial
+from typing import Union, Iterator, Callable, Optional, Sequence
+
+import MySQLdb as mdb
+from redis import Redis
+
+from r_qtl import r_qtl2 as rqtl2
+from r_qtl import r_qtl2_qc as rqc
+from r_qtl import exceptions as rqe
+from r_qtl.fileerrors import InvalidValue
+
+from functional_tools import chain
+
+from quality_control.checks import decimal_places_pattern
+
+from uploader.files import sha256_digest_over_file
+from uploader.samples.models import samples_by_species_and_population
+
+from scripts.rqtl2.entry import build_main
+from scripts.redis_logger import RedisMessageListHandler
+from scripts.rqtl2.cli_parser import add_bundle_argument
+from scripts.cli_parser import init_cli_parser, add_global_data_arguments
+from scripts.rqtl2.bundleutils import build_line_joiner, build_line_splitter
+
+__MODULE__ = "scripts.rqtl2.phenotypes_qc"
+
+def validate(phenobundle: Path, logger: Logger) -> dict:
+    """Check that the bundle is generally valid"""
+    try:
+        rqc.validate_bundle(phenobundle)
+    except rqe.RQTLError as rqtlerr:
+        # logger.error("Bundle file validation failed!", exc_info=True)
+        return {
+            "skip": True,
+            "logger": logger,
+            "phenobundle": phenobundle,
+            "errors": (" ".join(rqtlerr.args),)
+        }
+    return {
+        "errors": tuple(),
+        "skip": False,
+        "phenobundle": phenobundle,
+        "logger": logger
+    }
+
+
+def check_for_mandatory_pheno_keys(
+        phenobundle: Path,
+        logger: Logger,
+        **kwargs
+) -> dict:
+    """Check that the mandatory keys exist for phenotypes."""
+    if kwargs.get("skip", False):
+        return {
+            **kwargs,
+            "logger": logger,
+            "phenobundle": phenobundle
+        }
+
+    _mandatory_keys = ("pheno", "phenocovar")
+    _cdata = rqtl2.read_control_file(phenobundle)
+    _errors = kwargs.get("errors", tuple()) + tuple(
+        f"The expected '{key}' file(s) are not declared in the bundle."
+        for key in _mandatory_keys if key not in _cdata.keys())
+    return {
+        **kwargs,
+        "logger": logger,
+        "phenobundle": phenobundle,
+        "errors": _errors,
+        "skip": len(_errors) > 0
+    }
+
+
+def check_for_averages_files(
+        phenobundle: Path,
+        logger: Logger,
+        **kwargs
+) -> dict:
+    """Check that averages files appear together"""
+    if kwargs.get("skip", False):
+        return {
+            **kwargs,
+            "logger": logger,
+            "phenobundle": phenobundle
+        }
+
+    _together = (("phenose", "phenonum"), ("phenonum", "phenose"))
+    _cdata = rqtl2.read_control_file(phenobundle)
+    _errors = kwargs.get("errors", tuple()) + tuple(
+        f"'{first}' is defined in the control file but there is no "
+        f"corresponding '{second}'"
+        for first, second in _together
+        if ((first in _cdata.keys()) and (second not in _cdata.keys())))
+    return {
+        **kwargs,
+        "logger": logger,
+        "phenobundle": phenobundle,
+        "errors": _errors,
+        "skip": len(_errors) > 0
+    }
+
+
+def extract_bundle(
+        bundle: Path, workdir: Path, jobid: uuid.UUID
+) -> tuple[Path, tuple[Path, ...]]:
+    """Extract the bundle."""
+    with ZipFile(bundle) as zfile:
+        extractiondir = workdir.joinpath(
+            f"{str(jobid)}-{sha256_digest_over_file(bundle)}-{bundle.name}")
+        return extractiondir, rqtl2.extract(zfile, extractiondir)
+
+
+def undo_transpose(filetype: str, cdata: dict, extractiondir):
+    """Undo transposition of all files of type `filetype` in the bundle."""
+    if len(cdata.get(filetype, [])) > 0 and cdata.get(f"{filetype}_transposed", False):
+        files = (extractiondir.joinpath(_file) for _file in cdata[filetype])
+        for _file in files:
+            rqtl2.transpose_csv_with_rename(
+                _file,
+                build_line_splitter(cdata),
+                build_line_joiner(cdata))
+
+
+@contextlib.contextmanager
+def redis_logger(
+        redisuri: str, loggername: str, filename: str, fqkey: str
+) -> Iterator[logging.Logger]:
+    """Build a Redis message-list logger."""
+    rconn = Redis.from_url(redisuri, decode_responses=True)
+    logger = logging.getLogger(loggername)
+    logger.propagate = False
+    handler = RedisMessageListHandler(
+        rconn,
+        fullyqualifiedkey(fqkey, filename))#type: ignore[arg-type]
+    handler.setFormatter(logging.getLogger().handlers[0].formatter)
+    logger.addHandler(handler)
+    try:
+        yield logger
+    finally:
+        rconn.close()
+
+
+def qc_phenocovar_file(
+        filepath: Path,
+        redisuri,
+        fqkey: str,
+        separator: str,
+        comment_char: str):
+    """Check that `phenocovar` files are structured correctly."""
+    with redis_logger(
+            redisuri,
+            f"{__MODULE__}.qc_phenocovar_file",
+            filepath.name,
+            fqkey) as logger:
+        logger.info("Running QC on file: %s", filepath.name)
+        _csvfile = rqtl2.read_csv_file(filepath, separator, comment_char)
+        _headings = tuple(heading.lower() for heading in next(_csvfile))
+        _errors: tuple[InvalidValue, ...] = tuple()
+        for heading in ("description", "units"):
+            if heading not in _headings:
+                # Append to, rather than replace, the errors collected so
+                # far, so a file missing both headings reports both.
+                _errors = _errors + (InvalidValue(
+                    filepath.name,
+                    "header row",
+                    "-",
+                    "-",
+                    (f"File {filepath.name} is missing the {heading} heading "
+                     "in the header line.")),)
+
+        def collect_errors(errors_and_linecount, line):
+            _errs, _lc = errors_and_linecount
+            logger.info("Testing record '%s'", line[0])
+            if len(line) != len(_headings):
+                _errs = _errs + (InvalidValue(
+                    filepath.name,
+                    line[0],
+                    "-",
+                    "-",
+                    (f"Record {_lc} in file {filepath.name} has a different "
+                     "number of columns than the number of headings")),)
+            _line = dict(zip(_headings, line))
+            # Use `.get(…)` to avoid a KeyError when the 'description'
+            # heading itself is missing (already reported above).
+            if not bool(_line.get("description")):
+                _errs = _errs + (
+                    InvalidValue(filepath.name,
+                                 _line[_headings[0]],
+                                 "description",
+                                 _line.get("description"),
+                                 "The description is not provided!"),)
+
+            return _errs, _lc+1
+
+        return {
+            filepath.name: dict(zip(
+                ("errors", "linecount"),
+                reduce(collect_errors, _csvfile, (_errors, 1))))
+        }
+
+
+def merge_dicts(*dicts):
+    """Merge multiple dicts into a single one."""
+    return reduce(lambda merged, dct: {**merged, **dct}, dicts, {})
+
+
+def decimal_points_error(# pylint: disable=[too-many-arguments]
+        filename: str,
+        rowtitle: str,
+        coltitle: str,
+        cellvalue: str,
+        message: str,
+        decimal_places: int = 1
+) -> Optional[InvalidValue]:
+    """Returns an error if the value does not meet the checks."""
+    if not bool(decimal_places_pattern(decimal_places).match(cellvalue)):
+        return InvalidValue(filename, rowtitle, coltitle, cellvalue, message)
+    return None
+
+
+def integer_error(
+        filename: str,
+        rowtitle: str,
+        coltitle: str,
+        cellvalue: str,
+        message: str
+) -> Optional[InvalidValue]:
+    """Returns an error if the value does not meet the checks."""
+    try:
+        value = int(cellvalue)
+        if value <= 0:
+            raise ValueError("Must be a non-zero, positive number.")
+        return None
+    except ValueError:
+        return InvalidValue(filename, rowtitle, coltitle, cellvalue, message)
+
+
+def qc_pheno_file(# pylint: disable=[too-many-arguments]
+        filepath: Path,
+        redisuri: str,
+        fqkey: str,
+        samples: tuple[str, ...],
+        phenonames: tuple[str, ...],
+        separator: str,
+        comment_char: str,
+        na_strings: Sequence[str],
+        error_fn: Callable = decimal_points_error
+):
+    """Run QC/QA on a `pheno` file."""
+    with redis_logger(
+            redisuri,
+            f"{__MODULE__}.qc_pheno_file",
+            filepath.name,
+            fqkey) as logger:
+        logger.info("Running QC on file: %s", filepath.name)
+        _csvfile = rqtl2.read_csv_file(filepath, separator, comment_char)
+        _headings: tuple[str, ...] = tuple(
+            heading.lower() for heading in next(_csvfile))
+        _errors: tuple[InvalidValue, ...] = tuple()
+
+        _absent = tuple(pheno for pheno in _headings[1:] if pheno not in phenonames)
+        if len(_absent) > 0:
+            _errors = _errors + (InvalidValue(
+                filepath.name,
+                "header row",
+                "-",
+                ", ".join(_absent),
+                (f"The phenotype names ({', '.join(_absent)}) do not exist in "
+                 "any of the provided phenocovar files.")),)
+
+        def collect_errors(errors_and_linecount, line):
+            _errs, _lc = errors_and_linecount
+            if line[0] not in samples:
+                _errs = _errs + (InvalidValue(
+                    filepath.name,
+                    line[0],
+                    _headings[0],
+                    line[0],
+                    (f"The sample named '{line[0]}' does not exist in the "
+                     "database. You will need to upload that first.")),)
+
+            for field, value in zip(_headings[1:], line[1:]):
+                if value in na_strings:
+                    continue
+                _err = error_fn(
+                    filepath.name,
+                    line[0],
+                    field,
+                    value)
+                _errs = _errs + ((_err,) if bool(_err) else tuple())
+
+            return _errs, _lc+1
+
+        return {
+            filepath.name: dict(zip(
+                ("errors", "linecount"),
+                reduce(collect_errors, _csvfile, (_errors, 1))))
+        }
+
+
+def phenotype_names(filepath: Path,
+                    separator: str,
+                    comment_char: str) -> tuple[str, ...]:
+    """Read phenotype names from `phenocovar` file."""
+    return reduce(lambda tpl, line: tpl + (line[0],),#type: ignore[arg-type, return-value]
+                  rqtl2.read_csv_file(filepath, separator, comment_char),
+                  tuple())[1:]
+
+def fullyqualifiedkey(
+        prefix: str,
+        rest: Optional[str] = None
+) -> Union[Callable[[str], str], str]:
+    """Compute fully qualified Redis key."""
+    if not bool(rest):
+        return lambda _rest: f"{prefix}:{_rest}"
+    return f"{prefix}:{rest}"
+
+def run_qc(# pylint: disable=[too-many-locals]
+        dbconn: mdb.Connection,
+        args: Namespace,
+        logger: Logger
+) -> int:
+    """Run quality control checks on the bundle."""
+    logger.debug("Beginning the quality assurance checks.")
+    results = check_for_averages_files(
+        **check_for_mandatory_pheno_keys(
+            **validate(args.rqtl2bundle, logger)))
+    errors = results.get("errors", tuple())
+    if len(errors) > 0:
+        logger.error("We found the following errors:\n%s",
+                     "\n".join(f" - {error}" for error in errors))
+        return 1
+    # Run QC on actual values
+    # Steps:
+    # - Extract file to specific directory
+    extractiondir, *_bundlefiles = extract_bundle(
+        args.rqtl2bundle, args.workingdir, args.jobid)
+
+    # - For every pheno, phenocovar, phenose, phenonum file, undo
+    #   transposition where relevant
+    cdata = rqtl2.control_data(extractiondir)
+    with mproc.Pool(mproc.cpu_count() - 1) as pool:
+        pool.starmap(
+            undo_transpose,
+            ((ftype, cdata, extractiondir)
+             for ftype in ("pheno", "phenocovar", "phenose", "phenonum")))
+
+    # - Fetch samples/individuals from database.
+    logger.debug("Fetching samples/individuals from the database.")
+    samples = tuple(#type: ignore[var-annotated]
+        item for item in set(reduce(
+            lambda acc, item: acc + (
+                item["Name"], item["Name2"], item["Symbol"], item["Alias"]),
+            samples_by_species_and_population(
+                dbconn, args.speciesid, args.populationid),
+            tuple()))
+        if bool(item))
+
+    # - Check that `description` and `units` is present in phenocovar for
+    #   all phenotypes
+    with mproc.Pool(mproc.cpu_count() - 1) as pool:
+        logger.debug("Check for errors in 'phenocovar' file(s).")
+        _phenocovar_qc_res = merge_dicts(*pool.starmap(qc_phenocovar_file, tuple(
+            (extractiondir.joinpath(_file),
+             args.redisuri,
+             chain(
+                 "phenocovar",
+                 fullyqualifiedkey(args.jobid),
+                 fullyqualifiedkey(args.redisprefix)),
+             cdata["sep"],
+             cdata["comment.char"])
+            for _file in cdata.get("phenocovar", []))))
+
+        # - Check all samples in pheno files exist in database
+        # - Check all phenotypes in pheno files exist in phenocovar files
+        # - Check all numeric values in pheno files
+        phenonames = tuple(set(
+            name for names in pool.starmap(phenotype_names, tuple(
+                (extractiondir.joinpath(_file), cdata["sep"], cdata["comment.char"])
+                for _file in cdata.get("phenocovar", [])))
+            for name in names))
+
+        dec_err_fn = partial(decimal_points_error, message=(
+            "Expected a non-negative number with at least one decimal "
+            "place."))
+
+        logger.debug("Check for errors in 'pheno' file(s).")
+        _pheno_qc_res = merge_dicts(*pool.starmap(qc_pheno_file, tuple((
+            extractiondir.joinpath(_file),
+            args.redisuri,
+            chain(
+                "pheno",
+                fullyqualifiedkey(args.jobid),
+                fullyqualifiedkey(args.redisprefix)),
+            samples,
+            phenonames,
+            cdata["sep"],
+            cdata["comment.char"],
+            cdata["na.strings"],
+            dec_err_fn
+        ) for _file in cdata.get("pheno", []))))
+
+        # - Check the 3 checks above for phenose and phenonum values too
+        # qc_phenose_files(…)
+        # qc_phenonum_files(…)
+        logger.debug("Check for errors in 'phenose' file(s).")
+        _phenose_qc_res = merge_dicts(*pool.starmap(qc_pheno_file, tuple((
+            extractiondir.joinpath(_file),
+            args.redisuri,
+            chain(
+                "phenose",
+                fullyqualifiedkey(args.jobid),
+                fullyqualifiedkey(args.redisprefix)),
+            samples,
+            phenonames,
+            cdata["sep"],
+            cdata["comment.char"],
+            cdata["na.strings"],
+            dec_err_fn
+        ) for _file in cdata.get("phenose", []))))
+
+        logger.debug("Check for errors in 'phenonum' file(s).")
+        _phenonum_qc_res = merge_dicts(*pool.starmap(qc_pheno_file, tuple((
+            extractiondir.joinpath(_file),
+            args.redisuri,
+            chain(
+                "phenonum",
+                fullyqualifiedkey(args.jobid),
+                fullyqualifiedkey(args.redisprefix)),
+            samples,
+            phenonames,
+            cdata["sep"],
+            cdata["comment.char"],
+            cdata["na.strings"],
+            partial(integer_error, message=(
+                "Expected a non-negative, non-zero integer value."))
+        ) for _file in cdata.get("phenonum", []))))
+
+    # - Delete all extracted files
+    shutil.rmtree(extractiondir)
+    raise NotImplementedError("WIP!")
+
+
+if __name__ == "__main__":
+    def cli_args():
+        """Process command-line arguments for `phenotypes_qc`."""
+        parser = add_bundle_argument(add_global_data_arguments(init_cli_parser(
+            program="PhenotypesQC",
+            description=(
+                "Perform Quality Control checks on a phenotypes bundle file"))))
+        parser.add_argument(
+            "--workingdir",
+            default=f"{tempfile.gettempdir()}/phenotypes_qc",
+            help=("The directory where this script will put its intermediate "
+                  "files."),
+            type=Path)
+        return parser.parse_args()
+
+    main = build_main(cli_args(), run_qc, __MODULE__)
+    sys.exit(main())
diff --git a/tests/qc_app/__init__.py b/tests/uploader/__init__.py
index e69de29..e69de29 100644
--- a/tests/qc_app/__init__.py
+++ b/tests/uploader/__init__.py
diff --git a/tests/qc_app/test_entry.py b/tests/uploader/test_entry.py
index 0c614a5..0c614a5 100644
--- a/tests/qc_app/test_entry.py
+++ b/tests/uploader/test_entry.py
diff --git a/tests/qc_app/test_expression_data_pages.py b/tests/uploader/test_expression_data_pages.py
index c2f7de1..c2f7de1 100644
--- a/tests/qc_app/test_expression_data_pages.py
+++ b/tests/uploader/test_expression_data_pages.py
diff --git a/tests/uploader/test_files.py b/tests/uploader/test_files.py
new file mode 100644
index 0000000..cb22fff
--- /dev/null
+++ b/tests/uploader/test_files.py
@@ -0,0 +1,17 @@
+"""Tests functions in the `uploader.files` module."""
+from pathlib import Path
+
+import pytest
+
+from uploader.files import sha256_digest_over_file
+
+@pytest.mark.unit_test
+@pytest.mark.parametrize(
+    "filepath,expectedhash",
+    ((Path("tests/test_data/average.tsv.zip"),
+      "a371c654c095c030edad468e1c3d6b176ea8adfbcd91a322afd37779044478d9"),
+     (Path("tests/test_data/standarderror.tsv"),
+      "a08332e0b06391d50eecb722f69d85fbdf374a2d77713ee879d3fd6c60419d55")))
+def test_sha256_digest_over_file(filepath: Path, expectedhash: str):
+    """Test the `sha256_digest_over_file` function."""
+    assert sha256_digest_over_file(filepath) == expectedhash
diff --git a/tests/qc_app/test_parse.py b/tests/uploader/test_parse.py
index 076c47c..076c47c 100644
--- a/tests/qc_app/test_parse.py
+++ b/tests/uploader/test_parse.py
diff --git a/tests/qc_app/test_progress_indication.py b/tests/uploader/test_progress_indication.py
index 14a1050..14a1050 100644
--- a/tests/qc_app/test_progress_indication.py
+++ b/tests/uploader/test_progress_indication.py
diff --git a/tests/qc_app/test_results_page.py b/tests/uploader/test_results_page.py
index 8c8379f..8c8379f 100644
--- a/tests/qc_app/test_results_page.py
+++ b/tests/uploader/test_results_page.py
diff --git a/tests/qc_app/test_uploads_with_zip_files.py b/tests/uploader/test_uploads_with_zip_files.py
index 1506cfa..1506cfa 100644
--- a/tests/qc_app/test_uploads_with_zip_files.py
+++ b/tests/uploader/test_uploads_with_zip_files.py
diff --git a/uploader/default_settings.py b/uploader/default_settings.py
index 26fe665..1acb247 100644
--- a/uploader/default_settings.py
+++ b/uploader/default_settings.py
@@ -2,15 +2,12 @@
 The default configuration file. The values here should be overridden in the
 actual configuration file used for the production and staging systems.
 """
-
-import os
-
-LOG_LEVEL = os.getenv("LOG_LEVEL", "WARNING")
+LOG_LEVEL = "WARNING"
 
 SECRET_KEY = b"<Please! Please! Please! Change This!>"
 UPLOAD_FOLDER = "/tmp/qc_app_files"
 REDIS_URL = "redis://"
 JOBS_TTL_SECONDS = 1209600 # 14 days
-GNQC_REDIS_PREFIX="GNQC"
+GNQC_REDIS_PREFIX="gn-uploader"
 
 SQL_URI = ""
 GN2_SERVER_URL = "https://genenetwork.org/"
diff --git a/uploader/files.py b/uploader/files.py
index b163612..d37a53e 100644
--- a/uploader/files.py
+++ b/uploader/files.py
@@ -1,7 +1,9 @@
 """Utilities to deal with uploaded files."""
 import hashlib
 from pathlib import Path
+from typing import Iterator
 from datetime import datetime
+
 from flask import current_app
 from werkzeug.utils import secure_filename
 
@@ -21,6 +23,27 @@ def save_file(fileobj: FileStorage, upload_dir: Path) -> Path:
     fileobj.save(filepath)
     return filepath
 
+
 def fullpath(filename: str):
     """Get a file's full path. This makes use of `flask.current_app`."""
     return Path(current_app.config["UPLOAD_FOLDER"], filename).absolute()
+
+
+def chunked_binary_read(filepath: Path, chunksize: int = 2048) -> Iterator:
+    """Read a file in binary mode in chunks."""
+    with open(filepath, "rb") as inputfile:
+        # Yield chunks until the read comes back empty, i.e. end-of-file.
+        while (data := inputfile.read(chunksize)) != b"":
+            yield data
+
+
+def sha256_digest_over_file(filepath: Path) -> str:
+    """Compute the sha256 digest over a file's contents."""
+    filehash = hashlib.sha256()
+    for chunk in chunked_binary_read(filepath):
+        filehash.update(chunk)
+
+    return filehash.hexdigest()
diff --git a/uploader/input_validation.py b/uploader/input_validation.py
index 9abe742..627c69e 100644
--- a/uploader/input_validation.py
+++ b/uploader/input_validation.py
@@ -1,14 +1,19 @@
 """Input validation utilities"""
+import re
+import json
+import base64
 from typing import Any
 
 def is_empty_string(value: str) -> bool:
     """Check whether as string is empty"""
     return (isinstance(value, str) and value.strip() == "")
 
+
 def is_empty_input(value: Any) -> bool:
     """Check whether user provided an empty value."""
     return (value is None or is_empty_string(value))
 
+
 def is_integer_input(value: Any) -> bool:
     """
     Check whether user provided a value that can be parsed into an integer.
@@ -25,3 +30,42 @@ def is_integer_input(value: Any) -> bool:
             __is_int__(value, 10) or
             __is_int__(value, 8) or
             __is_int__(value, 16))))
+
+
+def is_valid_representative_name(repr_name: str) -> bool:
+    """
+    Check whether the given representative name is valid according to our rules.
+
+    Parameters
+    ----------
+    repr_name: a string of characters.
+
+    Checks For
+    ----------
+    * The name MUST start with an alphabet [a-zA-Z]
+    * The name MUST end with an alphabet [a-zA-Z] or number [0-9]
+    * The name MUST be composed of alphabets [a-zA-Z], numbers [0-9],
+      underscores (_) and/or hyphens (-).
+
+    Returns
+    -------
+    Boolean indicating whether or not the name is valid.
+    """
+    pattern = re.compile(r"^[a-zA-Z]+[a-zA-Z0-9_-]*[a-zA-Z0-9]$")
+    return bool(pattern.match(repr_name))
+
+
+def encode_errors(errors: tuple[tuple[str, str], ...], form) -> bytes:
+    """Encode form errors into base64 string."""
+    return base64.b64encode(
+        json.dumps({
+            "errors": dict(errors),
+            "original_formdata": dict(form)
+        }).encode("utf8"))
+
+
+def decode_errors(errorstr) -> dict[str, dict]:
+    """Decode errors from base64 string"""
+    if not bool(errorstr):
+        return {"errors": {}, "original_formdata": {}}
+    return json.loads(base64.b64decode(errorstr.encode("utf8")).decode("utf8"))
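
A short usage sketch of the validation and error-encoding helpers above (the
values are illustrative; note that `decode_errors` expects a string, as it
normally receives the encoded value back from a query parameter):

```python
from uploader.input_validation import (
    encode_errors, decode_errors, is_valid_representative_name)

assert is_valid_representative_name("BXD_Pheno-1")
assert not is_valid_representative_name("1BXD")  # must start with a letter
assert not is_valid_representative_name("BXD-")  # must end with letter/number

errs = (("dataset-name", "Invalid dataset name."),)
encoded = encode_errors(errs, {"dataset-name": "1BXD"})
assert decode_errors(encoded.decode("utf8")) == {
    "errors": {"dataset-name": "Invalid dataset name."},
    "original_formdata": {"dataset-name": "1BXD"}}
```
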
+ """ + pattern = re.compile(r"^[a-zA-Z]+[a-zA-Z0-9_-]*[a-zA-Z0-9]$") + return bool(pattern.match(repr_name)) + + +def encode_errors(errors: tuple[tuple[str, str], ...], form) -> bytes: + """Encode form errors into base64 string.""" + return base64.b64encode( + json.dumps({ + "errors": dict(errors), + "original_formdata": dict(form) + }).encode("utf8")) + + +def decode_errors(errorstr) -> dict[str, dict]: + """Decode errors from base64 string""" + if not bool(errorstr): + return {"errors": {}, "original_formdata": {}} + return json.loads(base64.b64decode(errorstr.encode("utf8")).decode("utf8")) diff --git a/uploader/jobs.py b/uploader/jobs.py index 21889da..4a3fc80 100644 --- a/uploader/jobs.py +++ b/uploader/jobs.py @@ -10,7 +10,7 @@ from typing import Union, Optional from redis import Redis from flask import current_app as app -JOBS_PREFIX = "JOBS" +JOBS_PREFIX = "jobs" class JobNotFound(Exception): """Raised if we try to retrieve a non-existent job.""" diff --git a/uploader/phenotypes/models.py b/uploader/phenotypes/models.py index eb5a189..9324601 100644 --- a/uploader/phenotypes/models.py +++ b/uploader/phenotypes/models.py @@ -1,6 +1,7 @@ """Database and utility functions for phenotypes.""" from typing import Optional from functools import reduce +from datetime import datetime import MySQLdb as mdb from MySQLdb.cursors import Cursor, DictCursor @@ -78,7 +79,7 @@ def __phenotype_se__(cursor: Cursor, xref_id: str) -> dict: """Fetch standard-error values (if they exist) for a phenotype.""" _sequery = ( - "SELECT pxr.Id AS xref_id, pxr.DataId, pse.error, nst.count " + "SELECT pxr.Id AS xref_id, pxr.DataId, str.Id AS StrainId, pse.error, nst.count " "FROM Phenotype AS pheno " "INNER JOIN PublishXRef AS pxr ON pheno.Id=pxr.PhenotypeId " "INNER JOIN PublishSE AS pse ON pxr.DataId=pse.DataId " @@ -90,7 +91,12 @@ def __phenotype_se__(cursor: Cursor, "WHERE (str.SpeciesId, pxr.InbredSetId, pf.Id, pxr.Id)=(%s, %s, %s, %s)") cursor.execute(_sequery, (species_id, population_id, dataset_id, xref_id)) - return {row["xref_id"]: dict(row) for row in cursor.fetchall()} + return {(row["DataId"], row["StrainId"]): { + "xref_id": row["xref_id"], + "DataId": row["DataId"], + "error": row["error"], + "count": row["count"] + } for row in cursor.fetchall()} def __organise_by_phenotype__(pheno, row): """Organise disparate data rows into phenotype 'objects'.""" @@ -105,11 +111,10 @@ def __organise_by_phenotype__(pheno, row): "Units": row["Units"], "Pre_publication_abbreviation": row["Pre_publication_abbreviation"], "Post_publication_abbreviation": row["Post_publication_abbreviation"], + "xref_id": row["pxr.Id"], "data": { - #TOD0: organise these by DataId and StrainId **(_pheno["data"] if bool(_pheno) else {}), - row["pxr.Id"]: { - "xref_id": row["pxr.Id"], + (row["DataId"], row["StrainId"]): { "DataId": row["DataId"], "mean": row["mean"], "Locus": row["Locus"], @@ -168,7 +173,7 @@ def phenotype_by_id( species_id, population_id, dataset_id, - xref_id)).items()) + xref_id)).values()) } if bool(_pheno) and len(_pheno.keys()) > 1: raise Exception( @@ -198,3 +203,30 @@ def phenotypes_data(conn: mdb.Connection, cursor.execute(_query, (population_id, dataset_id)) debug_query(cursor) return tuple(dict(row) for row in cursor.fetchall()) + + +def save_new_dataset(cursor: Cursor, + population_id: int, + dataset_name: str, + dataset_fullname: str, + dataset_shortname: str) -> dict: + """Create a new phenotype dataset.""" + params = { + "population_id": population_id, + "dataset_name": dataset_name, + "dataset_fullname": 
dataset_fullname, + "dataset_shortname": dataset_shortname, + "created": datetime.now().date().isoformat(), + "public": 2, + "confidentiality": 0, + "users": None + } + cursor.execute( + "INSERT INTO PublishFreeze(Name, FullName, ShortName, CreateTime, " + "public, InbredSetId, confidentiality, AuthorisedUsers) " + "VALUES(%(dataset_name)s, %(dataset_fullname)s, %(dataset_shortname)s, " + "%(created)s, %(public)s, %(population_id)s, %(confidentiality)s, " + "%(users)s)", + params) + debug_query(cursor) + return {**params, "Id": cursor.lastrowid} diff --git a/uploader/phenotypes/views.py b/uploader/phenotypes/views.py index 7fc3b08..02e8078 100644 --- a/uploader/phenotypes/views.py +++ b/uploader/phenotypes/views.py @@ -1,6 +1,13 @@ """Views handling ('classical') phenotypes.""" +import sys +import uuid +import json +from pathlib import Path from functools import wraps +from redis import Redis +from requests.models import Response +from MySQLdb.cursors import DictCursor from flask import (flash, request, url_for, @@ -9,6 +16,12 @@ from flask import (flash, render_template, current_app as app) +# from r_qtl import r_qtl2 as rqtl2 +from r_qtl import r_qtl2_qc as rqc +from r_qtl import exceptions as rqe + +from uploader import jobs +from uploader.files import save_file#, fullpath from uploader.oauth2.client import oauth2_post from uploader.authorisation import require_login from uploader.db_utils import database_connection @@ -18,10 +31,14 @@ from uploader.request_checks import with_species, with_population from uploader.datautils import safe_int, order_by_family, enumerate_sequence from uploader.population.models import (populations_by_species, population_by_species_and_id) +from uploader.input_validation import (encode_errors, + decode_errors, + is_valid_representative_name) from .models import (dataset_by_id, phenotype_by_id, phenotypes_count, + save_new_dataset, dataset_phenotypes, datasets_by_population) @@ -170,6 +187,8 @@ def view_dataset(# pylint: disable=[unused-argument] offset=start_at, limit=count), start=start_at+1), + start_from=start_at, + count=count, activelink="view-dataset") @@ -190,6 +209,31 @@ def view_phenotype(# pylint: disable=[unused-argument] **kwargs ): """View an individual phenotype from the dataset.""" + def __render__(privileges): + return render_template( + "phenotypes/view-phenotype.html", + species=species, + population=population, + dataset=dataset, + phenotype=phenotype_by_id(conn, + species["SpeciesId"], + population["Id"], + dataset["Id"], + xref_id), + privileges=(privileges + ### For demo! Do not commit this part + + ("group:resource:edit-resource", + "group:resource:delete-resource",) + ### END: For demo! 
Do not commit this part + ), + activelink="view-phenotype") + + def __fail__(error): + if isinstance(error, Response) and error.json() == "No linked resource!": + return __render__(tuple()) + return make_either_error_handler( + "There was an error fetching the roles and privileges.")(error) + with database_connection(app.config["SQL_URI"]) as conn: return oauth2_post( "/auth/resource/phenotypes/individual/linked-resource", @@ -200,19 +244,125 @@ def view_phenotype(# pylint: disable=[unused-argument] "xref_id": xref_id } ).then( - lambda resource: render_template( - "phenotypes/view-phenotype.html", - species=species, - population=population, - dataset=dataset, - phenotype=phenotype_by_id(conn, - species["SpeciesId"], - population["Id"], - dataset["Id"], - xref_id), - resource=resource, - activelink="view-phenotype") - ).either( - make_either_error_handler( - "There was an error fetching the roles and privileges."), - lambda resp: resp) + lambda resource: tuple( + privilege["privilege_id"] for role in resource["roles"] + for privilege in role["privileges"]) + ).then(__render__).either(__fail__, lambda resp: resp) + + +@phenotypesbp.route( + "<int:species_id>/populations/<int:population_id>/phenotypes/datasets/create", + methods=["GET", "POST"]) +@require_login +@with_population( + species_redirect_uri="species.populations.phenotypes.index", + redirect_uri="species.populations.phenotypes.select_population") +def create_dataset(species: dict, population: dict, **kwargs):# pylint: disable=[unused-argument] + """Create a new phenotype dataset.""" + with (database_connection(app.config["SQL_URI"]) as conn, + conn.cursor(cursorclass=DictCursor) as cursor): + if request.method == "GET": + return render_template("phenotypes/create-dataset.html", + activelink="create-dataset", + species=species, + population=population, + **decode_errors( + request.args.get("error_values", ""))) + + form = request.form + _errors: tuple[tuple[str, str], ...] 
+
+
+@phenotypesbp.route(
+    "<int:species_id>/populations/<int:population_id>/phenotypes/datasets/create",
+    methods=["GET", "POST"])
+@require_login
+@with_population(
+    species_redirect_uri="species.populations.phenotypes.index",
+    redirect_uri="species.populations.phenotypes.select_population")
+def create_dataset(species: dict, population: dict, **kwargs):# pylint: disable=[unused-argument]
+    """Create a new phenotype dataset."""
+    with (database_connection(app.config["SQL_URI"]) as conn,
+          conn.cursor(cursorclass=DictCursor) as cursor):
+        if request.method == "GET":
+            return render_template("phenotypes/create-dataset.html",
+                                   activelink="create-dataset",
+                                   species=species,
+                                   population=population,
+                                   **decode_errors(
+                                       request.args.get("error_values", "")))
+
+        form = request.form
+        _errors: tuple[tuple[str, str], ...] = tuple()
+        if not is_valid_representative_name(
+                (form.get("dataset-name") or "").strip()):
+            _errors = _errors + (("dataset-name", "Invalid dataset name."),)
+
+        if not bool((form.get("dataset-fullname") or "").strip()):
+            _errors = _errors + (("dataset-fullname",
+                                  "You must provide a value for 'Full Name'."),)
+
+        if _errors:
+            return redirect(url_for(
+                "species.populations.phenotypes.create_dataset",
+                species_id=species["SpeciesId"],
+                population_id=population["Id"],
+                error_values=encode_errors(_errors, form)))
+
+        dataset_shortname = (
+            form["dataset-shortname"] or form["dataset-name"]).strip()
+        _pheno_dataset = save_new_dataset(
+            cursor,
+            population["Id"],
+            form["dataset-name"].strip(),
+            form["dataset-fullname"].strip(),
+            dataset_shortname)
+        return redirect(url_for("species.populations.phenotypes.list_datasets",
+                                species_id=species["SpeciesId"],
+                                population_id=population["Id"]))
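`create_dataset` validates the name with `is_valid_representative_name` from `uploader.input_validation`. Its implementation is not shown in this diff, but the `valid_population_name` helper it replaces (deleted in the `uploader/population/views.py` hunk below) used the regex `^[a-zA-Z]+[a-zA-Z0-9_-]*[a-zA-Z0-9]$`. A sketch assuming the new helper keeps the same rule:

```python
import re

def is_valid_representative_name(name: str) -> bool:
    """Sketch: accept names that start with a letter, end with a letter or
    digit, and otherwise contain only letters, digits, underscores and
    hyphens.  The real implementation may differ."""
    return bool(re.match(r"^[a-zA-Z]+[a-zA-Z0-9_-]*[a-zA-Z0-9]$", name))
```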
+
+
+@phenotypesbp.route(
+    "<int:species_id>/populations/<int:population_id>/phenotypes/datasets"
+    "/<int:dataset_id>/add-phenotypes",
+    methods=["GET", "POST"])
+@require_login
+@with_dataset(
+    species_redirect_uri="species.populations.phenotypes.index",
+    population_redirect_uri="species.populations.phenotypes.select_population",
+    redirect_uri="species.populations.phenotypes.list_datasets")
+def add_phenotypes(species: dict, population: dict, dataset: dict, **kwargs):# pylint: disable=[unused-argument, too-many-locals]
+    """Add one or more phenotypes to the dataset."""
+    add_phenos_uri = redirect(url_for(
+        "species.populations.phenotypes.add_phenotypes",
+        species_id=species["SpeciesId"],
+        population_id=population["Id"],
+        dataset_id=dataset["Id"]))
+    _redisuri = app.config["REDIS_URL"]
+    _sqluri = app.config["SQL_URI"]
+    with (Redis.from_url(_redisuri, decode_responses=True) as rconn,
+          # database_connection(_sqluri) as conn,
+          # conn.cursor(cursorclass=DictCursor) as cursor
+          ):
+        if request.method == "GET":
+            return render_template("phenotypes/add-phenotypes.html",
+                                   species=species,
+                                   population=population,
+                                   dataset=dataset,
+                                   activelink="add-phenotypes")
+
+        try:
+            ## Handle huge files here...
+            phenobundle = save_file(request.files["phenotypes-bundle"],
+                                    Path(app.config["UPLOAD_FOLDER"]))
+            rqc.validate_bundle(phenobundle)
+        except AssertionError as _aerr:
+            app.logger.debug("File upload error!", exc_info=True)
+            flash("Expected a zipped bundle of files with phenotypes' "
+                  "information.",
+                  "alert-danger")
+            return add_phenos_uri
+        except rqe.RQTLError as rqtlerr:
+            app.logger.debug("Bundle validation error!", exc_info=True)
+            flash("R/qtl2 Error: " + " ".join(rqtlerr.args), "alert-danger")
+            return add_phenos_uri
+
+        _jobid = uuid.uuid4()
+        _namespace = jobs.jobsnamespace()
+        _ttl_seconds = app.config["JOBS_TTL_SECONDS"]
+        _job = jobs.initialise_job(
+            rconn,
+            _namespace,
+            str(_jobid),
+            [sys.executable, "-m", "scripts.rqtl2.phenotypes_qc", _sqluri,
+             _redisuri, _namespace, str(_jobid), str(species["SpeciesId"]),
+             str(population["Id"]), str(dataset["Id"]), "--redisexpiry",
+             str(_ttl_seconds)], "phenotype_qc", _ttl_seconds,
+            {"job-metadata": json.dumps({
+                "speciesid": species["SpeciesId"],
+                "populationid": population["Id"],
+                "datasetid": dataset["Id"],
+                "bundle": str(phenobundle.absolute())})})
+        # jobs.launch_job(
+        #     _job,
+        #     redisuri,
+        #     f"{app.config['UPLOAD_FOLDER']}/job_errors")
+
+        raise NotImplementedError("Please implement this...")
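`add_phenotypes` initialises a Redis-backed QC job (cf. `docs/dev/background_jobs.md`) running `scripts.rqtl2.phenotypes_qc` in a subprocess, but the launch is still commented out and the handler ends in `NotImplementedError`. A hypothetical completion, assuming `jobs.launch_job` takes the job, the Redis URI and an error-log directory as the commented-out call suggests; the status endpoint named here does not exist yet:

```python
# Hypothetical, not part of the patch: launch the initialised job and
# redirect the user somewhere they can poll for progress.
_job = jobs.launch_job(
    _job,
    _redisuri,
    f"{app.config['UPLOAD_FOLDER']}/job_errors")
return redirect(url_for(
    "species.populations.phenotypes.job_status",  # hypothetical endpoint
    species_id=species["SpeciesId"],
    population_id=population["Id"],
    dataset_id=dataset["Id"],
    job_id=_jobid))
```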
diff --git a/uploader/population/views.py b/uploader/population/views.py
index 3638453..36201ba 100644
--- a/uploader/population/views.py
+++ b/uploader/population/views.py
@@ -1,5 +1,4 @@
 """Views dealing with populations/inbredsets"""
-import re
 import json
 import base64
 
@@ -21,6 +20,7 @@ from uploader.datautils import enumerate_sequence
 from uploader.phenotypes.views import phenotypesbp
 from uploader.expression_data.views import exprdatabp
 from uploader.monadic_requests import make_either_error_handler
+from uploader.input_validation import is_valid_representative_name
 from uploader.species.models import (all_species,
                                      species_by_id,
                                      order_species_by_family)
@@ -73,29 +73,6 @@ def list_species_populations(species_id: int):
         activelink="list-populations")
 
 
-def valid_population_name(population_name: str) -> bool:
-    """
-    Check whether the given name is a valid population name.
-
-    Parameters
-    ----------
-    population_name: a string of characters.
-
-    Checks For
-    ----------
-    * The name MUST start with an alphabet [a-zA-Z]
-    * The name MUST end with an alphabet [a-zA-Z] or number [0-9]
-    * The name MUST be composed of alphabets [a-zA-Z], numbers [0-9],
-      underscores (_) and/or hyphens (-).
-
-    Returns
-    -------
-    Boolean indicating whether or not the name is valid.
-    """
-    pattern = re.compile(r"^[a-zA-Z]+[a-zA-Z0-9_-]*[a-zA-Z0-9]$")
-    return bool(pattern.match(population_name))
-
-
 @popbp.route("/<int:species_id>/populations/create", methods=["GET", "POST"])
 @require_login
 def create_population(species_id: int):
@@ -139,7 +116,7 @@ def create_population(species_id: int):
         errors = errors + (("population_name",
                             "You must provide a name for the population!"),)
 
-    if not valid_population_name(population_name):
+    if not is_valid_representative_name(population_name):
         errors = errors + ((
             "population_name",
             "The population name can only contain letters, numbers, "
diff --git a/uploader/samples/__init__.py b/uploader/samples/__init__.py
new file mode 100644
index 0000000..1bd6d2d
--- /dev/null
+++ b/uploader/samples/__init__.py
@@ -0,0 +1 @@
+"""Samples package. Handle samples uploads and editing."""
diff --git a/uploader/samples/views.py b/uploader/samples/views.py
index 9ba1df8..ed79101 100644
--- a/uploader/samples/views.py
+++ b/uploader/samples/views.py
@@ -3,11 +3,8 @@ import os
 import sys
 import uuid
 from pathlib import Path
-from typing import Iterator
 
-import MySQLdb as mdb
 from redis import Redis
-from MySQLdb.cursors import DictCursor
 from flask import (flash,
                    request,
                    url_for,
@@ -19,19 +16,16 @@ from uploader import jobs
 from uploader.files import save_file
 from uploader.ui import make_template_renderer
 from uploader.authorisation import require_login
+from uploader.request_checks import with_population
 from uploader.input_validation import is_integer_input
-from uploader.datautils import order_by_family, enumerate_sequence
-from uploader.db_utils import (
-    with_db_connection,
-    database_connection,
-    with_redis_connection)
+from uploader.datautils import safe_int, order_by_family, enumerate_sequence
+from uploader.population.models import population_by_id, populations_by_species
+from uploader.db_utils import (with_db_connection,
+                               database_connection,
+                               with_redis_connection)
 from uploader.species.models import (all_species,
                                      species_by_id,
                                      order_species_by_family)
-from uploader.population.models import(save_population,
-                                       population_by_id,
-                                       populations_by_species,
-                                       population_by_species_and_id)
 
 from .models import samples_by_species_and_population
@@ -110,9 +104,7 @@ def list_samples(species_id: int, population_id: int):
     all_samples = enumerate_sequence(samples_by_species_and_population(
         conn, species_id, population_id))
     total_samples = len(all_samples)
-    offset = int(request.args.get("from") or 0)
-    if offset < 0:
-        offset = 0
+    offset = max(safe_int(request.args.get("from") or 0), 0)
     count = int(request.args.get("count") or 20)
     return render_template("samples/list-samples.html",
                            species=species,
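`list_samples` now clamps the `from` query parameter with `safe_int`, imported from `uploader.datautils`. Its implementation is not part of this diff; a plausible sketch of what such a helper does:

```python
def safe_int(value, default: int = 0) -> int:
    """Sketch: parse `value` as an int, falling back to `default` on
    junk input instead of raising like a bare int() call would."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default
```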
@@ -233,53 +225,41 @@ def upload_samples(species_id: int, population_id: int):#pylint: disable=[too-ma
     "upload-samples/status/<uuid:job_id>",
     methods=["GET"])
 @require_login
-def upload_status(species_id: int, population_id: int, job_id: uuid.UUID):
+@with_population(species_redirect_uri="species.populations.samples.index",
+                 redirect_uri="species.populations.samples.select_population")
+def upload_status(species: dict, population: dict, job_id: uuid.UUID, **kwargs):# pylint: disable=[unused-argument]
     """Check on the status of a samples upload job."""
-    with database_connection(app.config["SQL_URI"]) as conn:
-        species = species_by_id(conn, species_id)
-        if not bool(species):
-            flash("You must provide a valid species.", "alert-danger")
-            return redirect(url_for("species.populations.samples.index"))
+    job = with_redis_connection(lambda rconn: jobs.job(
+        rconn, jobs.jobsnamespace(), job_id))
+    if job:
+        status = job["status"]
+        if status == "success":
+            return render_template("samples/upload-success.html",
+                                   job=job,
+                                   species=species,
+                                   population=population,)
 
-        population = population_by_species_and_id(
-            conn, species_id, population_id)
-        if not bool(population):
-            flash("You must provide a valid population.", "alert-danger")
+        if status == "error":
             return redirect(url_for(
-                "species.populations.samples.select_population",
-                species_id=species_id))
+                "species.populations.samples.upload_failure", job_id=job_id))
 
-        job = with_redis_connection(lambda rconn: jobs.job(
-            rconn, jobs.jobsnamespace(), job_id))
-        if job:
-            status = job["status"]
-            if status == "success":
-                return render_template("samples/upload-success.html",
-                                       job=job,
-                                       species=species,
-                                       population=population,)
-
-            if status == "error":
+        error_filename = Path(jobs.error_filename(
+            job_id, f"{app.config['UPLOAD_FOLDER']}/job_errors"))
+        if error_filename.exists():
+            stat = os.stat(error_filename)
+            if stat.st_size > 0:
                 return redirect(url_for(
-                    "species.populations.samples.upload_failure", job_id=job_id))
+                    "samples.upload_failure", job_id=job_id))
 
-            error_filename = Path(jobs.error_filename(
-                job_id, f"{app.config['UPLOAD_FOLDER']}/job_errors"))
-            if error_filename.exists():
-                stat = os.stat(error_filename)
-                if stat.st_size > 0:
-                    return redirect(url_for(
-                        "samples.upload_failure", job_id=job_id))
-
-            return render_template("samples/upload-progress.html",
-                                   species=species,
-                                   population=population,
-                                   job=job) # maybe also handle this?
-
-    return render_template("no_such_job.html",
-                           job_id=job_id,
+        return render_template("samples/upload-progress.html",
                                species=species,
-                           population=population), 400
+                               population=population,
+                               job=job) # maybe also handle this?
+
+    return render_template("no_such_job.html",
+                           job_id=job_id,
+                           species=species,
+                           population=population), 400
 
 
 @samplesbp.route("/upload/failure/<uuid:job_id>", methods=["GET"])
 @require_login
diff --git a/uploader/static/css/styles.css b/uploader/static/css/styles.css
index 574f53e..f482c1b 100644
--- a/uploader/static/css/styles.css
+++ b/uploader/static/css/styles.css
@@ -125,3 +125,37 @@ input[type="submit"], .btn {
     border-color: #AAAAAA;
     background-color: #EFEFEF;
 }
+
+.danger {
+    color: #A94442;
+    border-color: #DCA7A7;
+    background-color: #F2DEDE;
+}
+
+.heading {
+    border-bottom: solid #EEBB88;
+}
+
+.subheading {
+    padding: 1em 0 0.1em 0.5em;
+    border-bottom: solid #88BBEE;
+}
+
+form {
+    margin-top: 0.3em;
+    background: #E5E5FF;
+    padding: 0.5em;
+    border-radius:0.5em;
+}
+
+form .form-control {
+    background-color: #EAEAFF;
+}
+
+.sidebar-content .card .card-title {
+    font-size: 1.5em;
+}
+
+.sidebar-content .card-text table tbody td:nth-child(1) {
+    font-weight: bolder;
+}
diff --git a/uploader/templates/macro-table-pagination.html b/uploader/templates/macro-table-pagination.html
new file mode 100644
index 0000000..292c531
--- /dev/null
+++ b/uploader/templates/macro-table-pagination.html
@@ -0,0 +1,24 @@
+{%macro table_pagination(start_at, page_count, total_count, base_uri, name)%}
+  <div class="row">
+    <div class="col-md-2" style="text-align: start;">
+      {%if start_at > 0%}
+      <a href="{{base_uri
+               + '?start_at='+((start_at-page_count)|string)
+               + '&count='+(page_count|string)}}">
+        <span class="glyphicon glyphicon-backward"></span>
+        Previous
+      </a>
+      {%endif%}
+    </div>
+    <div class="col-md-8" style="text-align: center;">
+      Displaying {{name}} {{start_at+1}} to {{start_at+page_count if start_at+page_count < total_count else total_count}} of {{total_count}}</div>
+    <div class="col-md-2" style="text-align: end;">
+      {%if start_at + page_count < total_count%}
+      <a href="{{base_uri
+               + '?start_at='+((start_at+page_count)|string)
+               + '&count='+(page_count|string)}}">
+        Next<span class="glyphicon glyphicon-forward"></span></a>
+      {%endif%}
+    </div>
+  </div>
+{%endmacro%}
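The macro above computes the upper bound of the displayed range inline, clamping it to the total. The same arithmetic in plain Python, purely illustrative:

```python
def display_range(start_at: int, page_count: int, total_count: int) -> tuple[int, int]:
    """Sketch: the 1-based item range the pagination macro reports."""
    lower = start_at + 1
    upper = min(start_at + page_count, total_count)
    return (lower, upper)

assert display_range(0, 20, 50) == (1, 20)
assert display_range(40, 20, 50) == (41, 50)  # clamped to the total
```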
diff --git a/uploader/templates/phenotypes/add-phenotypes.html b/uploader/templates/phenotypes/add-phenotypes.html
new file mode 100644
index 0000000..196bc69
--- /dev/null
+++ b/uploader/templates/phenotypes/add-phenotypes.html
@@ -0,0 +1,231 @@
+{%extends "phenotypes/base.html"%}
+{%from "flash_messages.html" import flash_all_messages%}
+{%from "macro-table-pagination.html" import table_pagination%}
+{%from "phenotypes/macro-display-pheno-dataset-card.html" import display_pheno_dataset_card%}
+
+{%block title%}Phenotypes{%endblock%}
+
+{%block pagetitle%}Phenotypes{%endblock%}
+
+{%block lvl4_breadcrumbs%}
+<li {%if activelink=="add-phenotypes"%}
+    class="breadcrumb-item active"
+    {%else%}
+    class="breadcrumb-item"
+    {%endif%}>
+  <a href="{{url_for('species.populations.phenotypes.add_phenotypes',
+           species_id=species.SpeciesId,
+           population_id=population.Id,
+           dataset_id=dataset.Id)}}">Add Phenotypes</a>
+</li>
+{%endblock%}
+
+{%block contents%}
+{{flash_all_messages()}}
+
+<div class="row">
+  <form id="frm-add-phenotypes"
+        method="POST"
+        enctype="multipart/form-data"
+        action="{{url_for('species.populations.phenotypes.add_phenotypes',
+                species_id=species.SpeciesId,
+                population_id=population.Id,
+                dataset_id=dataset.Id)}}">
+    <legend>Add New Phenotypes</legend>
+
+    <div class="form-text help-block">
+      <p>Select the zip file bundle containing information on the phenotypes you
+        wish to upload, then click the "Upload Phenotypes" button below to
+        upload the data.</p>
+      <p>See the <a href="#section-file-formats">File Formats</a> section below
+        to get an understanding of what is expected of the bundle files you
+        upload.</p>
+      <p><strong>This will not update any existing phenotypes!</strong></p>
+    </div>
+
+    <div class="form-group">
+      <label for="finput-phenotypes-bundle" class="form-label">
+        Phenotypes Bundle</label>
+      <input type="file"
+             id="finput-phenotypes-bundle"
+             name="phenotypes-bundle"
+             accept="application/zip, .zip"
+             required="required"
+             class="form-control" />
+    </div>
+
+    <div class="form-group">
+      <input type="submit"
+             value="upload phenotypes"
+             class="btn btn-primary" />
+    </div>
+  </form>
+</div>
+
+<div class="row">
+  <h2 class="heading" id="section-file-formats">File Formats</h2>
+  <p>We accept an extended form of the
+    <a href="https://kbroman.org/qtl2/assets/vignettes/input_files.html#format-of-the-data-files"
+       title="R/qtl2 software input file format documentation">
+      input files' format used with the R/qtl2 software</a> as a single ZIP
+    file.</p>
+  <p>The files that are used for this feature are:
+    <ul>
+      <li>the <em>control</em> file</li>
+      <li><em>pheno</em> file(s)</li>
+      <li><em>phenocovar</em> file(s)</li>
+      <li><em>phenose</em> file(s)</li>
+    </ul>
+  </p>
+  <p>Other files within the bundle will be ignored for this feature.</p>
+  <p>The following sections detail the expectations for each of the
+    different file types within the uploaded ZIP file bundle for phenotypes:</p>
+
+  <h3 class="subheading">Control File</h3>
+  <p>There <strong>MUST be <em>one, and only one</em></strong> file that acts
+    as the control file. This file can be:
+    <ul>
+      <li>a <em>JSON</em> file, or</li>
+      <li>a <em>YAML</em> file.</li>
+    </ul>
+  </p>
+
+  <p>The control file is useful for defining things about the bundle such as:</p>
+  <ul>
+    <li>The field separator value (default: <code>sep: ','</code>). There can
+      only ever be one field separator and it <strong>MUST</strong> be the same
+      one for <strong>ALL</strong> files in the bundle.</li>
+    <li>The comment character (default: <code>comment.char: '#'</code>). Any
+      line that starts with this character will be considered a comment line and
+      be ignored in its entirety.</li>
+    <li>Code for missing values (default: <code>na.strings: 'NA'</code>). You
+      can specify more than one code to indicate missing values, e.g.
+      <code>{…, "na.strings": ["NA", "N/A", "-"], …}</code></li>
+  </ul>
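The three control-file settings described above drive how every data file in the bundle is read. A sketch of a reader honouring them, assuming the control data has already been loaded (e.g. by `r_qtl2.control_data`); the helper name and defaults-handling are illustrative, not the project's actual code:

```python
import csv

def read_data_rows(filepath, control: dict):
    """Sketch: yield rows from a bundle CSV, honouring the control file's
    separator, comment character and missing-value codes."""
    sep = control.get("sep", ",")
    comment_char = control.get("comment.char", "#")
    na_strings = control.get("na.strings", "NA")
    if isinstance(na_strings, str):  # a single code may be given as a string
        na_strings = [na_strings]
    with open(filepath, "r", encoding="utf8") as infile:
        rows = csv.reader(
            # skip comment lines entirely, as the documentation above states
            (line for line in infile if not line.startswith(comment_char)),
            delimiter=sep)
        for row in rows:
            # replace any missing-value code with None
            yield [None if field in na_strings else field for field in row]
```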
+
+  <h3 class="subheading"><em>pheno</em> File(s)</h3>
+  <p>These files are the main data files. You must have at least one of these
+    files in your bundle for it to be valid for this step.</p>
+  <p>The data is a matrix of <em>individuals × phenotypes</em> by default, as
+    below:<br />
+    <code>
+      id,10001,10002,10003,10004,…<br />
+      BXD1,61.400002,54.099998,483,49.799999,…<br />
+      BXD2,49,50.099998,403,45.5,…<br />
+      BXD5,62.5,53.299999,501,62.900002,…<br />
+      BXD6,53.099998,55.099998,403,NA,…<br />
+      ⋮<br /></code>
+  </p>
+  <p>If the <code>pheno_transposed</code> value is set to <code>True</code>,
+    then the data will be a <em>phenotypes × individuals</em> matrix as in the
+    example below:<br />
+    <code>
+      id,BXD1,BXD2,BXD5,BXD6,…<br />
+      10001,61.400002,49,62.5,53.099998,…<br />
+      10002,54.099998,50.099998,53.299999,55.099998,…<br />
+      10003,483,403,501,403,…<br />
+      10004,49.799999,45.5,62.900002,NA,…<br />
+      ⋮
+    </code>
+  </p>
+
+  <h3 class="subheading"><em>phenocovar</em> File(s)</h3>
+  <p>At least one phenotypes metadata file, with metadata values such as
+    descriptions, PubMed Identifiers, publication titles (if present), etc.</p>
+  <p>The data in this/these file(s) is a matrix of
+    <em>phenotypes × phenotypes-covariates</em>. The first column is always the
+    phenotype names/identifiers — same as in the R/qtl2 format.</p>
+  <p><em>phenocovar</em> files <strong>should never be transposed</strong>!</p>
+  <p>This file <strong>MUST</strong> be present in the bundle, and have data, for
+    the bundle to be considered valid by our system for this step.<br />
+    In addition, the following fields <strong>must be present</strong>, and
+    have values, in the file before the file is considered valid:
+    <ul>
+      <li><em>description</em>: A description for each phenotype. Useful
+        for users to know what the phenotype is about.</li>
+      <li><em>units</em>: The units of measurement for the phenotype,
+        e.g. milligrams for brain weight, centimetres/millimetres for
+        tail-length, etc.</li>
+    </ul></p>
+
+  <p>The following <em>optional</em> fields can also be provided:
+    <ul>
+      <li><em>pubmedid</em>: A PubMed Identifier for the publication where
+        the phenotype is published. If this field is not provided, the system
+        will assume your phenotype is not published.</li>
+    </ul>
+  </p>
+  <p>These files will be marked up in the control file with the
+    <code>phenocovar</code> key, as in the examples below:
+    <ol>
+      <li>JSON: single file<br />
+        <code>{<br />
+          ⋮,<br />
+          "phenocovar": "your_covariates_file.csv",<br />
+          ⋮<br />
+          }
+        </code>
+      </li>
+      <li>JSON: multiple files<br />
+        <code>{<br />
+          ⋮,<br />
+          "phenocovar": [<br />
+          "covariates_file_01.csv",<br />
+          "covariates_file_02.csv",<br />
+          ⋮<br />
+          ],<br />
+          ⋮<br />
+          }
+        </code>
+      </li>
+      <li>YAML: single file<br />
+        <code>
+          ⋮<br />
+          phenocovar: your_covariates_file.csv<br />
+          ⋮
+        </code>
+      </li>
+      <li>YAML: multiple files<br />
+        <code>
+          ⋮<br />
+          phenocovar:<br />
+          - covariates_file_01.csv<br />
+          - covariates_file_02.csv<br />
+          - covariates_file_03.csv<br />
+          …<br />
+          ⋮
+        </code>
+      </li>
+    </ol>
+  </p>
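The required and optional <em>phenocovar</em> fields above translate into a simple structural check. A hypothetical sketch of that check; the real validation happens in `r_qtl/r_qtl2_qc.py` via `rqc.validate_bundle` and may well be organised differently:

```python
REQUIRED_METADATA_FIELDS = ("description", "units")  # per the section above

def phenocovar_errors(headers: tuple, rows) -> tuple:
    """Sketch: report phenocovar rows that are missing required metadata.
    `rows` is assumed to be an iterable of dicts keyed by the headers."""
    missing = tuple(field for field in REQUIRED_METADATA_FIELDS
                    if field not in headers)
    if missing:
        return (f"missing required column(s): {', '.join(missing)}",)
    errors = tuple()
    for row in rows:
        errors = errors + tuple(
            f"phenotype '{row[headers[0]]}': empty '{field}' value"
            for field in REQUIRED_METADATA_FIELDS if not row.get(field))
    return errors
```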
+
+  <h3 class="subheading"><em>phenose</em> and <em>phenonum</em> File(s)</h3>
+  <p>These are extensions to the R/qtl2 standard, i.e. these types of files
+    are not supported by the original R/qtl2 file format.</p>
+  <p>We use these files to upload the standard errors (<em>phenose</em>) when
+    the data file (<em>pheno</em>) is average data. In that case, the
+    <em>phenonum</em> file(s) contains the number of individuals that were
+    involved when computing the averages.</p>
+  <p>Both types of files are matrices of <em>individuals × phenotypes</em> by
+    default. Like the related <em>pheno</em> files, if
+    <code>pheno_transposed: True</code>, then the file will be a matrix of
+    <em>phenotypes × individuals</em>.</p>
+</div>
+
+<div class="row text-warning">
+  <h3 class="subheading">Notes for Devs (well… Fred, really.)</h3>
+  <p>Use the following resources for automated retrieval of certain data:</p>
+  <ul>
+    <li><a href="https://www.ncbi.nlm.nih.gov/pmc/tools/developers/"
+           title="NCBI APIs: Retrieve articles' metadata etc.">
+        NCBI APIs</a></li>
+  </ul>
+</div>
+
+{%endblock%}
+
+{%block sidebarcontents%}
+{{display_pheno_dataset_card(species, population, dataset)}}
+{%endblock%}
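The dev note above suggests automating metadata retrieval, e.g. fetching a publication's title from a <em>pubmedid</em>. A hedged sketch using NCBI's public E-utilities `esummary` endpoint (the endpoint and `retmode=json` parameter are documented by NCBI; the helper itself is made up and untied to this codebase):

```python
import json
from urllib.request import urlopen

def pubmed_summary(pubmed_id: int) -> dict:
    """Sketch: fetch a publication's summary record from NCBI E-utilities."""
    uri = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
           f"?db=pubmed&id={pubmed_id}&retmode=json")
    with urlopen(uri) as response:
        payload = json.loads(response.read().decode("utf8"))
    # esummary keys each record by the id that was requested
    return payload["result"][str(pubmed_id)]
```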
diff --git a/uploader/templates/phenotypes/create-dataset.html b/uploader/templates/phenotypes/create-dataset.html
new file mode 100644
index 0000000..93de92f
--- /dev/null
+++ b/uploader/templates/phenotypes/create-dataset.html
@@ -0,0 +1,106 @@
+{%extends "phenotypes/base.html"%}
+{%from "flash_messages.html" import flash_all_messages%}
+{%from "macro-table-pagination.html" import table_pagination%}
+{%from "populations/macro-display-population-card.html" import display_population_card%}
+
+{%block title%}Phenotypes{%endblock%}
+
+{%block pagetitle%}Phenotypes{%endblock%}
+
+{%block lvl4_breadcrumbs%}
+<li {%if activelink=="create-dataset"%}
+    class="breadcrumb-item active"
+    {%else%}
+    class="breadcrumb-item"
+    {%endif%}>
+  <a href="{{url_for('species.populations.phenotypes.create_dataset',
+           species_id=species.SpeciesId,
+           population_id=population.Id)}}">Create Dataset</a>
+</li>
+{%endblock%}
+
+{%block contents%}
+{{flash_all_messages()}}
+
+<div class="row">
+  <p>Create a new phenotype dataset.</p>
+</div>
+
+<div class="row">
+  <form id="frm-create-pheno-dataset"
+        action="{{url_for('species.populations.phenotypes.create_dataset',
+                species_id=species.SpeciesId,
+                population_id=population.Id)}}"
+        method="POST">
+
+    <div class="form-group">
+      <label class="form-label" for="txt-dataset-name">Name</label>
+      {%if errors["dataset-name"] is defined%}
+      <small class="form-text text-muted danger">
+        <p>{{errors["dataset-name"]}}</p></small>
+      {%endif%}
+      <input type="text"
+             name="dataset-name"
+             id="txt-dataset-name"
+             value="{{original_formdata.get('dataset-name') or (population.InbredSetCode + 'Publish')}}"
+             {%if errors["dataset-name"] is defined%}
+             class="form-control danger"
+             {%else%}
+             class="form-control"
+             {%endif%}
+             required="required" />
+      <small class="form-text text-muted">
+        <p>A short representative name for the dataset.</p>
+        <p>Recommended: Use the population code and append "Publish" at the end.
+          <br />This field will only accept names composed of
+          letters ('A-Za-z'), numbers (0-9), hyphens and underscores.</p>
+      </small>
+    </div>
+
+    <div class="form-group">
+      <label class="form-label" for="txt-dataset-fullname">Full Name</label>
+      {%if errors["dataset-fullname"] is defined%}
+      <small class="form-text text-muted danger">
+        <p>{{errors["dataset-fullname"]}}</p></small>
+      {%endif%}
+      <input id="txt-dataset-fullname"
+             name="dataset-fullname"
+             type="text"
+             value="{{original_formdata.get('dataset-fullname', '')}}"
+             {%if errors["dataset-fullname"] is defined%}
+             class="form-control danger"
+             {%else%}
+             class="form-control"
+             {%endif%}
+             required="required" />
+      <small class="form-text text-muted">
+        <p>A longer, descriptive name for the dataset — useful for humans.
+        </p></small>
+    </div>
+
+    <div class="form-group">
+      <label class="form-label" for="txt-dataset-shortname">Short Name</label>
+      <input id="txt-dataset-shortname"
+             name="dataset-shortname"
+             type="text"
+             class="form-control"
+             value="{{original_formdata.get('dataset-shortname') or (population.InbredSetCode + ' Publish')}}" />
+      <small class="form-text text-muted">
+        <p>An optional, short name for the dataset. <br />
+          If this is not provided, it will default to the value provided for the
+          <strong>Name</strong> field above.</p></small>
+    </div>
+
+    <div class="form-group">
+      <input type="submit"
+             class="btn btn-primary"
+             value="create phenotype dataset" />
+    </div>
+
+  </form>
+</div>
+{%endblock%}
+
+{%block sidebarcontents%}
+{{display_population_card(species, population)}}
+{%endblock%}
diff --git a/uploader/templates/phenotypes/list-datasets.html b/uploader/templates/phenotypes/list-datasets.html
index 360fd2c..2eaf43a 100644
--- a/uploader/templates/phenotypes/list-datasets.html
+++ b/uploader/templates/phenotypes/list-datasets.html
@@ -51,8 +51,10 @@
       <p class="text-warning">
         <span class="glyphicon glyphicon-exclamation-sign"></span>
         There is no dataset for this population!</p>
-      <p><a href="#"
-            class="not-implemented btn btn-primary"
+      <p><a href="{{url_for('species.populations.phenotypes.create_dataset',
+                  species_id=species.SpeciesId,
+                  population_id=population.Id)}}"
+            class="btn btn-primary"
             title="Create a new phenotype dataset.">create dataset</a></p>
       {%endif%}
     </div>
diff --git a/uploader/templates/phenotypes/macro-display-pheno-dataset-card.html b/uploader/templates/phenotypes/macro-display-pheno-dataset-card.html
new file mode 100644
index 0000000..11b108b
--- /dev/null
+++ b/uploader/templates/phenotypes/macro-display-pheno-dataset-card.html
@@ -0,0 +1,31 @@
+{%from "populations/macro-display-population-card.html" import display_population_card%}
+
+{%macro display_pheno_dataset_card(species, population, dataset)%}
+{{display_population_card(species, population)}}
+
+<div class="card">
+  <div class="card-body">
+    <h5 class="card-title">Phenotypes' Dataset</h5>
+    <div class="card-text">
+      <table class="table">
+        <tbody>
+          <tr>
+            <td>Name</td>
+            <td>{{dataset.Name}}</td>
+          </tr>
+
+          <tr>
+            <td>Full Name</td>
+            <td>{{dataset.FullName}}</td>
+          </tr>
+
+          <tr>
+            <td>Short Name</td>
+            <td>{{dataset.ShortName}}</td>
+          </tr>
+        </tbody>
+      </table>
+    </div>
+  </div>
+</div>
+{%endmacro%}
diff --git a/uploader/templates/phenotypes/view-dataset.html b/uploader/templates/phenotypes/view-dataset.html
index 959b02a..b136bb6 100644
--- a/uploader/templates/phenotypes/view-dataset.html
+++ b/uploader/templates/phenotypes/view-dataset.html
@@ -1,5 +1,6 @@
 {%extends "phenotypes/base.html"%}
 {%from "flash_messages.html" import flash_all_messages%}
+{%from "macro-table-pagination.html" import table_pagination%}
 {%from "populations/macro-display-population-card.html" import display_population_card%}
 
 {%block title%}Phenotypes{%endblock%}
@@ -36,10 +37,7 @@
       <tbody>
         <tr>
-          <td><a href="{{url_for('species.populations.phenotypes.view_dataset',
-                       species_id=species.SpeciesId,
-                       population_id=population.Id,
-                       dataset_id=dataset.Id)}}">{{dataset.Name}}</a></td>
+          <td>{{dataset.Name}}</td>
           <td>{{dataset.FullName}}</td>
           <td>{{dataset.ShortName}}</td>
         </tr>
@@ -48,13 +46,20 @@
 </div>
 
 <div class="row">
+  <p><a href="{{url_for('species.populations.phenotypes.add_phenotypes',
+               species_id=species.SpeciesId,
+               population_id=population.Id,
+               dataset_id=dataset.Id)}}"
+        title="Add a bunch of phenotypes"
+        class="btn btn-primary">Add phenotypes</a></p>
+</div>
+
+<div class="row">
   <h2>Phenotype Data</h2>
 
   <p>This dataset has a total of {{phenotype_count}} phenotypes.</p>
-  <p class="text-danger">
-    <span class="glyphicon glyphicon-exclamation-sign"></span>
-    Limit access here, according to the authorisation privileges the user has on
-    this dataset!</p>
+
+  {{table_pagination(start_from, count, phenotype_count, url_for('species.populations.phenotypes.view_dataset', species_id=species.SpeciesId, population_id=population.Id, dataset_id=dataset.Id), "phenotypes")}}
 
   <table class="table">
     <thead>
diff --git a/uploader/templates/phenotypes/view-phenotype.html b/uploader/templates/phenotypes/view-phenotype.html
index 5aef6c1..99bb8e5 100644
--- a/uploader/templates/phenotypes/view-phenotype.html
+++ b/uploader/templates/phenotypes/view-phenotype.html
@@ -24,18 +24,101 @@
 {{flash_all_messages()}}
 
 <div class="row">
-  {{resource}}
-</div>
+  <div class="panel panel-default">
+    <div class="panel-heading"><strong>Basic Phenotype Details</strong></div>
 
-<div class="row">
-  <h2>Dataset</h2>
-  {{dataset}}
+    <table class="table">
+      <tbody>
+        <tr>
+          <td><strong>Phenotype</strong></td>
+          <td>{{phenotype.Post_publication_description or phenotype.Pre_publication_abbreviation or phenotype.Original_description}}</td>
+        </tr>
+        <tr>
+          <td><strong>Cross-Reference ID</strong></td>
+          <td>{{phenotype.xref_id}}</td>
+        </tr>
+        <tr>
+          <td><strong>Collation</strong></td>
+          <td>{{dataset.FullName}}</td>
+        </tr>
+        <tr>
+          <td><strong>Units</strong></td>
+          <td>{{phenotype.Units}}</td>
+        </tr>
+      </tbody>
+    </table>
+
+    <form action="#edit-delete-phenotype"
+          method="POST"
+          id="frm-delete-phenotype">
+
+      <input type="hidden" name="species_id" value="{{species.SpeciesId}}" />
+      <input type="hidden" name="population_id" value="{{population.Id}}" />
+      <input type="hidden" name="dataset_id" value="{{dataset.Id}}" />
+      <input type="hidden" name="phenotype_id" value="{{phenotype.Id}}" />
+
+      <div class="btn-group btn-group-justified">
+        <div class="btn-group">
+          {%if "group:resource:edit-resource" in privileges%}
+          <input type="submit"
+                 title="Edit the values for the phenotype. This is meant to be used when you need to update only a few values."
+                 class="btn btn-primary not-implemented"
+                 value="edit" />
+          {%endif%}
+        </div>
+        <div class="btn-group"></div>
+        <div class="btn-group">
+          {%if "group:resource:delete-resource" in privileges%}
+          <input type="submit"
+                 title="Delete the entire phenotype. This is useful when you need to change data for most or all of the fields for this phenotype."
+ class="btn btn-danger not-implemented" + value="delete" /> + {%endif%} + </div> + </div> + </form> + </div> </div> <div class="row"> - <h2>Phenotype</h2> - {{phenotype}} + <div class="panel panel-default"> + <div class="panel-heading"><strong>Phenotype Data</strong></div> + {%if "group:resource:view-resource" in privileges%} + <table class="table"> + <thead> + <tr> + <th>#</th> + <th>Sample</th> + <th>Value</th> + <th>Symbol</th> + <th>SE</th> + <th>N</th> + </tr> + </thead> + + <tbody> + {%for item in phenotype.data%} + <tr> + <td>{{loop.index}}</td> + <td>{{item.StrainName}}</td> + <td>{{item.value}}</td> + <td>{{item.Symbol or "-"}}</td> + <td>{{item.error or "-"}}</td> + <td>{{item.count or "-"}}</td> + </tr> + {%endfor%} + </tbody> + </table> + {%else%} + <p class="text-danger"> + <span class="glyphicon glyphicon-exclamation-sign"></span> + You do not currently have privileges to view this phenotype in greater + detail. + </p> + {%endif%} + </div> </div> + {%endblock%} {%block sidebarcontents%} diff --git a/uploader/templates/populations/macro-display-population-card.html b/uploader/templates/populations/macro-display-population-card.html index e68f8e3..79f7925 100644 --- a/uploader/templates/populations/macro-display-population-card.html +++ b/uploader/templates/populations/macro-display-population-card.html @@ -7,25 +7,39 @@ <div class="card-body"> <h5 class="card-title">Population</h5> <div class="card-text"> - <dl> - <dt>Name</dt> - <dd>{{population.Name}}</dd> + <table class="table"> + <tbody> + <tr> + <td>Name</td> + <td>{{population.Name}}</td> + </tr> - <dt>Full Name</dt> - <dd>{{population.FullName}}</dd> + <tr> + <td>Full Name</td> + <td>{{population.FullName}}</td> + </tr> - <dt>Code</dt> - <dd>{{population.InbredSetCode}}</dd> + <tr> + <td>Code</td> + <td>{{population.InbredSetCode}}</td> + </tr> - <dt>Genetic Type</dt> - <dd>{{population.GeneticType}}</dd> + <tr> + <td>Genetic Type</td> + <td>{{population.GeneticType}}</td> + </tr> - <dt>Family</dt> - <dd>{{population.Family}}</dd> + <tr> + <td>Family</td> + <td>{{population.Family}}</td> + </tr> - <dt>Description</dt> - <dd>{{population.Description or "-"}}</dd> - </dl> + <tr> + <td>Description</td> + <td>{{(population.Description or "")[0:500]}}…</td> + </tr> + </tbody> + </table> </div> </div> </div> diff --git a/uploader/templates/species/macro-display-species-card.html b/uploader/templates/species/macro-display-species-card.html index 857c0f0..166c7b9 100644 --- a/uploader/templates/species/macro-display-species-card.html +++ b/uploader/templates/species/macro-display-species-card.html @@ -3,13 +3,19 @@ <div class="card-body"> <h5 class="card-title">Species</h5> <div class="card-text"> - <dl> - <dt>Common Name</dt> - <dd>{{species.SpeciesName}}</dd> + <table class="table"> + <tbody> + <tr> + <td>Common Name</td> + <td>{{species.SpeciesName}}</td> + </tr> - <dt>Scientific Name</dt> - <dd>{{species.FullName}}</dd> - </dl> + <tr> + <td>Scientific Name</td> + <td>{{species.FullName}}</td> + </tr> + </tbody> + </table> </div> </div> </div> |