|Coordinate|Name|Description|
|---|---|---|
|`A4`|Structural keys|166 functional groups and substructures widely accepted by medicinal chemists (MACCS keys).|
|`A5`|Physicochemistry|Physicochemical properties such as molecular weight, logP and refractivity. Number of hydrogen-bond donors and acceptors, rings, etc. Drug-likeness measurements, e.g. number of structural alerts, Lipinski's rule-of-5 violations or chemical beauty (QED).|
|`B1`|Mechanism of action|Drug targets with known pharmacological action and modes (agonist, antagonist, etc.).|
|`B2`|Metabolic genes|Drug-metabolizing enzymes, transporters and carriers.|
|`B3`|Crystals|Small molecules co-crystallized with protein chains. Data is organized according to the structural families of the protein chains.|
|`B4`|Binding|Compound-protein binding data available in major public chemogenomics databases. Data mainly comes from academic publications and patents. Only binding affinities below a class-specific threshold are kept (kinases ≤ 30 nM, GPCRs ≤ 100 nM, nuclear receptors ≤ 100 nM, ion channels ≤ 10 µM and others ≤ 1 µM).|
|`B5`|HTS bioassays|Hits from screening campaigns against protein targets (mainly confirmatory functional assays below 10 µM).|
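The `B4` cutoffs are plain per-class thresholds on binding affinity. A minimal sketch of the filtering rule (the class labels and helper name are illustrative assumptions, not CC code):

```python
# Hypothetical helper mirroring the class-specific affinity cutoffs
# quoted for B4 (Binding); all values are in nM.
THRESHOLDS_NM = {
    "kinase": 30,             # kinases <= 30 nM
    "gpcr": 100,              # GPCRs <= 100 nM
    "nuclear_receptor": 100,  # nuclear receptors <= 100 nM
    "ion_channel": 10_000,    # ion channels <= 10 uM
}
DEFAULT_NM = 1_000            # all other target classes <= 1 uM

def keep_binding(target_class, affinity_nm):
    """Keep a compound-protein pair only if its affinity meets the class cutoff."""
    return affinity_nm <= THRESHOLDS_NM.get(target_class, DEFAULT_NM)

print(keep_binding("kinase", 25))     # True
print(keep_binding("gpcr", 500))      # False
print(keep_binding("protease", 900))  # True (the 1 uM default applies)
```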
Each coordinate can contain an arbitrary number of **datasets**. All datasets are fully described in the [PostgreSQL database](database) and searchable at `http://chemicalchecker.org/datasets/`. Each receives a numeric code (e.g. `A1.001`).
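A dataset code thus encodes the level, the coordinate and a serial number. A minimal sketch of decomposing it (the regex and helper name are assumptions, not part of the CC codebase):

```python
import re

# Hypothetical parser for dataset codes such as "A1.001":
# one level letter (A-E), one sublevel digit (1-5), a 3-digit serial number.
CODE_RE = re.compile(r"^([A-E])([1-5])\.(\d{3})$")

def parse_dataset_code(code):
    """Return (level, coordinate, number) for a CC dataset code."""
    m = CODE_RE.match(code)
    if m is None:
        raise ValueError(f"invalid dataset code: {code!r}")
    level, sublevel, number = m.groups()
    return level, level + sublevel, int(number)

print(parse_dataset_code("A1.001"))  # ('A', 'A1', 1)
```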
## Dataset characteristics
In **Phase I**, the CC contained only 25 datasets. In **Phase II** we want to **maintain the 5x5 CC structure** while gaining the ability to incorporate an arbitrary number of datasets with little effort.
This is how we define a dataset:
* Code: e.g. `A1.001` *// Identifier of the dataset.*
* Level: e.g. `A` *// Whether it is Chemistry (A), Targets (B), Networks (C), Cells (D) or Clinics (E).*
* Coordinate: `A1` *// Coordinates in the CC organization.*
* Name: 2D fingerprints *// Display (short) name of the dataset.*
* Technical name: 1024-bit Morgan fingerprints *// A more technical name for the dataset, suitable for chemo-/bio-informaticians.*
* Description: 2D fingerprints are... *// This field contains a long description of the dataset. It is important that the curator outlines here the importance of the dataset, why they decided to include it, and the scenarios where this dataset may be useful.*
* Unknowns: `True` / `False` *// Does the dataset contain unknown data? Binding data from chemogenomics datasets, for example, are positive-unlabeled, so they do contain unknowns. Conversely, chemical fingerprints or gene expression data do not contain unknowns.*
* Permanent: `True` / `False` *// Are measurements for each entry permanent? 2D fingerprints, for example, are permanent. However, most biological data may change/evolve with the different versions of the CC. In essence, this field dictates whether the dataset must be completely regenerated at every update of the CC, or whether new entries can simply be appended.*
* Finished: `True` / `False` *// Is the dataset considered to be finished? For example, datasets coming from the supplementary data of scientific papers are immutable, and consequently need no updates in subsequent versions of the CC.*
* Data type: `Discrete` / `Continuous` *// The type of data that ultimately expresses the dataset, after pre-processing. Categorical variables are not allowed; they must be converted to one-hot encoding or binarized. Mixed variables are not allowed, either.*
* Predicted: `True` / `False` *// Is the dataset the result of a prediction (by us or by others)? Prediction results are perfectly valid CC datasets, in principle.*
* Connectivity: `True` / `False` *// Is there a way to connect this dataset to other biological entities? We understand connectivity as a generalization of the cMap idea of matching gene expression signatures.*
* Connectivity comments: Free text commenting on the connectivity strategy (e.g. type of distance). *// This field needs to be self-explanatory.*
* Keys: e.g. `CPD` (we use @afernandez's `Bioteque` nomenclature). May be `NULL`. *// In the core CC database, most of the time this field will correspond to `CPD`, as the CC is centered on small molecules. It only makes sense to have keys of different types when we make connectivity attempts, for example when mapping disease gene-expression signatures.*
* Number of keys: e.g. `800000` *// Number of samples in the dataset.*
* Features: e.g. `GEN` (we use `Bioteque` nomenclature). May be `NULL`. *// When features correspond to explicit knowledge, such as proteins, gene ontology processes, or indications, this field expresses the type of biological entities. Mixing different feature types is not allowed. Features can, however, have no type, typically when they come from a heavily-processed dataset such as gene-expression data. Even though we use `Bioteque` nomenclature to define the type of biological entity, it is not mandatory that the vocabularies are the ones used by the `Bioteque`; for example, I can use non-human UniProt ACs if I deem it necessary.*
* Number of features: e.g. `1000` *// Number of features in the dataset.*
* Exemplar: `True` / `False` *// Is the dataset exemplar of its coordinate (`A1`, `A2`, ...)? Only one exemplar dataset is allowed per coordinate. Exemplar datasets should have good coverage (both in key space and feature space) and acceptable data quality.*
* Source: Free text defining the source of the data. *// More than one source is allowed. We have only mild constraints on the nomenclature here.*
* Version: CC version *// The CC is updated every 6 months.*
* Public: `True` / `False` *// Some datasets are public, and some are not, especially those that come from collaborations with the pharma industry.*
The information above can be stored in a PostgreSQL table named `datasets`.
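A self-contained sketch of such a table follows. SQLite stands in for PostgreSQL here purely so the example runs anywhere; the column names mirror the fields above, but the exact schema (types, constraints) is an assumption, not the real CC DDL:

```python
import sqlite3

# Illustrative schema only: the real table lives in PostgreSQL.
SCHEMA = """
CREATE TABLE datasets (
    code            TEXT PRIMARY KEY,  -- e.g. 'A1.001'
    level           TEXT NOT NULL,     -- A..E
    coordinate      TEXT NOT NULL,     -- e.g. 'A1'
    name            TEXT,
    technical_name  TEXT,
    description     TEXT,
    unknowns        BOOLEAN,
    permanent       BOOLEAN,
    finished        BOOLEAN,
    data_type       TEXT CHECK (data_type IN ('Discrete', 'Continuous')),
    predicted       BOOLEAN,
    connectivity    BOOLEAN,
    connectivity_comments TEXT,
    keys            TEXT,              -- Bioteque entity type, may be NULL
    n_keys          INTEGER,
    features        TEXT,              -- Bioteque entity type, may be NULL
    n_features      INTEGER,
    exemplar        BOOLEAN,
    source          TEXT,
    version         TEXT,
    public          BOOLEAN
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute(
    "INSERT INTO datasets (code, level, coordinate, name, data_type, exemplar) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("A1.001", "A", "A1", "2D fingerprints", "Discrete", True),
)
row = conn.execute(
    "SELECT coordinate, name FROM datasets WHERE code = 'A1.001'"
).fetchone()
print(row)  # ('A1', '2D fingerprints')
```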
It is important that we decide how to store and organize datasets in the file system, though. For this I need advice and help from all of you, @mbertoni, @oguitart and @afernandez.
I suggest the following structure in e.g. `aloy/web_checker/` (or somewhere else):
* Each dataset is stored correspondingly, e.g. `./datasets/A/A1/001/`.
* A `./data.h5` file:
    * `V`: the matrix of values (`np.int8`, `np.float32`, ...)
    * `keys`: *sorted* alphabetically
    * `features`: *sorted* alphabetically
    * etc.
* A `./processing/` folder containing a mini-pipeline that processes the data until the final dataset (`data.h5`) is obtained. **Downloads are not here!**, since many downloads are shared between datasets. Downloads and download scripts are wherever @oguitart decides.
* A `./connectivity/` folder where connectivity scripts are stored. I don't know how to organize this yet. This folder will obviously be empty if no connectivity is possible for this dataset.
* A `./models/` folder where persistent models are stored. This folder may often be empty.
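Assuming the layout above, the path derivation and the sorted-keys invariant for `data.h5` could look as follows. This is a numpy-only sketch: the actual write would use e.g. `h5py`, and the helper names and root path are illustrative assumptions.

```python
import numpy as np

def dataset_path(code, root="aloy/web_checker/datasets"):
    # "A1.001" -> "aloy/web_checker/datasets/A/A1/001"
    coordinate, number = code.split(".")
    return "/".join([root, coordinate[0], coordinate, number])

def sort_matrix(V, keys, features):
    """Reorder rows/columns of V so that keys and features are alphabetical,
    matching the sorted order required inside data.h5."""
    V = np.asarray(V)
    kidx = np.argsort(keys)
    fidx = np.argsort(features)
    return V[np.ix_(kidx, fidx)], sorted(keys), sorted(features)

# Toy example: 3 keys x 2 features, deliberately out of order.
V = [[1, 0], [0, 1], [1, 1]]
keys = ["MOLC", "MOLA", "MOLB"]
features = ["GEN2", "GEN1"]
Vs, ks, fs = sort_matrix(V, keys, features)

print(dataset_path("A1.001"))  # aloy/web_checker/datasets/A/A1/001
print(ks, fs)                  # ['MOLA', 'MOLB', 'MOLC'] ['GEN1', 'GEN2']
```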