Arbitrary number of datasets
In Phase I, the CC contained only 25 datasets. We want to maintain the 5x5 CC structure, but in Phase II we need to be able to incorporate an arbitrary number of datasets with little effort.
This is how we define a dataset (a minimal example record is sketched after the list):
- Code: e.g. `A1.001` // Identifier of the dataset.
- Level: e.g. `A` // Whether it is Chemistry (A), Targets (B), Networks (C), Cells (D) or Clinics (E).
- Coordinate: e.g. `A1` // Coordinates in the CC organization.
- Name: e.g. 2D fingerprints // Display (short) name of the dataset.
- Technical name: e.g. 1024-bit Morgan fingerprints // A more technical name for the dataset, suitable for chemo-/bio-informaticians.
- Description: e.g. 2D fingerprints are... // This field contains a long description of the dataset. It is important that the curator outlines here the importance of the dataset, why he/she decided to include it, and in which scenarios it may be useful.
- Unknowns: `True`/`False` // Does the dataset contain unknown data? Binding data from chemogenomics datasets, for example, are positive-unlabeled, so they do contain unknowns. Conversely, chemical fingerprints or gene expression data do not contain unknowns.
- Permanent: `True`/`False` // Are measurements for each entry permanent? 2D fingerprints, for example, are permanent. However, most biological data may change/evolve with the different versions of the CC. In essence, this field dictates whether the dataset should be completely updated in every update of the CC, or whether new entries can simply be appended.
- Finished: `True`/`False` // Is the dataset considered to be finished? For example, datasets coming from supplementary data of scientific papers are immutable, and they consequently need no updates in posterior versions of the CC.
- Data type: `Discrete`/`Continuous` // The type of data that ultimately expresses the dataset, after pre-processing. Categorical variables are not allowed; they must be converted to one-hot encoding or binarized. Mixed variables are not allowed, either.
- Predicted: `True`/`False` // Is the dataset the result of a prediction (by us or by others)? Prediction results are perfectly valid CC datasets, in principle.
- Connectivity: `True`/`False` // Is there a way to connect this dataset to other biological entities? We understand connectivity as a generalization of the cMap idea of matching gene expression signatures.
- Connectivity comments: Free text commenting on the connectivity strategy (e.g. type of distance). // This field needs to be self-explanatory.
- Keys: e.g. `CPD` (we use @afernandez's Bioteque nomenclature). May be `NULL`. // In the core CC database, most of the time this field will correspond to `CPD`, as the CC is centered on small molecules. It only makes sense to have keys of different types when we do connectivity attempts, for example when mapping disease gene expression signatures.
- Number of keys: e.g. `800000` // Number of samples in the dataset.
- Features: e.g. `GEN` (we use Bioteque nomenclature). May be `NULL`. // When features correspond to explicit knowledge, such as proteins, gene ontology processes, or indications, we express with this field the type of biological entities. It is not allowed to mix different feature types. Features can, however, have no type, typically when they come from a heavily-processed dataset, such as gene-expression data. Even if we use Bioteque nomenclature to define the type of biological data, it is not mandatory that the vocabularies are the ones used by the Bioteque; for example, I can use non-human UniProt ACs if I deem it necessary.
- Number of features: e.g. `1000` // Number of features in the dataset.
- Exemplar: `True`/`False` // Is the dataset the exemplar of its coordinate (A1, A2...)? Only one exemplar dataset is valid for each coordinate. Exemplar datasets should have good coverage (both in key space and feature space) and acceptable data quality.
- Source: Free text defining the source of data. // More than one source is allowed. We have mild constraints on the nomenclature here.
- Version: CC version // The CC is updated every 6 months.
- Public: `True`/`False` // Some datasets are public, and some are not, especially those that come from collaborations with the pharma industry.
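For illustration only, a minimal sketch of how one entry could look as a Python dictionary. The field names and the values for `A1.001` are my assumptions derived from the list above, not a fixed schema:

```python
# Hypothetical record for the 2D fingerprints example dataset (values are illustrative).
dataset_A1_001 = {
    "code": "A1.001",
    "level": "A",
    "coordinate": "A1",
    "name": "2D fingerprints",
    "technical_name": "1024-bit Morgan fingerprints",
    "description": "2D fingerprints are ...",
    "unknowns": False,        # fingerprints contain no unknowns
    "permanent": True,        # a molecule's fingerprint never changes
    "finished": False,        # new molecules keep being appended
    "data_type": "Discrete",
    "predicted": False,
    "connectivity": False,
    "connectivity_comments": None,
    "keys": "CPD",
    "n_keys": 800000,
    "features": None,         # fingerprint bits have no explicit biological type
    "n_features": 1024,
    "exemplar": True,
    "source": "in-house Morgan fingerprints",  # illustrative
    "version": "2019_01",                      # illustrative CC version tag
    "public": True,
}
```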
The information above can be stored in a PostgreSQL table named `datasets`.
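A possible shape for that table, sketched with `psycopg2`. Column names, types and connection parameters are assumptions derived from the field list above, not a final schema:

```python
import psycopg2

# Assumed connection parameters; adapt to the actual CC database.
conn = psycopg2.connect(dbname="cc", user="cc", host="localhost")

DDL = """
CREATE TABLE IF NOT EXISTS datasets (
    code                  VARCHAR(16) PRIMARY KEY,  -- e.g. 'A1.001'
    level                 CHAR(1)     NOT NULL,     -- A, B, C, D or E
    coordinate            VARCHAR(4)  NOT NULL,     -- e.g. 'A1'
    name                  TEXT        NOT NULL,
    technical_name        TEXT,
    description           TEXT,
    unknowns              BOOLEAN,
    permanent             BOOLEAN,
    finished              BOOLEAN,
    data_type             VARCHAR(16),              -- 'Discrete' or 'Continuous'
    predicted             BOOLEAN,
    connectivity          BOOLEAN,
    connectivity_comments TEXT,
    keys                  VARCHAR(16),              -- Bioteque entity type, may be NULL
    n_keys                INTEGER,
    features              VARCHAR(16),              -- Bioteque entity type, may be NULL
    n_features            INTEGER,
    exemplar              BOOLEAN,
    source                TEXT,
    version               VARCHAR(16),
    is_public             BOOLEAN                   -- the 'Public' field
);
"""

with conn, conn.cursor() as cur:
    cur.execute(DDL)
conn.close()
```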
It is important that we decide how to store and organize datasets in the file-system, though. For this I need advice and help from all of you, @mbertoni, @oguitart and @afernandez.
I suggest the following structure in e.g. `aloy/web_checker/` (or somewhere else):
- Each dataset is stored correspondingly, e.g. `./datasets/A/A1/001/`.
- A `./data.h5` file (see the sketch after this list):
  - `V`: the matrix of values (`np.int8`, `np.float32`, ...)
  - `keys`: sorted alphabetically
  - `features`: sorted alphabetically
  - etc.
- A `./processing/` folder where a mini-pipeline is devoted to processing the data until obtaining the final dataset (`data.h5`). Downloads are not stored here, since many downloads are shared between datasets. Downloads and download scripts are wherever @oguitart decides.
- A `./connectivity/` folder where connectivity scripts are stored. I don't know how to organize this yet. This folder will obviously be empty if no connectivity is possible for this dataset.
- A `./models/` folder where persistent models are stored. This folder may often be empty.
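To make the `data.h5` proposal concrete, here is a minimal sketch of writing and reading such a file with `h5py`. The path, key/feature names and array shapes are illustrative assumptions, not real CC data:

```python
import os
import h5py
import numpy as np

# Hypothetical location following the proposed layout: ./datasets/A/A1/001/
path = "./datasets/A/A1/001/data.h5"
os.makedirs(os.path.dirname(path), exist_ok=True)

# Illustrative data: 4 molecules x 8 fingerprint bits.
keys = np.array(sorted([b"MOL_0003", b"MOL_0001", b"MOL_0004", b"MOL_0002"]))
features = np.array(sorted([b"bit_%03d" % i for i in range(8)]))
V = np.random.randint(0, 2, size=(len(keys), len(features))).astype(np.int8)

# Write V, keys and features, with keys/features sorted alphabetically.
with h5py.File(path, "w") as hf:
    hf.create_dataset("V", data=V)
    hf.create_dataset("keys", data=keys)
    hf.create_dataset("features", data=features)

# Read the dataset back.
with h5py.File(path, "r") as hf:
    V = hf["V"][:]
    keys = hf["keys"][:]
    features = hf["features"][:]
```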