Pipeline scripts are used to produce CC signatures, models and analyses.
|
* [Updates every 6 months](#six-month-pipeline).
* [Sporadic addition or updates of datasets](#sporadic-datasets).
|
2. [Mapping of new data](#new-data-mapping)
   * Mapping of data to a dataset
   * Individual querying
   * Connectivity
|
|
|
|
|
### Dataset addition
|
Note that, necessarily, adding new data to the CC will require some scripting. Please refer to [dataset processing](datasets#dataset-processing) for guidelines.
|
|
|
|
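The shape of such a dataset-addition script can be sketched as follows. This is a minimal, hypothetical example: the `preprocess` function, the input format and the key-to-features mapping are assumptions for illustration, not the actual Chemical Checker API.

```python
# Hypothetical preprocessing sketch: convert a raw annotation table into a
# key -> features mapping, the kind of structure a CC-style dataset expects.
# Column names and file layout here are illustrative assumptions.
import csv
import io
from collections import defaultdict

RAW_TSV = """inchikey\tfeature
AAAAAAAAAAAAAA-BBBBBBBBBB-C\tpathway:P001
AAAAAAAAAAAAAA-BBBBBBBBBB-C\tpathway:P007
DDDDDDDDDDDDDD-EEEEEEEEEE-F\tpathway:P001
"""

def preprocess(raw_text):
    """Group raw (key, feature) rows into a sorted, de-duplicated mapping."""
    mapping = defaultdict(set)
    reader = csv.DictReader(io.StringIO(raw_text), delimiter="\t")
    for row in reader:
        mapping[row["inchikey"]].add(row["feature"])
    # Sorted lists make the output deterministic, which helps when the
    # downstream signaturization steps are re-run.
    return {key: sorted(feats) for key, feats in mapping.items()}

dataset = preprocess(RAW_TSV)
print(dataset["AAAAAAAAAAAAAA-BBBBBBBBBB-C"])  # -> ['pathway:P001', 'pathway:P007']
```

The concrete parsing logic will differ per dataset, which is exactly why each addition needs its own script.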
### New data mapping
When we want to map or connect external data, we make heavy use of the predictors learned in the dataset addition phase. There is, therefore, no need for heavy computations in this step and we might consider offering the ability to perform this part of the pipeline outside the `pac-one cluster`. The only potentially heavy predictions are the ones regarding the processing step.
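The cost asymmetry described above can be illustrated with a toy stand-in for a learned predictor; the class, method names and data are hypothetical, not the actual CC code:

```python
# Illustrative sketch: the expensive work (fitting a predictor) happens once,
# during dataset addition; mapping external data afterwards is a cheap
# predict() call, which is why it could plausibly run outside the cluster.
import math

class NearestCentroidPredictor:
    """Toy stand-in for a predictor learned during dataset addition."""

    def fit(self, vectors, labels):
        # Heavy step: done once, on the cluster.
        sums, counts = {}, {}
        for vec, lab in zip(vectors, labels):
            acc = sums.setdefault(lab, [0.0] * len(vec))
            for i, v in enumerate(vec):
                acc[i] += v
            counts[lab] = counts.get(lab, 0) + 1
        self.centroids = {
            lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()
        }
        return self

    def predict(self, vec):
        # Light step: mapping a new external molecule is a single lookup.
        def dist(centroid):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, centroid)))
        return min(self.centroids, key=lambda lab: dist(self.centroids[lab]))

predictor = NearestCentroidPredictor().fit(
    [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [4.8, 5.1]],
    ["inactive", "inactive", "active", "active"],
)
print(predictor.predict([4.9, 4.9]))  # new external data point -> "active"
```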
|
|
|
|
|
|
|
|
![cc_pipelines-new-molecule.svg](/uploads/69875e65be5fd093faaf647166237eb4/cc_pipelines-new-molecule.svg)
|
|
|