diff --git a/.github/workflows/docker-image.yml b/.github/workflows/docker-image.yml index 6d7681231..b2ac124b2 100644 --- a/.github/workflows/docker-image.yml +++ b/.github/workflows/docker-image.yml @@ -11,7 +11,7 @@ jobs: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - - name: Cleanup + - name: Cleanup # https://github.com/actions/virtual-environments/issues/2840 run: sudo rm -rf /usr/share/dotnet && sudo rm -rf /opt/ghc && sudo rm -rf "/usr/local/share/boost" && sudo rm -rf "$AGENT_TOOLSDIRECTORY" - name: Build the Docker image run: docker build . --file Dockerfile --tag openfold:$(date +%s) diff --git a/docs/source/Aux_seq_files.md b/docs/source/Aux_seq_files.md index 820872fc3..41a94da1f 100644 --- a/docs/source/Aux_seq_files.md +++ b/docs/source/Aux_seq_files.md @@ -68,9 +68,9 @@ All together, the file directory would look like: └── 6kwc.cif └── alignment_db ├── alignment_db_0.db - ├── alignment_db_1.db - ... - ├── alignment_db_9.db + ├── alignment_db_1.db + ... + ├── alignment_db_9.db └── alignment_db.index ``` diff --git a/docs/source/Inference.md b/docs/source/Inference.md index 02f35685d..1e40f59ff 100644 --- a/docs/source/Inference.md +++ b/docs/source/Inference.md @@ -42,7 +42,7 @@ $ bash scripts/download_openfold_params.sh $PARAMS_DIR We recommend selecting `openfold/resources` as the params directory as this is the default directory used by the `run_pretrained_openfold.py` to locate parameters. -If you choose to use a different directory, you may make a symlink to the `openfold/resources` directory, or specify an alternate parameter path with the command line argument `--jax_path` for AlphaFold parameters or `--openfold_checkpoint_path` for OpenFold parameters. +If you choose to use a different directory, you may make a symlink to the `openfold/resources` directory, or specify an alternate parameter path with the command line argument `--jax_param_path` for AlphaFold parameters or `--openfold_checkpoint_path` for OpenFold parameters. 
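The symlink option described above can also be scripted. A minimal Python sketch, assuming a hypothetical helper name and that `openfold/resources` does not yet exist in the checkout (the shell equivalent is simply `ln -s $PARAMS_DIR openfold/resources`):

```python
from pathlib import Path

def link_params_dir(params_dir: str, resources_dir: str = "openfold/resources") -> None:
    """Point the default resources directory at an existing params directory.

    Sketch only: assumes `params_dir` already exists and that
    `resources_dir` is not yet present, since symlink_to() fails otherwise.
    """
    resources = Path(resources_dir)
    # make sure the parent directory (e.g. the openfold/ checkout) exists
    resources.parent.mkdir(parents=True, exist_ok=True)
    resources.symlink_to(Path(params_dir).resolve(), target_is_directory=True)
```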
### Model Inference @@ -62,7 +62,7 @@ python3 run_pretrained_openfold.py \ $TEMPLATE_MMCIF_DIR --output_dir $OUTPUT_DIR \ --config_preset model_1_ptm \ - --uniref90_database_path $BASE_DATA_DIR/uniref90 \ + --uniref90_database_path $BASE_DATA_DIR/uniref90/uniref90.fasta \ --mgnify_database_path $BASE_DATA_DIR/mgnify/mgy_clusters_2018_12.fa \ --pdb70_database_path $BASE_DATA_DIR/pdb70 \ --uniclust30_database_path $BASE_DATA_DIR/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \ @@ -138,6 +138,7 @@ Some commonly used command line flags are here. A full list of flags can be view - `--data_random_seed`: Specifies a random seed to use. - `--save_outputs`: Saves a copy of all outputs from the model, e.g. the output of the msa track, ptm heads. - `--experiment_config_json`: Specify configuration settings using a json file. For example, passing a json with `{globals.relax.max_iterations = 10}` specifies 10 as the maximum number of relaxation iterations. See for [`openfold/config.py`](https://github.com/aqlaboratory/openfold/blob/main/openfold/config.py#L283) the full dictionary of configuration settings. Any parameters that are not manually set in these configuration settings will refer to the defaults specified by your `config_preset`. +- `--use_custom_template`: Uses all .cif files in `template_mmcif_dir` as template input. Make sure the chains of interest have the identifier _A_ and have the same length as the input sequence. The same templates will be read for all sequences that are passed for inference. ### Advanced Options for Increasing Efficiency @@ -159,12 +160,12 @@ Note that chunking (as defined in section 1.11.8 of the AlphaFold 2 supplement) #### Long sequence inference To minimize memory usage during inference on long sequences, consider the following changes: -- As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template stack is a major memory bottleneck for inference on long sequences. 
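The chunking idea referenced above (AlphaFold 2 supplement, section 1.11.8) trades one large activation for a sequence of small ones. This is not OpenFold's implementation, just a toy NumPy sketch of the principle: attending over query chunks produces exactly the same output while only materializing a `[chunk_size, N_k]` score block at a time.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)  # numerically stable softmax
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def attention(q, k, v):
    # the full [N_q, N_k] score matrix is live in memory at once
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def chunked_attention(q, k, v, chunk_size):
    # only a [chunk_size, N_k] score block is live per step
    out = [attention(q[i : i + chunk_size], k, v)
           for i in range(0, len(q), chunk_size)]
    return np.concatenate(out, axis=0)
```

Because softmax rows are independent, the chunked result is bit-for-bit equivalent in exact arithmetic; the cost is extra kernel launches, which is why a tuned `chunk_size` matters.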
OpenFold supports two mutually exclusive inference modes to address this issue. One, `average_templates` in the `template` section of the config, is similar to the solution offered by AlphaFold-Multimer, which is simply to average individual template representations. Our version is modified slightly to accommodate weights trained using the standard template algorithm. Using said weights, we notice no significant difference in performance between our averaged template embeddings and the standard ones. The second, `offload_templates`, temporarily offloads individual template embeddings into CPU memory. The former is an approximation while the latter is slightly slower; both are memory-efficient and allow the model to utilize arbitrarily many templates across sequence lengths. Both are disabled by default, and it is up to the user to determine which best suits their needs, if either. -- Inference-time low-memory attention (LMA) can be enabled in the model config. This setting trades off speed for vastly improved memory usage. By default, LMA is run with query and key chunk sizes of 1024 and 4096, respectively. These represent a favorable tradeoff in most memory-constrained cases. Powerusers can choose to tweak these settings in `openfold/model/primitives.py`. For more information on the LMA algorithm, see the aforementioned Staats & Rabe preprint. -- Disable `tune_chunk_size` for long sequences. Past a certain point, it only wastes time. -- As a last resort, consider enabling `offload_inference`. This enables more extensive CPU offloading at various bottlenecks throughout the model. +- As noted in the AlphaFold-Multimer paper, the AlphaFold/OpenFold template stack is a major memory bottleneck for inference on long sequences. OpenFold supports two mutually exclusive inference modes to address this issue. 
One, `average_templates` in the `template` section of the config, is similar to the solution offered by AlphaFold-Multimer, which is simply to average individual template representations. Our version is modified slightly to accommodate weights trained using the standard template algorithm. Using said weights, we notice no significant difference in performance between our averaged template embeddings and the standard ones. The second, `offload_templates`, temporarily offloads individual template embeddings into CPU memory. The former is an approximation while the latter is slightly slower; both are memory-efficient and allow the model to utilize arbitrarily many templates across sequence lengths. Both are disabled by default, and it is up to the user to determine which best suits their needs, if either. +- Inference-time low-memory attention (LMA) can be enabled in the model config. This setting trades off speed for vastly improved memory usage. By default, LMA is run with query and key chunk sizes of 1024 and 4096, respectively. These represent a favorable tradeoff in most memory-constrained cases. Power users can choose to tweak these settings in `openfold/model/primitives.py`. For more information on the LMA algorithm, see the aforementioned Staats & Rabe preprint. +- Disable `tune_chunk_size` for long sequences. Past a certain point, it only wastes time. +- As a last resort, consider enabling `offload_inference`. This enables more extensive CPU offloading at various bottlenecks throughout the model. - Disable FlashAttention, which seems unstable on long sequences. -Using the most conservative settings, we were able to run inference on a 4600-residue complex with a single A100. Compared to AlphaFold's own memory offloading mode, ours is considerably faster; the same complex takes the more efficent AlphaFold-Multimer more than double the time. Use the `long_sequence_inference` config option to enable all of these interventions at once. 
The `run_pretrained_openfold.py` script can enable this config option with the `--long_sequence_inference` command line option +Using the most conservative settings, we were able to run inference on a 4600-residue complex with a single A100. Compared to AlphaFold's own memory offloading mode, ours is considerably faster; the same complex takes the more efficient AlphaFold-Multimer more than double the time. Use the `long_sequence_inference` config option to enable all of these interventions at once. The `run_pretrained_openfold.py` script can enable this config option with the `--long_sequence_inference` command line option. -Input FASTA files containing multiple sequences are treated as complexes. In this case, the inference script runs AlphaFold-Gap, a hack proposed [here](https://twitter.com/minkbaek/status/1417538291709071362?lang=en), using the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer). \ No newline at end of file +Input FASTA files containing multiple sequences are treated as complexes. In this case, the inference script runs AlphaFold-Gap, a hack proposed [here](https://twitter.com/minkbaek/status/1417538291709071362?lang=en), using the specified stock AlphaFold/OpenFold parameters (NOT AlphaFold-Multimer). \ No newline at end of file diff --git a/docs/source/installation.md b/docs/source/Installation.md similarity index 68% rename from docs/source/installation.md rename to docs/source/Installation.md index fc9ecddd7..6b9599e83 100644 --- a/docs/source/installation.md +++ b/docs/source/Installation.md @@ -4,7 +4,7 @@ In this guide, we will install OpenFold and its dependencies. **Pre-requisites** -This package is currently supported for CUDA 11 and Pytorch 1.12. All dependencies are listed in the [`environment.yml`](https://github.com/aqlaboratory/openfold/blob/main/environment.yml) +This package is currently supported for CUDA 12 and PyTorch 2. 
All dependencies are listed in the [`environment.yml`](https://github.com/aqlaboratory/openfold/blob/main/environment.yml). At this time, only Linux systems are supported. @@ -19,10 +19,17 @@ At this time, only Linux systems are supported. Mamba is recommended as the dependencies required by OpenFold are quite large and mamba can speed up the process. - Activate the environment, e.g `conda activate openfold_env` 1. Run the setup script to configure kernels and folding resources. - > scripts/install_third_party_dependencies.sh` -3. Prepend the conda environment to the $LD_LIBRARY_PATH., e.g. - `export $LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH``. You may optionally set this as a conda environment variable according to the [conda docs](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#saving-environment-variables) to activate each time the environment is used. -4. Download parameters. We recommend using a destination as `openfold/resources` as our unittests will look for the weights there. + > scripts/install_third_party_dependencies.sh +1. Prepend the conda environment to the `$LD_LIBRARY_PATH` and `$LIBRARY_PATH`, e.g.: + + ``` + export LIBRARY_PATH=$CONDA_PREFIX/lib:$LIBRARY_PATH + export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH + ``` + + You may optionally set these as conda environment variables according to the [conda docs](https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#saving-environment-variables) so they are applied each time the environment is activated. + +1. Download parameters. We recommend using `openfold/resources` as the destination, as our unit tests will look for the weights there. - For AlphaFold2 weights, use > ./scripts/download_alphafold_params.sh - For OpenFold weights, use : @@ -46,10 +53,9 @@ Certain tests perform equivalence comparisons with the AlphaFold implementation. 
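The equivalence comparisons mentioned above reduce to tolerance-based tensor checks. A hedged sketch of the pattern (the helper name and tolerance are illustrative, not the actual test suite):

```python
import numpy as np

def assert_equivalent(ours, reference, atol=1e-4):
    """Assert two implementations produced numerically equivalent outputs."""
    ours, reference = np.asarray(ours), np.asarray(reference)
    assert ours.shape == reference.shape, "shape mismatch"
    max_diff = np.abs(ours - reference).max()
    assert max_diff <= atol, f"max abs difference {max_diff} exceeds {atol}"
```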
## Environment specific modifications -### CUDA 12 -To use OpenFold on CUDA 12 environment rather than a CUDA 11 environment. - In step 1, use the branch [`pl_upgrades`](https://github.com/aqlaboratory/openfold/tree/pl_upgrades) rather than the main branch, i.e. replace the URL in step 1 with https://github.com/aqlaboratory/openfold/tree/pl_upgrades - Follow the rest of the steps of [Installation Guide](#Installation) +### MPI +To use OpenFold with MPI support, you will need to add the package [`mpi4py`](https://pypi.org/project/mpi4py/). This can be done with pip in your OpenFold environment, e.g. `$ pip install mpi4py`. + ### Install OpenFold parameters without aws If you don't have access to `aws` on your system, you can use a different download source: @@ -59,4 +65,4 @@ If you don't have access to `aws` on your system, you can use a different downlo ### Docker setup -A [`Dockerfile`] is provided to build an OpenFold Docker image. Additional notes for setting up a docker container for OpenFold and running inference can be found [here](original_readme.md#building-and-using-the-docker-container). +A [`Dockerfile`](https://github.com/aqlaboratory/openfold/blob/main/Dockerfile) is provided to build an OpenFold Docker image. Additional notes for setting up a docker container for OpenFold and running inference can be found [here](original_readme.md#building-and-using-the-docker-container). diff --git a/docs/source/Multimer_Inference.md b/docs/source/Multimer_Inference.md index c32d4b030..d8c7a36b9 100644 --- a/docs/source/Multimer_Inference.md +++ b/docs/source/Multimer_Inference.md @@ -72,8 +72,7 @@ python3 run_pretrained_openfold.py \ --output_dir ./ ``` -Note that template searching in the multimer pipeline -uses HMMSearch with the PDB SeqRes database, replacing HHSearch and PDB70 used in the monomer pipeline. - -As with monomer inference, if you've already computed alignments for the query, you can use -the `--use_precomputed_alignments` option. 
\ No newline at end of file +**Notes:** +- Template searching in the multimer pipeline uses HMMSearch with the PDB SeqRes database, replacing HHSearch and PDB70 used in the monomer pipeline. +- As with monomer inference, if you've already computed alignments for the query, you can use the `--use_precomputed_alignments` option. +- At this time, only AlphaFold parameter weights are available for multimer mode. \ No newline at end of file diff --git a/docs/source/index.md b/docs/source/index.md index ae58587b4..5da449197 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -5,7 +5,7 @@ :align: center :alt: Comparison of OpenFold and AlphaFold2 predictions to the experimental structure of PDB 7KDX, chain B._ ``` -Welcome to the Documentation for OpenFold, the fully open source, trainable, PyTorch-based reproduction of DeepMind's +Welcome to the Documentation for [OpenFold](https://github.com/aqlaboratory/openfold), the fully open source, trainable, PyTorch-based reproduction of DeepMind's [AlphaFold 2](https://github.com/deepmind/alphafold). 
Here, you will find guides and documentation for: @@ -115,4 +115,4 @@ Aux_seq_files.md OpenFold_Parameters.md FAQ.md original_readme.md -``` \ No newline at end of file +``` diff --git a/environment.yml b/environment.yml index ad28bf507..448959007 100644 --- a/environment.yml +++ b/environment.yml @@ -8,34 +8,33 @@ dependencies: - cuda - gcc=12.4 - python=3.10 - - libgcc=7.2 - setuptools=59.5.0 - pip - - openmm=7.7 + - openmm - pdbfixer - pytorch-lightning - biopython - - numpy<2.0.0 + - numpy - pandas - - PyYAML==5.4.1 + - PyYAML - requests - scipy - - tqdm==4.62.2 + - tqdm - typing-extensions - wandb - modelcif==0.7 - awscli - ml-collections - - mkl=2022.1 - aria2 + - mkl - git - bioconda::hmmer - bioconda::hhsuite - bioconda::kalign2 - - pytorch::pytorch=2.1 - - pytorch::pytorch-cuda=12.1 + - pytorch::pytorch=2.5 + - pytorch::pytorch-cuda=12.4 - pip: - - deepspeed==0.12.4 + - deepspeed==0.14.5 - dm-tree==0.1.6 - git+https://github.com/NVIDIA/dllogger.git - flash-attn diff --git a/notebooks/OpenFold.ipynb b/notebooks/OpenFold.ipynb index d864abca1..dfdaa1022 100644 --- a/notebooks/OpenFold.ipynb +++ b/notebooks/OpenFold.ipynb @@ -1,5 +1,15 @@ { "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "view-in-github", + "colab_type": "text" + }, + "source": [ + "\"Open" + ] + }, { "cell_type": "markdown", "metadata": { @@ -107,11 +117,11 @@ "\n", "python_version = f\"{version_info.major}.{version_info.minor}\"\n", "\n", - "\n", - "os.system(\"wget -qnc https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh\")\n", - "os.system(\"bash Mambaforge-Linux-x86_64.sh -bfp /usr/local\")\n", + "os.system(\"wget -qnc https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh\")\n", + "os.system(\"bash Miniforge3-Linux-x86_64.sh -bfp /usr/local\")\n", + "os.environ[\"PATH\"] = \"/usr/local/bin:\" + os.environ[\"PATH\"]\n", "os.system(\"mamba config --set auto_update_conda false\")\n", - 
"os.system(f\"mamba install -y -c conda-forge -c bioconda kalign2=2.04 hhsuite=3.3.0 openmm=7.7.0 python={python_version} pdbfixer biopython=1.79\")\n", + "os.system(f\"mamba install -y -c conda-forge -c bioconda kalign2=2.04 hhsuite=3.3.0 openmm=8.2.0 python={python_version} pdbfixer biopython=1.83\")\n", "os.system(\"pip install -q torch ml_collections py3Dmol modelcif\")\n", "\n", "try:\n", @@ -127,7 +137,7 @@ "\n", " %shell mkdir -p /content/openfold/openfold/resources\n", "\n", - " commit = \"a96ffd67f8c96f8c4decc3abdd2cffbb57fc5764\"\n", + " commit = \"1ffd197489aa5f35a5fbce1f00d7dd49bce1bd2f\"\n", " os.system(f\"pip install -q git+https://github.com/aqlaboratory/openfold.git@{commit}\")\n", "\n", " os.system(f\"cp -f -p /content/stereo_chemical_props.txt /usr/local/lib/python{python_version}/site-packages/openfold/resources/\")\n", @@ -893,8 +903,7 @@ "metadata": { "colab": { "provenance": [], - "gpuType": "T4", - "toc_visible": true + "gpuType": "T4" }, "kernelspec": { "display_name": "Python 3", @@ -907,4 +916,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/openfold/config.py b/openfold/config.py index 7bf30e391..a738b9f07 100644 --- a/openfold/config.py +++ b/openfold/config.py @@ -660,7 +660,7 @@ def model_config( }, "relax": { "max_iterations": 0, # no max - "tolerance": 2.39, + "tolerance": 10.0, "stiffness": 10.0, "max_outer_iterations": 20, "exclude_residues": [], diff --git a/openfold/data/data_pipeline.py b/openfold/data/data_pipeline.py index adde0b73b..393c1cef3 100644 --- a/openfold/data/data_pipeline.py +++ b/openfold/data/data_pipeline.py @@ -23,8 +23,19 @@ from typing import Mapping, Optional, Sequence, Any, MutableMapping, Union import numpy as np import torch -from openfold.data import templates, parsers, mmcif_parsing, msa_identifiers, msa_pairing, feature_processing_multimer -from openfold.data.templates import get_custom_template_features, empty_template_feats +from openfold.data import ( + 
templates, + parsers, + mmcif_parsing, + msa_identifiers, + msa_pairing, + feature_processing_multimer, +) +from openfold.data.templates import ( + get_custom_template_features, + empty_template_feats, + CustomHitFeaturizer, +) from openfold.data.tools import jackhmmer, hhblits, hhsearch, hmmsearch from openfold.np import residue_constants, protein @@ -38,7 +49,9 @@ def make_template_features( template_featurizer: Any, ) -> FeatureDict: hits_cat = sum(hits.values(), []) - if(len(hits_cat) == 0 or template_featurizer is None): + if template_featurizer is None or ( + len(hits_cat) == 0 and not isinstance(template_featurizer, CustomHitFeaturizer) + ): template_features = empty_template_feats(len(input_sequence)) else: templates_result = template_featurizer.get_templates( diff --git a/openfold/data/mmcif_parsing.py b/openfold/data/mmcif_parsing.py index 6a6c1fd5b..f83fec8f8 100644 --- a/openfold/data/mmcif_parsing.py +++ b/openfold/data/mmcif_parsing.py @@ -283,7 +283,7 @@ def parse( author_chain = mmcif_to_author_chain_id[chain_id] seq = [] for monomer in seq_info: - code = PDBData.protein_letters_3to1.get(monomer.id, "X") + code = PDBData.protein_letters_3to1_extended.get(monomer.id, "X") seq.append(code if len(code) == 1 else "X") seq = "".join(seq) author_chain_to_sequence[author_chain] = seq diff --git a/openfold/data/templates.py b/openfold/data/templates.py index 8c55b5f38..5c82b7025 100644 --- a/openfold/data/templates.py +++ b/openfold/data/templates.py @@ -22,6 +22,7 @@ import json import logging import os +from pathlib import Path import re from typing import Any, Dict, Mapping, Optional, Sequence, Tuple @@ -947,55 +948,71 @@ def _process_single_hit( def get_custom_template_features( - mmcif_path: str, - query_sequence: str, - pdb_id: str, - chain_id: str, - kalign_binary_path: str): - - with open(mmcif_path, "r") as mmcif_path: - cif_string = mmcif_path.read() - - mmcif_parse_result = mmcif_parsing.parse( - file_id=pdb_id, mmcif_string=cif_string - ) - 
template_sequence = mmcif_parse_result.mmcif_object.chain_to_seqres[chain_id] - - - mapping = {x:x for x, _ in enumerate(query_sequence)} - - - features, warnings = _extract_template_features( - mmcif_object=mmcif_parse_result.mmcif_object, - pdb_id=pdb_id, - mapping=mapping, - template_sequence=template_sequence, - query_sequence=query_sequence, - template_chain_id=chain_id, - kalign_binary_path=kalign_binary_path, - _zero_center_positions=True - ) - features["template_sum_probs"] = [1.0] - - # TODO: clean up this logic - template_features = {} - for template_feature_name in TEMPLATE_FEATURES: - template_features[template_feature_name] = [] - - for k in template_features: - template_features[k].append(features[k]) - - for name in template_features: - template_features[name] = np.stack( - template_features[name], axis=0 - ).astype(TEMPLATE_FEATURES[name]) + mmcif_path: str, + query_sequence: str, + pdb_id: str, + chain_id: Optional[str] = "A", + kalign_binary_path: Optional[str] = None, +): + if os.path.isfile(mmcif_path): + template_paths = [Path(mmcif_path)] + elif os.path.isdir(mmcif_path): + template_paths = list(Path(mmcif_path).glob("*.cif")) + else: + logging.error("Custom template path %s does not exist", mmcif_path) + raise ValueError(f"Custom template path {mmcif_path} does not exist") + + warnings = [] + template_features = dict() + for template_path in template_paths: + logging.info("Featurizing template: %s", template_path) + # pdb_id only for error reporting, take file name + pdb_id = Path(template_path).stem + with open(template_path, "r") as mmcif_path: + cif_string = mmcif_path.read() + mmcif_parse_result = mmcif_parsing.parse( + file_id=pdb_id, mmcif_string=cif_string + ) + # mapping skipping "-" + mapping = { + x: x for x, curr_char in enumerate(query_sequence) if curr_char.isalnum() + } + realigned_sequence, realigned_mapping = _realign_pdb_template_to_query( + old_template_sequence=query_sequence, + template_chain_id=chain_id, + 
mmcif_object=mmcif_parse_result.mmcif_object, + old_mapping=mapping, + kalign_binary_path=kalign_binary_path, + ) + curr_features, curr_warnings = _extract_template_features( + mmcif_object=mmcif_parse_result.mmcif_object, + pdb_id=pdb_id, + mapping=realigned_mapping, + template_sequence=realigned_sequence, + query_sequence=query_sequence, + template_chain_id=chain_id, + kalign_binary_path=kalign_binary_path, + _zero_center_positions=True, + ) + curr_features["template_sum_probs"] = [ + 1.0 + ] # template given by user, 100% confident + template_features = { + curr_name: template_features.get(curr_name, []) + [curr_item] + for curr_name, curr_item in curr_features.items() + } + warnings.append(curr_warnings) + template_features = { + template_feature_name: np.stack( + template_features[template_feature_name], axis=0 + ).astype(template_feature_type) + for template_feature_name, template_feature_type in TEMPLATE_FEATURES.items() + } return TemplateSearchResult( features=template_features, errors=None, warnings=warnings ) - - @dataclasses.dataclass(frozen=True) class TemplateSearchResult: features: Mapping[str, Any] @@ -1188,6 +1205,23 @@ def get_templates( ) +class CustomHitFeaturizer(TemplateHitFeaturizer): + """Featurizer for templates given in folder. 
+ The chain of interest must be chain A and have the same sequence length as the input sequence.""" + def get_templates( + self, + query_sequence: str, + hits: Sequence[parsers.TemplateHit], + ) -> TemplateSearchResult: + """Computes the templates for the given query sequence (more details above).""" + logging.info("Featurizing mmcif_dir: %s", self._mmcif_dir) + return get_custom_template_features( + self._mmcif_dir, + query_sequence=query_sequence, + pdb_id="test", + chain_id="A", + kalign_binary_path=self._kalign_binary_path, + ) class HmmsearchHitFeaturizer(TemplateHitFeaturizer): def get_templates( self, diff --git a/openfold/model/primitives.py b/openfold/model/primitives.py index e5735d1f0..c35472539 100644 --- a/openfold/model/primitives.py +++ b/openfold/model/primitives.py @@ -808,7 +808,7 @@ def _flash_attn(q, k, v, kv_mask): # [B_flat, N, 2 * H * C] kv = kv.reshape(*kv.shape[:-3], -1) - kv_unpad, _, kv_cu_seqlens, kv_max_s = unpad_input(kv, kv_mask) + kv_unpad, _, kv_cu_seqlens, kv_max_s, _ = unpad_input(kv, kv_mask) kv_unpad = kv_unpad.reshape(-1, *kv_shape[-3:]) out = flash_attn_varlen_kvpacked_func( diff --git a/openfold/np/relax/amber_minimize.py b/openfold/np/relax/amber_minimize.py index 02816bb81..43d9337e4 100644 --- a/openfold/np/relax/amber_minimize.py +++ b/openfold/np/relax/amber_minimize.py @@ -34,6 +34,7 @@ from openmm.app.internal.pdbstructure import PdbStructure ENERGY = unit.kilocalories_per_mole +FORCE = unit.kilojoules_per_mole / unit.nanometer LENGTH = unit.angstroms @@ -439,7 +440,7 @@ def _run_one_iteration( exclude_residues = exclude_residues or [] # Assign physical dimensions. 
- tolerance = tolerance * ENERGY + tolerance = tolerance * FORCE stiffness = stiffness * ENERGY / (LENGTH ** 2) start = time.perf_counter() diff --git a/run_pretrained_openfold.py b/run_pretrained_openfold.py index 3cd7c25c4..510610493 100644 --- a/run_pretrained_openfold.py +++ b/run_pretrained_openfold.py @@ -185,12 +185,7 @@ def main(args): use_deepspeed_evoformer_attention=args.use_deepspeed_evoformer_attention, ) - if args.experiment_config_json: - with open(args.experiment_config_json, 'r') as f: - custom_config_dict = json.load(f) - config.update_from_flattened_dict(custom_config_dict) - - if args.experiment_config_json: + if args.experiment_config_json: with open(args.experiment_config_json, 'r') as f: custom_config_dict = json.load(f) config.update_from_flattened_dict(custom_config_dict) @@ -202,8 +197,15 @@ def main(args): ) is_multimer = "multimer" in args.config_preset - - if is_multimer: + is_custom_template = "use_custom_template" in args and args.use_custom_template + if is_custom_template: + template_featurizer = templates.CustomHitFeaturizer( + mmcif_dir=args.template_mmcif_dir, + max_template_date="9999-12-31", # just dummy, not used + max_hits=-1, # just dummy, not used + kalign_binary_path=args.kalign_binary_path + ) + elif is_multimer: template_featurizer = templates.HmmsearchHitFeaturizer( mmcif_dir=args.template_mmcif_dir, max_template_date=args.max_template_date, @@ -221,11 +223,9 @@ def main(args): release_dates_path=args.release_dates_path, obsolete_pdbs_path=args.obsolete_pdbs_path ) - data_processor = data_pipeline.DataPipeline( template_featurizer=template_featurizer, ) - if is_multimer: data_processor = data_pipeline.DataPipelineMultimer( monomer_data_pipeline=data_processor, @@ -238,7 +238,6 @@ def main(args): np.random.seed(random_seed) torch.manual_seed(random_seed + 1) - feature_processor = feature_pipeline.FeaturePipeline(config.data) if not os.path.exists(output_dir_base): os.makedirs(output_dir_base) @@ -273,6 +272,11 @@ def 
main(args): seq_sort_fn = lambda target: sum([len(s) for s in target[1]]) sorted_targets = sorted(zip(tag_list, seq_list), key=seq_sort_fn) feature_dicts = {} + + if is_multimer and args.openfold_checkpoint_path: + raise ValueError( + '`openfold_checkpoint_path` was specified, but no OpenFold checkpoints are available for multimer mode') + model_generator = load_models_from_command_line( config, args.model_device, @@ -308,7 +312,6 @@ def main(args): ) feature_dicts[tag] = feature_dict - processed_feature_dict = feature_processor.process_features( feature_dict, mode='predict', is_multimer=is_multimer ) @@ -395,6 +398,10 @@ def main(args): help="""Path to alignment directory. If provided, alignment computation is skipped and database path arguments are ignored.""" ) + parser.add_argument( + "--use_custom_template", action="store_true", default=False, + help="""Use mmcif given with "template_mmcif_dir" argument as template input.""" + ) parser.add_argument( "--use_single_seq_mode", action="store_true", default=False, help="""Use single sequence embeddings instead of MSAs.""" @@ -489,5 +496,4 @@ def main(args): """The model is being run on CPU. Consider specifying --model_device for better performance""" ) - main(args) diff --git a/scripts/alignment_data_to_fasta.py b/scripts/alignment_data_to_fasta.py new file mode 100644 index 000000000..1c5383d70 --- /dev/null +++ b/scripts/alignment_data_to_fasta.py @@ -0,0 +1,144 @@ +""" +This script generates a FASTA file for all chains in an alignment directory or +alignment DB. +""" + +import json +from argparse import ArgumentParser +from concurrent.futures import ThreadPoolExecutor, as_completed +from pathlib import Path +from typing import Optional + +from tqdm import tqdm + + +def chain_dir_to_fasta(dir: Path) -> str: + """ + Generates a FASTA string from a chain directory. 
+ """ + # take some alignment file + for alignment_file_type in [ + "mgnify_hits.a3m", + "uniref90_hits.a3m", + "bfd_uniclust_hits.a3m", + ]: + alignment_file = dir / alignment_file_type + if alignment_file.exists(): + break + + with open(alignment_file, "r") as f: + next(f) # skip the first line + seq = next(f).strip() + + try: + next_line = next(f) + except StopIteration: + pass + else: + assert next_line.startswith(">") # ensure that sequence ended + + chain_id = dir.name + + return f">{chain_id}\n{seq}\n" + + +def index_entry_to_fasta(index_entry: dict, db_dir: Path, chain_id: str) -> str: + """ + Generates a FASTA string from an alignment-db index entry. + """ + db_file = db_dir / index_entry["db"] + + # look for an alignment file + for alignment_file_type in [ + "mgnify_hits.a3m", + "uniref90_hits.a3m", + "bfd_uniclust_hits.a3m", + ]: + for file_info in index_entry["files"]: + if file_info[0] == alignment_file_type: + start, size = file_info[1], file_info[2] + break + + with open(db_file, "rb") as f: + f.seek(start) + msa_lines = f.read(size).decode("utf-8").splitlines() + seq = msa_lines[1] + + try: + next_line = msa_lines[2] + except IndexError: + pass + else: + assert next_line.startswith(">") # ensure that sequence ended + + return f">{chain_id}\n{seq}\n" + + +def main( + output_path: Path, alignment_db_index: Optional[Path], alignment_dir: Optional[Path] +) -> None: + """ + Generate a FASTA file from either an alignment-db index or a chain directory using multi-threading. + """ + fasta = [] + + if alignment_dir and alignment_db_index: + raise ValueError( + "Only one of alignment_db_index and alignment_dir can be provided." 
+ ) + + if alignment_dir: + print("Creating FASTA from alignment directory...") + chain_dirs = list(alignment_dir.iterdir()) + + with ThreadPoolExecutor() as executor: + futures = [ + executor.submit(chain_dir_to_fasta, chain_dir) + for chain_dir in chain_dirs + ] + for future in tqdm(as_completed(futures), total=len(chain_dirs)): + fasta.append(future.result()) + + elif alignment_db_index: + print("Creating FASTA from alignment dbs...") + + with open(alignment_db_index, "r") as f: + index = json.load(f) + + db_dir = alignment_db_index.parent + + with ThreadPoolExecutor() as executor: + futures = [ + executor.submit(index_entry_to_fasta, index_entry, db_dir, chain_id) + for chain_id, index_entry in index.items() + ] + for future in tqdm(as_completed(futures), total=len(index)): + fasta.append(future.result()) + else: + raise ValueError("Either alignment_db_index or alignment_dir must be provided.") + + with open(output_path, "w") as f: + f.write("".join(fasta)) + print(f"FASTA file written to {output_path}.") + + +if __name__ == "__main__": + parser = ArgumentParser(description=__doc__) + parser.add_argument( + "output_path", + type=Path, + help="Path to output FASTA file.", + ) + parser.add_argument( + "--alignment_db_index", + type=Path, + help="Path to alignment-db index file.", + ) + parser.add_argument( + "--alignment_dir", + type=Path, + help="Path to alignment directory.", + ) + + args = parser.parse_args() + main(args.output_path, args.alignment_db_index, args.alignment_dir) diff --git a/scripts/alignment_db_scripts/create_alignment_db_sharded.py b/scripts/alignment_db_scripts/create_alignment_db_sharded.py index 49f44824b..3fccd157c 100644 --- a/scripts/alignment_db_scripts/create_alignment_db_sharded.py +++ b/scripts/alignment_db_scripts/create_alignment_db_sharded.py @@ -5,17 +5,19 @@ run on the output index. Additionally this script uses threading and multiprocessing and is much faster than the old version. 
 """
+
 import argparse
+import json
 from collections import defaultdict
 from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed
-import json
+from math import ceil
+from multiprocessing import cpu_count
 from pathlib import Path
-from typing import List
+
 from tqdm import tqdm
-from math import ceil
 
 
-def split_file_list(file_list, n_shards):
+def split_file_list(file_list: list[Path], n_shards: int):
     """
     Split up the total file list into n_shards sublists.
     """
@@ -29,26 +31,25 @@ def split_file_list(file_list, n_shards):
     return split_list
 
 
-def chunked_iterator(lst, chunk_size):
+def chunked_iterator(lst: list, chunk_size: int):
     """Iterate over a list in chunks of size chunk_size."""
     for i in range(0, len(lst), chunk_size):
         yield lst[i : i + chunk_size]
 
 
-def read_chain_dir(chain_dir) -> dict:
+def read_chain_dir(chain_dir: Path) -> dict:
     """
     Read all alignment files in a single chain directory and return a dict
     mapping chain name to file names and bytes.
     """
     if not chain_dir.is_dir():
         raise ValueError(f"chain_dir must be a directory, but is {chain_dir}")
-    
+
     # ensure that PDB IDs are all lowercase
     pdb_id, chain = chain_dir.name.split("_")
     pdb_id = pdb_id.lower()
     chain_name = f"{pdb_id}_{chain}"
-    
-    
+
     file_data = []
 
     for file_path in sorted(chain_dir.iterdir()):
@@ -62,7 +63,7 @@ def read_chain_dir(chain_dir) -> dict:
     return {chain_name: file_data}
 
 
-def process_chunk(chain_files: List[Path]) -> dict:
+def process_chunk(chain_files: list[Path]) -> dict:
     """
     Returns the file names and bytes for all chains in a chunk of files.
     """
@@ -83,7 +84,7 @@ def create_index_default_dict() -> dict:
 
 
 def create_shard(
-    shard_files: List[Path], output_dir: Path, output_name: str, shard_num: int
+    shard_files: list[Path], output_dir: Path, output_name: str, shard_num: int
 ) -> dict:
     """
     Creates a single shard of the alignment database, and returns the
@@ -92,7 +93,7 @@ def create_shard(
     CHUNK_SIZE = 200
     shard_index = defaultdict(
         create_index_default_dict
-    )  # {chain_name: {db: str, files: [(file_name, db_offset, file_length)]}, ...}
+    )  # e.g. {chain_name: {db: str, files: [(file_name, db_offset, file_length)]}, ...}
 
     chunk_iter = chunked_iterator(shard_files, CHUNK_SIZE)
     pbar_desc = f"Shard {shard_num}"
@@ -101,7 +102,11 @@ def create_shard(
     db_offset = 0
     db_file = open(output_path, "wb")
     for files_chunk in tqdm(
-        chunk_iter, total=ceil(len(shard_files) / CHUNK_SIZE), desc=pbar_desc, position=shard_num, leave=False
+        chunk_iter,
+        total=ceil(len(shard_files) / CHUNK_SIZE),
+        desc=pbar_desc,
+        position=shard_num,
+        leave=False,
     ):
         # get processed files for one chunk
         chunk_data = process_chunk(files_chunk)
@@ -125,9 +130,17 @@
 def main(args):
     alignment_dir = args.alignment_dir
     output_dir = args.output_db_path
+    output_dir.mkdir(exist_ok=True, parents=True)
     output_db_name = args.output_db_name
     n_shards = args.n_shards
 
+    n_cpus = cpu_count()
+    if n_shards > n_cpus:
+        print(
+            f"Warning: Your number of shards ({n_shards}) is greater than the number of cores on your machine ({n_cpus}). "
+            "This may result in slower performance. Consider using a smaller number of shards."
+        )
+
     # get all chain dirs in alignment_dir
     print("Getting chain directories...")
     all_chain_dirs = sorted([f for f in tqdm(alignment_dir.iterdir())])
@@ -153,12 +166,36 @@ def main(args):
         super_index.update(shard_index)
     print("\nCreated all shards.")
 
+    if args.duplicate_chains_file:
+        print("Extending super index with duplicate chains...")
+        duplicates_added = 0
+        with open(args.duplicate_chains_file, "r") as fp:
+            duplicate_chains = [line.strip().split() for line in fp]
+
+        for chains in duplicate_chains:
+            # find representative with alignment
+            for chain in chains:
+                if chain in super_index:
+                    representative_chain = chain
+                    break
+            else:
+                print(f"No representative chain found for {chains}, skipping...")
+                continue
+
+            # add duplicates to index
+            for chain in chains:
+                if chain != representative_chain:
+                    super_index[chain] = super_index[representative_chain]
+                    duplicates_added += 1
+
+        print(f"Added {duplicates_added} duplicate chains to index.")
+
     # write super index to file
     print("\nWriting super index...")
     index_path = output_dir / f"{output_db_name}.index"
     with open(index_path, "w") as fp:
         json.dump(super_index, fp, indent=4)
-    
+
     print("Done.")
 
 
@@ -179,13 +216,27 @@ def main(args):
     parser.add_argument(
         "alignment_dir",
         type=Path,
-        help="""Path to precomputed alignment directory, with one subdirectory
-        per chain.""",
+        help="""Path to precomputed flattened alignment directory, with one
+        subdirectory per chain.""",
    )
     parser.add_argument("output_db_path", type=Path)
     parser.add_argument("output_db_name", type=str)
     parser.add_argument(
-        "n_shards", type=int, help="Number of shards to split the database into"
+        "--n_shards",
+        type=int,
+        help="Number of shards to split the database into",
+        default=10,
+    )
+    parser.add_argument(
+        "--duplicate_chains_file",
+        type=Path,
+        help="""
+        Optional path to file containing duplicate chain information, where each
+        line contains chains that are 100%% sequence identical. If provided,
+        duplicate chains will be added to the index and point to the same
+        underlying database entry as their representatives in the alignment dir.
+        """,
+        default=None,
     )
 
     args = parser.parse_args()
diff --git a/scripts/expand_alignment_duplicates.py b/scripts/expand_alignment_duplicates.py
new file mode 100644
index 000000000..d32bf8223
--- /dev/null
+++ b/scripts/expand_alignment_duplicates.py
@@ -0,0 +1,79 @@
+"""
+The OpenProteinSet alignment database is non-redundant, meaning that it only
+stores one explicit representative alignment directory for all PDB chains in a
+100% sequence identity cluster. In order to add explicit alignments for all PDB
+chains, this script will add the missing chain directories and symlink them to
+their representative alignment directories. This is required in order to train
+OpenFold on the full PDB, not just one representative chain per cluster.
+"""
+
+from argparse import ArgumentParser
+from pathlib import Path
+
+from tqdm import tqdm
+
+
+def create_duplicate_dirs(duplicate_chains: list[list[str]], alignment_dir: Path):
+    """
+    Create duplicate directory symlinks for all chains in the given duplicate lists.
+
+    Args:
+        duplicate_chains (list[list[str]]): A list of lists, where each inner
+            list contains chains that are 100% sequence identical.
+        alignment_dir (Path): Path to flattened alignment directory, with one
+            subdirectory per chain.
+    """
+    print("Creating duplicate directory symlinks...")
+    dirs_created = 0
+    for chains in tqdm(duplicate_chains):
+        # find the chain that has an alignment
+        for chain in chains:
+            if (alignment_dir / chain).exists():
+                representative_chain = chain
+                break
+        else:
+            print(f"No representative chain found for {chains}, skipping...")
+            continue
+
+        # create symlinks for all other chains
+        for chain in chains:
+            if chain != representative_chain:
+                target_path = alignment_dir / chain
+                if target_path.exists():
+                    print(f"Chain {chain} already exists, skipping...")
+                else:
+                    target_path.symlink_to(alignment_dir / representative_chain)
+                    dirs_created += 1
+
+    print(f"Created directories for {dirs_created} duplicate chains.")
+
+
+def main(alignment_dir: Path, duplicate_chains_file: Path):
+    # read duplicate chains file
+    with open(duplicate_chains_file, "r") as fp:
+        duplicate_chains = [list(line.strip().split()) for line in fp]
+
+    # convert to absolute path for symlink creation
+    alignment_dir = alignment_dir.resolve()
+
+    create_duplicate_dirs(duplicate_chains, alignment_dir)
+
+
+if __name__ == "__main__":
+    parser = ArgumentParser(description=__doc__)
+    parser.add_argument(
+        "alignment_dir",
+        type=Path,
+        help="""Path to flattened alignment directory, with one subdirectory
+        per chain.""",
+    )
+    parser.add_argument(
+        "duplicate_chains_file",
+        type=Path,
+        help="""Path to file containing duplicate chains, where each line
+        contains a space-separated list of chains that are 100%%
+        sequence identical.
+        """,
+    )
+    args = parser.parse_args()
+    main(args.alignment_dir, args.duplicate_chains_file)
diff --git a/scripts/fasta_to_clusterfile.py b/scripts/fasta_to_clusterfile.py
index 0ec383023..b59a8089b 100644
--- a/scripts/fasta_to_clusterfile.py
+++ b/scripts/fasta_to_clusterfile.py
@@ -85,7 +85,7 @@ def main(args):
 
 if __name__ == "__main__":
     parser = ArgumentParser(
-        description="Creates a sequence cluster file from a .fasta file using mmseqs2 with PDB settings."
+        description=__doc__
     )
     parser.add_argument(
         "input_fasta",
diff --git a/scripts/flatten_roda.sh b/scripts/flatten_roda.sh
index 074736a69..788f9a71c 100755
--- a/scripts/flatten_roda.sh
+++ b/scripts/flatten_roda.sh
@@ -9,8 +9,8 @@
 # output_dir:
 #     The directory in which to construct the reformatted data
 
-if [[ $# != 2 ]]; then
-    echo "usage: ./flatten_roda.sh <roda_dir> <output_dir>"
+if [ "$#" -ne 2 ]; then
+    echo "Usage: ./flatten_roda.sh <roda_dir> <output_dir>"
     exit 1
 fi
@@ -23,25 +23,36 @@
 ALIGNMENT_DIR="${OUTPUT_DIR}/alignments"
 
 mkdir -p "${DATA_DIR}"
 mkdir -p "${ALIGNMENT_DIR}"
 
-for chain_dir in $(ls "${RODA_DIR}"); do
-    CHAIN_DIR_PATH="${RODA_DIR}/${chain_dir}"
-    for subdir in $(ls "${CHAIN_DIR_PATH}"); do
-        if [[ ! -d "$subdir" ]]; then
-            echo "$subdir is not directory"
+for chain_dir in "${RODA_DIR}"/*; do
+    if [ ! -d "$chain_dir" ]; then
+        continue
+    fi
+
+    chain_name=$(basename "$chain_dir")
+
+    for subdir in "$chain_dir"/*; do
+        if [ ! -d "$subdir" ]; then
+            echo "$subdir is not a directory"
             continue
-        elif [[ -z $(ls "${subdir}")]]; then
+        fi
+
+        if [ -z "$(ls -A "$subdir")" ]; then
             continue
-        elif [[ $subdir = "pdb" ]] || [[ $subdir = "cif" ]]; then
-            mv "${CHAIN_DIR_PATH}/${subdir}"/* "${DATA_DIR}"
+        fi
+
+        subdir_name=$(basename "$subdir")
+
+        if [ "$subdir_name" = "pdb" ] || [ "$subdir_name" = "cif" ]; then
+            mv "$subdir"/* "${DATA_DIR}/"
         else
-            CHAIN_ALIGNMENT_DIR="${ALIGNMENT_DIR}/${chain_dir}"
+            CHAIN_ALIGNMENT_DIR="${ALIGNMENT_DIR}/${chain_name}"
             mkdir -p "${CHAIN_ALIGNMENT_DIR}"
-            mv "${CHAIN_DIR_PATH}/${subdir}"/* "${CHAIN_ALIGNMENT_DIR}"
+            mv "$subdir"/* "${CHAIN_ALIGNMENT_DIR}/"
         fi
     done
 done
 
 NO_DATA_FILES=$(find "${DATA_DIR}" -type f | wc -l)
-if [[ $NO_DATA_FILES = 0 ]]; then
-    rm -rf ${DATA_DIR}
-fi
+if [ "$NO_DATA_FILES" -eq 0 ]; then
+    rm -rf "${DATA_DIR}"
+fi
\ No newline at end of file
diff --git a/scripts/install_third_party_dependencies.sh b/scripts/install_third_party_dependencies.sh
index fe2a6a0ba..e9d91002a 100755
--- a/scripts/install_third_party_dependencies.sh
+++ b/scripts/install_third_party_dependencies.sh
@@ -14,7 +14,7 @@ gunzip -c tests/test_data/sample_feats.pickle.gz > tests/test_data/sample_feats.
 python setup.py install
 
 echo "Download CUTLASS, required for Deepspeed Evoformer attention kernel"
-git clone https://github.com/NVIDIA/cutlass --depth 1
+git clone https://github.com/NVIDIA/cutlass --branch v3.6.0 --depth 1
 conda env config vars set CUTLASS_PATH=$PWD/cutlass
 
 # This setting is used to fix a worker assignment issue during data loading
diff --git a/setup.py b/setup.py
index 487357067..3750d9fe9 100644
--- a/setup.py
+++ b/setup.py
@@ -54,6 +54,7 @@ def get_cuda_bare_metal_version(cuda_dir):
 compute_capabilities = set([
     (5, 2),  # Titan X
     (6, 1),  # GeForce 1000-series
+    (9, 0),  # Hopper
 ])
 
 compute_capabilities.add((7, 0))
@@ -112,7 +113,7 @@ def get_cuda_bare_metal_version(cuda_dir):
 setup(
     name='openfold',
-    version='2.0.0',
+    version='2.2.0',
     description='A PyTorch reimplementation of DeepMind\'s AlphaFold 2',
     author='OpenFold Team',
     author_email='jennifer.wei@omsf.io',
diff --git a/tests/test_deepspeed_evo_attention.py b/tests/test_deepspeed_evo_attention.py
index dd203a05c..a65a76317 100644
--- a/tests/test_deepspeed_evo_attention.py
+++ b/tests/test_deepspeed_evo_attention.py
@@ -315,8 +315,9 @@ def test_compare_model(self):
         # Move the recycling dimension to the end
         move_dim = lambda t: t.permute(*range(len(t.shape))[1:], 0)
         batch = tensor_tree_map(move_dim, batch)
-        with torch.no_grad():
-            with torch.cuda.amp.autocast(dtype=torch.bfloat16):
+        # Restrict this test to use only torch.float32 precision due to instability with torch.bfloat16
+        # https://github.com/aqlaboratory/openfold/issues/532
+        with torch.no_grad(), torch.cuda.amp.autocast(dtype=torch.float32):
             model = compare_utils.get_global_pretrained_openfold()
             model.globals.use_deepspeed_evo_attention = False
             out_repro = model(batch)
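
For reviewers unfamiliar with the alignment-db layout these scripts read and write, here is a minimal standalone sketch (not part of the patch; file names and the toy alignment are hypothetical) of the format: raw alignment-file bytes concatenated into shard files, plus a JSON index mapping each chain to `{"db": shard_name, "files": [[file_name, offset, size], ...]}`. It mirrors the seek-and-read logic used by `index_entry_to_fasta`:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    db_dir = Path(tmp)

    # write one tiny shard containing a single hypothetical a3m alignment
    a3m_bytes = b">1abc_A\nMKV\n>hit_1\nMRV\n"
    (db_dir / "alignment_db_0.db").write_bytes(a3m_bytes)

    # the index records where each chain's files live inside the shard
    index = {
        "1abc_A": {
            "db": "alignment_db_0.db",
            "files": [["uniref90_hits.a3m", 0, len(a3m_bytes)]],
        }
    }
    (db_dir / "alignment_db.index").write_text(json.dumps(index))

    # read the entry back the same way index_entry_to_fasta does: seek + sized read
    entry = json.loads((db_dir / "alignment_db.index").read_text())["1abc_A"]
    file_name, offset, size = entry["files"][0]
    with open(db_dir / entry["db"], "rb") as f:
        f.seek(offset)
        msa_lines = f.read(size).decode("utf-8").splitlines()
    seq = msa_lines[1]  # query sequence is the second MSA line

print(seq)  # MKV
```

Because entries are addressed purely by `(shard, offset, size)`, duplicate chains can share one underlying blob, which is exactly what the `--duplicate_chains_file` option exploits when it points multiple index keys at the same entry.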