Commit bebc144 ("Docs")
Parent commit: 72e05e4

3 files changed: +38, -48 lines

doc/quickstart/matching/constrained.rst (6 additions, 6 deletions)

@@ -115,11 +115,11 @@ To integrate orientational constraints, we need to ensure the template used for
     --sampling-rate 6.8 \
     --lowpass 15 \
     --box-size 60 \
-    --align-axis 2 \
-    --invert-contrast \
-    --flip-axis
+    --align-axis 2
+
+.. note::

-For NA we need to provide the ``--flip-axis`` flag due to the handedness of the alignment problem. When aligning a protein structure to a principal axis, the algorithm determines the orientation based on the distribution of mass around the center. However, this can result in two possible orientations that are 180° apart - the protein could point "up" or "down" along the chosen axis.
+    In some cases we need to provide the ``--flip-axis`` flag due to the handedness of the alignment problem. When aligning a protein structure to a principal axis, the algorithm determines the orientation based on the distribution of mass around the center. However, this can result in two possible orientations that are 180° apart - the protein could point "up" or "down" along the chosen axis.

 After alignment, your templates should look similar to what is shown here, with the transmembrane region pointing in the direction of negative z and the extracellular domain pointing in direction of z
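Editorial note on the ``--flip-axis`` change above: the up/down ambiguity described in the note can be sanity-checked on an aligned template map. The following is a minimal NumPy sketch under the assumption that comparing the mass on either side of the box center along the alignment axis is a reasonable proxy; it is not the tool's actual decision logic, and the helper name is hypothetical:

import numpy as np

def needs_flip(density: np.ndarray, axis: int = 2) -> bool:
    """Heuristic check of the up/down ambiguity after principal-axis alignment.

    Returns True if most of the template mass lies on the negative side of the
    box center along `axis`, i.e. the template likely points "down" and a flip
    may be needed. Illustrative only; threshold and sign convention are assumptions.
    """
    coords = np.arange(density.shape[axis]) - (density.shape[axis] - 1) / 2
    # Project the (non-negative) density onto the chosen axis and locate its center of mass.
    other_axes = tuple(i for i in range(density.ndim) if i != axis)
    profile = density.clip(min=0).sum(axis=other_axes)
    center_of_mass = (profile * coords).sum() / profile.sum()
    return center_of_mass < 0

# Example: a template whose mass sits in the lower half along z gets flagged.
template = np.zeros((60, 60, 60), dtype=np.float32)
template[:, :, 10:25] = 1.0
print(needs_flip(template, axis=2))  # True

If the center of mass sits on the negative side of the axis, re-running the alignment with ``--flip-axis`` (or flipping the map yourself, e.g. with ``np.flip``) resolves the ambiguity.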

@@ -159,7 +159,7 @@ Alternatively, you can do this using Python
     mask_type="tube",
     shape=(60,60,60),
     symmetry_axis=2,
-    base_center=(29,29,23.5),
+    center=(29,29,23.5),
     inner_radius=0,
     outer_radius=10,
     height=37
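For readers skimming this hunk, the renamed ``center`` argument and its companions fully determine the tube geometry. Below is a minimal NumPy sketch of that geometry; it treats ``center`` as the tube midpoint (an assumption) and re-implements the mask independently of the library's own helper, so ``tube_mask`` here is purely illustrative:

import numpy as np

def tube_mask(shape, center, symmetry_axis, inner_radius, outer_radius, height):
    """Binary tube mask: an annulus between the two radii around `symmetry_axis`,
    limited to `height` voxels centered on `center` along that axis. Hypothetical
    re-implementation of the parameters shown above, not pyTME's own routine."""
    grid = np.indices(shape).astype(np.float32)
    grid -= np.asarray(center, dtype=np.float32).reshape(-1, 1, 1, 1)
    axial = grid[symmetry_axis]
    radial_axes = [i for i in range(len(shape)) if i != symmetry_axis]
    radius = np.sqrt(sum(grid[i] ** 2 for i in radial_axes))
    in_annulus = (radius >= inner_radius) & (radius <= outer_radius)
    in_height = np.abs(axial) <= height / 2
    return (in_annulus & in_height).astype(np.float32)

mask = tube_mask(
    shape=(60, 60, 60),
    center=(29, 29, 23.5),
    symmetry_axis=2,
    inner_radius=0,
    outer_radius=10,
    height=37,
)
print(mask.shape, mask.sum())

With ``symmetry_axis=2`` the tube runs along z, matching the templates aligned to axis 2 in the previous step.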
@@ -200,7 +200,7 @@ For NA, simply use ``-i templates/na_6.8_aligned.mrc`` and ``-o results/na_match

 You can also constrain the rotational search to account for properties like template symmetry. For instance for the C3 symmetric HA, try replacing ``--angular-sampling 10`` with ``--cone-angle 180 --cone-sampling 10 --axis-symmetry 3``.

-The output of constrained template matching is a pickle file containing the score space and identified orientations. We can explore the score space in the ``preprocessor_gui.py`` using the **Import Pickle** button. Shown below is a comparison of HA and NA matching using constrained and unconstrained matching, respectively. Note the increase in peak sharpness and decreased contribution of the membrane density in constrained matching. Achieving more uniform matching scores for HA would require a more stringently created mask. In essence, HAs orthogonal to the missing wedge score lower, because applying a wedge mask to the template density stretches the template, and pushes a considerable amount outside the mask. Alternatively, background correction could be performed, for instance using ``--background-correction phase-scrambling``.
+The output of constrained template matching is a pickle file containing the score space and identified orientations. We can explore the score space in the ``preprocessor_gui.py`` using the **Import Pickle** button. Shown below is a comparison of HA and NA matching using constrained and unconstrained matching, respectively. Note the increase in peak sharpness and decreased contribution of the membrane density in constrained matching.

 .. figure:: ../../_static/examples/constrained/scores.png

doc/quickstart/postprocessing/motivation.rst (11 additions, 11 deletions)

@@ -108,7 +108,7 @@ In all cases, the tool will report statistics for foreground, background, and no
 > Background mean 0.089, std 0.023, max 0.234
 > Normalized mean 0.067, std 0.078, max 0.298

-Since the background of the individual entities may differ, we can also compare SNR-like cross-correlations instead, using ``--snr``. This is also useful when comparing the scores across an entire dataset.
+Since the background of the individual entities may differ, we can also compare SNR-like cross-correlations instead, using ``--snr``. This is also useful when comparing scores across an entire dataset.


 Local Optimization and Refinement
@@ -140,25 +140,25 @@ Our convention follows the schematics outlined in [1]_. We use a right-handed co
 Details for Developers
 ----------------------

-The output of ``match_template.py`` is a `pickle <https://docs.python.org/3/library/pickle.html>`_ file. All but the last element will correspond to the return value of a given :doc:`analyzer </reference/analyzer/base>`'s merge method. The file can be read using :py:meth:`load_pickle <tme.matching_utils.load_pickle>`. For the default analyzer :py:class:`MaxScoreOverRotations <tme.analyzer.MaxScoreOverRotations>` the pickle file contains
+The output of ``match_template.py`` is a `pickle <https://docs.python.org/3/library/pickle.html>`_ file containing a tuple. All but the last element will correspond to the return value of a given :doc:`analyzer </reference/analyzer/base>`'s merge method. The file can be read using :py:meth:`load_pickle <tme.matching_utils.load_pickle>`. For the default analyzer :py:class:`MaxScoreOverRotations <tme.analyzer.MaxScoreOverRotations>` the pickle file contains

-- **Scores**: An array with scores mapped to translations.
-- **Offset**: Offset informing about shifts in coordinate sytems.
-- **Rotations**: An array of optimal rotation indices for each translation.
-- **Rotation Dictionary**: Mapping of rotation indices to rotation matrices.
+- **Scores**: Score for each position in the target.
+- **Offset**: Coordinate system shift.
+- **Rotations**: Optimal rotation index for each translation.
+- **Rotation Dictionary**: Dictionary mapping rotation indices to rotation matrices.
 - **Sum of Squares**: Sum of squares of scores for statistics.
 - **Metadata**: Coordinate system information and parameters for reproducibility.

 However, when you use the `-p` flag the output structure differs

-- **Translations**: A numpy array containing translations of peaks.
-- **Rotations**: A numpy array containing rotations of peaks.
-- **Scores**: Score of each peak.
-- **Details**: Additional information regarding each peak.
+- **Translations**: Peak position.
+- **Rotations**: Rotation matrix describing template orientation at peak.
+- **Scores**: Score at peak.
+- **Details**: Additional properties of peak.
 - **Metadata**: Coordinate system information and parameters for reproducibility.


 References
 ----------

-.. [1] Heymann, J.B.; Chagoyen, M.; Belnap, D.M. Common conventions for interchange and archiving of three-dimensional electron microscopy information in structural biology. J Struct Biol 2005, 151, 196-207.
+.. [1] Heymann, J.B.; Chagoyen, M.; Belnap, D.M. Common conventions for interchange and archiving of three-dimensional electron microscopy information in structural biology. J Struct Biol 2005, 151, 196-207.
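To make the documented tuple layout concrete, here is a short sketch of consuming such a result file. Only ``load_pickle`` and the element order listed above are taken from the documentation; the file path and variable names are illustrative assumptions:

from tme.matching_utils import load_pickle

# Result of match_template.py with the default MaxScoreOverRotations analyzer.
# The path is a placeholder for whatever pickle file your run produced.
data = load_pickle("results/matching.pickle")

# The last element is the metadata; everything before it follows the analyzer's
# merge output in the order documented above.
*analyzer_output, metadata = data
scores, offset, rotations, rotation_mapping, sum_of_squares = analyzer_output

print(scores.shape)           # one score per target position
print(scores.max())           # best score over all translations and rotations
print(len(rotation_mapping))  # number of distinct rotations encountered

With ``-p`` the same unpacking applies to the peak-oriented layout instead: translations, rotations, scores, details, followed by the metadata.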

tme/memory.py (21 additions, 31 deletions)

@@ -176,7 +176,7 @@ def estimate_memory_usage(
     integer_nbytes: int = 4,
 ) -> int:
     """
-    Estimate the memory usage for a given template matching run.
+    Estimate the memory usage of a given template matching run.

     Parameters
     ----------
@@ -185,19 +185,19 @@ def estimate_memory_usage(
     shape2 : tuple
         Shape of the template array.
     matching_method : str
-        Matching method to estimate memory usage for.
+        Matching method used to compute scores.
     analyzer_method : str, optional
-        The method used for score analysis.
+        Analyzer used for score analysis.
     backend : str, optional
         Backend used for computation.
     ncores : int
-        The number of CPU cores used for the operation.
+        The number of operations running in parallel.
     float_nbytes : int
-        Number of bytes of the used float, defaults to 4 (float32).
+        Byte size of used float, defaults to 4 (float32).
     complex_nbytes : int
-        Number of bytes of the used complex, defaults to 8 (complex64).
+        Byte size of used complex, defaults to 8 (complex64).
     integer_nbytes : int
-        Number of bytes of the used integer, defaults to 4 (int32).
+        Byte size of used integer, defaults to 4 (int32).

     Returns
     -------
@@ -215,34 +215,24 @@ def estimate_memory_usage(
     )

     _, fast_shape, ft_shape = be.compute_convolution_shapes(shape1, shape2)
-    memory_instance = MATCHING_MEMORY_REGISTRY[matching_method](
-        fast_shape=fast_shape,
-        ft_shape=ft_shape,
-        float_nbytes=float_nbytes,
-        complex_nbytes=complex_nbytes,
-        integer_nbytes=integer_nbytes,
-    )

-    nbytes = memory_instance.base_usage() + memory_instance.per_fork() * ncores
+    kwargs = {
+        "fast_shape": fast_shape,
+        "ft_shape": ft_shape,
+        "float_nbytes": float_nbytes,
+        "complex_nbytes": complex_nbytes,
+        "integer_nbytes": integer_nbytes,
+    }
+
+    instance = MATCHING_MEMORY_REGISTRY[matching_method](**kwargs)
+    nbytes = instance.base_usage() + instance.per_fork() * ncores

     if analyzer_method in MATCHING_MEMORY_REGISTRY:
-        analyzer_instance = MATCHING_MEMORY_REGISTRY[analyzer_method](
-            fast_shape=fast_shape,
-            ft_shape=ft_shape,
-            float_nbytes=float_nbytes,
-            complex_nbytes=complex_nbytes,
-            integer_nbytes=integer_nbytes,
-        )
-        nbytes += analyzer_instance.base_usage() + analyzer_instance.per_fork() * ncores
+        instance = MATCHING_MEMORY_REGISTRY[analyzer_method](**kwargs)
+        nbytes += instance.base_usage() + instance.per_fork() * ncores

     if backend in MATCHING_MEMORY_REGISTRY:
-        backend_instance = MATCHING_MEMORY_REGISTRY[backend](
-            fast_shape=fast_shape,
-            ft_shape=ft_shape,
-            float_nbytes=float_nbytes,
-            complex_nbytes=complex_nbytes,
-            integer_nbytes=integer_nbytes,
-        )
-        nbytes += backend_instance.base_usage() + backend_instance.per_fork() * ncores
+        instance = MATCHING_MEMORY_REGISTRY[backend](**kwargs)
+        nbytes += instance.base_usage() + instance.per_fork() * ncores

     return nbytes
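For context, a minimal usage sketch of the refactored function follows. The shapes, the core count, and the "CC" registry key are illustrative assumptions rather than documented values; inspect ``MATCHING_MEMORY_REGISTRY`` for the keys that actually exist in your installation:

from tme.memory import MATCHING_MEMORY_REGISTRY, estimate_memory_usage

# List which matching methods, analyzers, and backends have memory models registered.
print(sorted(MATCHING_MEMORY_REGISTRY))

# Rough memory estimate for matching a 60^3 template against a 512x512x256 target
# on four parallel workers. "CC" is a placeholder key, not a guaranteed entry.
nbytes = estimate_memory_usage(
    shape1=(512, 512, 256),
    shape2=(60, 60, 60),
    matching_method="CC",
    ncores=4,
)
print(f"Estimated memory usage: {nbytes / 1e9:.2f} GB")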
