
A Probabilistic Programming Approach to Protein Structure Superposition

Lys Sanz Moreta1*, Ahmad Salim Al-Sibahi1,2*, Douglas Theobald3, William Bullock4, Basile Nicolas Rommes4, Andreas Manoukian4, and Thomas Hamelryck1,4*

1 Department of Computer Science, University of Copenhagen, Denmark
2 Skanned.com, Denmark
3 Department of Biochemistry, Brandeis University, Waltham, MA 02452, USA
4 The Bioinformatics Centre, Section for Computational and RNA Biology, University of Copenhagen, Denmark

*Corresponding authors.
[email protected] (Lys Sanz Moreta),
[email protected] (Ahmad Salim Al-Sibahi),
[email protected] (Thomas Hamelryck)

Abstract—Optimal superposition of protein structures is crucial for understanding their structure, function, dynamics and evolution. We investigate the use of probabilistic programming to superimpose protein structures guided by a Bayesian model. Our model THESEUS-PP is based on the THESEUS model, a probabilistic model of protein superposition based on rotation, translation and perturbation of an underlying, latent mean structure. The model was implemented in the deep probabilistic programming language Pyro. Unlike conventional methods that minimize the sum of the squared distances, THESEUS takes into account correlated atom positions and heteroscedasticity (i.e., atom positions can feature different variances). THESEUS performs maximum likelihood estimation using iterative expectation-maximization. In contrast, THESEUS-PP allows automated maximum a-posteriori (MAP) estimation using suitable priors over rotation, translation, variances and latent mean structure. The results indicate that probabilistic programming is a powerful new paradigm for the formulation of Bayesian probabilistic models concerning biomolecular structure. Specifically, we envision the use of the THESEUS-PP model as a suitable error model or likelihood in Bayesian protein structure prediction using deep probabilistic programming.

Index Terms—protein superposition, Bayesian modelling, deep probabilistic programming, protein structure prediction

I. INTRODUCTION

In order to compare biomolecular structures, it is necessary to superimpose them onto each other in an optimal way. The standard method minimizes the sum of the squared distances (root mean square deviation, RMSD) between the matching atom pairs. This can be easily accomplished by shifting the centre of mass of the two proteins to the origin and obtaining the optimal rotation using singular value decomposition [1] or quaternion algebra [2], [3]. These methods, however, typically assume that all atoms have equal variance (homoscedasticity) and are uncorrelated. This is problematic in the case of proteins with flexible loops or flexible terminal regions, where the atoms can exhibit high variance. Here we present a Bayesian model that is based on the previously reported THESEUS model [4]–[6]. THESEUS is a probabilistic model of protein superposition that allows for regions with low and high variance (heteroscedasticity), corresponding respectively to conserved and variable regions [4], [5]. THESEUS assumes that the structures which are to be superimposed are translated, rotated and perturbed observations of an underlying latent, mean structure M.

In contrast to the THESEUS model, which features maximum likelihood parameter estimation using iterative expectation-maximization, we formulate a Bayesian model (THESEUS-PP) and perform maximum a-posteriori (MAP) parameter estimation. We provide suitable prior distributions over the rotation, the translations, the variances and the latent, mean model. We implemented the entire model in the deep probabilistic programming language Pyro [7], using its automatic inference features. The results indicate that deep probabilistic programming readily allows the implementation, estimation and deployment of advanced non-Euclidean models relevant to structural bioinformatics. Specifically, we envision that THESEUS-PP can be used as a likelihood function in Bayesian protein structure prediction using deep probabilistic programming.

II. METHODS

A. Overall model

According to the THESEUS model [4], each observed protein structure Xn is a noisy observation of a rotated and translated latent, mean structure M with noise En,

    Xn = (M + En)Rn − 1N Tn,  (1)

where n is an index that identifies the protein, R is a rotation matrix, T is a three-dimensional translation, E is the error and M and X are matrices with the atomic coordinate vectors along the rows,

    M = [m0; …; mN−1],  Xn = [xn,0; …; xn,N−1].  (2)

Another way of representing the model is seeing Xn as distributed according to a matrix-normal distribution with mean M and covariance matrices U and V, one concerning the rows and the other the columns.

The matrix-normal distribution can be considered as an extension of the standard multivariate normal distribution from vector-valued to matrix-valued random variables. Consider a random variable X distributed according to a matrix-normal distribution with mean M, which in our case is an N × 3 matrix where N is the number of atoms. In this case, the matrix-normal distribution is further characterized by an N × N row covariance matrix U and a 3 × 3 column covariance V. Then, X ∼ MN(M, U, V) will be equal to

    X = M + √U Q √V,  (3)

where Q is an N × 3 matrix with elements distributed according to the standard normal distribution.

To ensure identifiability, one (arbitrary) protein X1 is assumed to be a fixed noisy observation of the structure M:

    X1 ∼ MN(M, U, V).  (4)

The other protein X2 is assumed to be a noisy observation of the rotated as well as translated mean structure M:

    X2 ∼ MN(MR − 1N T, U, V).  (5)

Thus, the model uses the same covariance matrices U and V for the matrix-normal distributions of both X1 and X2.
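
Eq. (3) also gives a direct recipe for simulating such matrix-normal observations. The following is a minimal PyTorch sketch (not taken from our implementation), using Cholesky factors as the square roots of U and V; any square root yields the same distribution:

import torch

def sample_matrix_normal(M, U, V):
    # Draw X ~ MN(M, U, V) via Eq. (3): X = M + sqrt(U) Q sqrt(V),
    # where Q has i.i.d. standard-normal entries.
    A = torch.linalg.cholesky(U)   # N x N factor, A @ A.T == U
    B = torch.linalg.cholesky(V)   # 3 x 3 factor, B @ B.T == V
    Q = torch.randn(M.shape)       # N x 3 standard-normal noise
    return M + A @ Q @ B.T

# Example: N = 4 atoms with independent rows and columns.
M = torch.zeros(4, 3)
X = sample_matrix_normal(M, torch.eye(4), torch.eye(3))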
Fig. 1: The THESEUS-PP model as a Bayesian graphical model. M is the latent, mean structure, which is an N-by-3 coordinate matrix, where N is the number of atoms. T is the translation. q is a unit quaternion calculated from three random variables u sampled from the unit interval. R is the corresponding rotation matrix. U is the among-row variance matrix of a matrix-normal distribution. X1 and X2 are N-by-3 coordinate matrices representing the proteins to be superimposed. Circles denote random variables. A lozenge denotes a deterministic transformation of a random variable. Shaded circles denote observed variables. Bold capital and bold small letters represent matrices and vectors, respectively.

B. Bayesian posterior

The graphical model of THESEUS-PP is shown in Figure 1. The corresponding Bayesian posterior distribution is

    p(R, T, M, U | X1, X2) ∝ p(X1, X2 | M, R, T, U) p(M) p(T) p(R) p(U)
                           = p(X1 | M, U) p(X2 | MR − 1N T, U) p(M) p(T) p(R) p(U).  (6)

Below, we specify how each of the priors and the likelihood function is formulated and implemented.

C. Prior for the mean structure

Recall that according to the THESEUS-PP model, the atoms of the structures to be superimposed are noisy observations of a mean structure M. Typically, only Cα atoms are considered and in that case, N corresponds to the number of amino acids. Hence, we need to formulate a prior distribution over the latent, mean structure M.

We use an uninformative prior for M. Each element of M is sampled from a Student's t-distribution with degrees of freedom ν = 1, mean µ = 0 and a uniform diagonal variance σ² = 3. The Student's t-distribution is chosen over the normal distribution for reasons of numerical stability: the fatter tails of the Student's t-distribution avoid numerical problems associated with highly variable regions.

D. Prior over the rotation

In the general case, we have no a priori information on the optimal rotation. Hence, we use a uniform prior over the space of rotations. There are several ways to construct such a uniform prior. We have chosen a method that makes use of quaternions [8]. Quaternions are the 4-dimensional extensions of the better known 2-dimensional complex numbers. Unit quaternions form a convenient way to represent rotation matrices. For our goal, the overall idea is to sample uniformly from the space of unit quaternions. Subsequently, the sampled unit quaternions are transformed into the corresponding rotation matrices, which establishes a uniform prior over rotations.

A unit quaternion q = (w, x, y, z) is sampled in the following way. First, three independent random variables are sampled from the unit interval,

    u0, u1, u2 ∼ U(0, 1).  (7)
Then, four auxiliary deterministic variables (θ1, θ2, r1, r2) are calculated from u0, u1, u2:

    θ1 = 2πu1,        (8a)
    θ2 = 2πu2,        (8b)
    r1 = √(1 − u0),   (8c)
    r2 = √u0.         (8d)

The unit quaternion q is then obtained in the following way:

    q = (w, x, y, z) = (r2 cos θ2, r1 sin θ1, r1 cos θ1, r2 sin θ2).  (9)

Finally, the unit quaternion q is transformed into its corresponding rotation matrix R as follows:

        ⎡ w² + x² − y² − z²    2(xy − wz)           2(xz + wy)         ⎤
    R = ⎢ 2(xy + wz)           w² − x² + y² − z²    2(yz − wx)         ⎥   (10)
        ⎣ 2(xz − wy)           2(yz + wx)           w² − x² − y² + z²  ⎦
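
As an illustration, Eqs. (7)–(10) translate into a few lines of PyTorch. This is a sketch under our own naming (the function rotation_from_u is illustrative, not part of the paper's code); sampling u uniformly and mapping it through this function draws a rotation uniformly from SO(3):

import torch

def rotation_from_u(u):
    # Map three U(0,1) samples (Eq. 7) to a uniformly distributed unit
    # quaternion (Eqs. 8-9), then to its rotation matrix (Eq. 10).
    u0, u1, u2 = u
    theta1, theta2 = 2 * torch.pi * u1, 2 * torch.pi * u2
    r1, r2 = torch.sqrt(1 - u0), torch.sqrt(u0)
    w = r2 * torch.cos(theta2)
    x = r1 * torch.sin(theta1)
    y = r1 * torch.cos(theta1)
    z = r2 * torch.sin(theta2)
    return torch.stack([
        torch.stack([w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)]),
        torch.stack([2*(x*y + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)]),
        torch.stack([2*(x*z - w*y),         2*(y*z + w*x),         w*w - x*x - y*y + z*z]),
    ])

R = rotation_from_u(torch.rand(3))  # a rotation drawn uniformly from SO(3)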
ables and by transforming the two structures X1 and X2 in
E. Prior over the translation a suitable way. First, we pre-superimpose the two structures
For the translation, we use a standard trivariate normal using conventional least-squares superposition. Therefore, the
distribution, starting rotation can be initialized close to the identity matrix
(ie., no rotation). This is done by setting the vector u to
T ∼ N (0, I3 ) (11) (0.9, 0.1, 0.9).
We further improve performance by initializing the mean
where I3 is the three-dimensional identity matrix.
structure M to the average of the two pre-superimposed
structures X1 and X2 .

F. Prior over U

The Student's t-distribution variance over the rows is sampled from the half-normal distribution with standard deviation set to 1,

    σi ∼ N+(1).  (12)

G. Likelihood

In our case, the matrix-normal likelihood of THESEUS reduces to a product of univariate Student's t-distributions. Again, we use the Student's t-distribution rather than the normal distribution (as in THESEUS) for reasons of numerical stability. Below, we have used trivariate Student's t-distributions with diagonal covariance matrices for ease of notation. The likelihood can thus be written as

    p(X1, X2 | M, T, R, U) = p(X1 | M, U) p(X2 | M, T, R, U)
                           = ∏_{i=1…N} t1(x1,i | mi, σi I3) · t1(x2,i | [MR − 1N T]i, σi I3),  (13)

where the product runs over the matrix rows that contain the x, y, z coordinates of X1, X2 and the rotated and translated latent, mean structure M.

H. Algorithm

Algorithm 1 The THESEUS-PP model.
    ▷ Prior over the elements of M
    mi ∼ t1(0, σM I3), where i indicates the atom position
    ▷ Prior over the translation
    T ∼ N(0, I3)
    ▷ Prior over the rotation
    uj ∼ U[0, 1], for j from 0 to 2
    q ← Quaternion(u)
    R ← RotationMatrix(q)
    ▷ Prior over the diagonal covariance matrix U
    σi ∼ N+(1)
    ▷ Likelihood over the N atom coordinates
    x1,i ∼ t1(mi, σi I3)
    x2,i ∼ t1([MR − 1N T]i, σi I3)
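
To make Algorithm 1 concrete, the following is a condensed sketch of how it might be expressed as a Pyro model; the actual fragment from our implementation is shown in Figure 3 in the Appendix, and the names and tensor shapes here are illustrative:

import torch
import pyro
import pyro.distributions as dist

def theseus_pp_model(X1, X2):
    N = X1.shape[0]
    # Prior over the elements of M: heavy-tailed Student's t with nu = 1
    # and scale sqrt(3), i.e. sigma^2 = 3
    M = pyro.sample("M", dist.StudentT(1.0, torch.zeros(N, 3),
                                       3.0 ** 0.5 * torch.ones(N, 3)).to_event(2))
    # Prior over the translation
    T = pyro.sample("T", dist.Normal(torch.zeros(3), torch.ones(3)).to_event(1))
    # Prior over the rotation: three uniforms -> unit quaternion -> R
    u = pyro.sample("u", dist.Uniform(torch.zeros(3), torch.ones(3)).to_event(1))
    R = rotation_from_u(u)  # the deterministic map sketched above
    # Prior over the diagonal of U: one half-normal scale per atom
    sigma = pyro.sample("sigma", dist.HalfNormal(torch.ones(N)).to_event(1))
    # Likelihood: X1 observes M directly, X2 the rotated and translated M
    pyro.sample("X1", dist.StudentT(1.0, M, sigma[:, None]).to_event(2), obs=X1)
    pyro.sample("X2", dist.StudentT(1.0, M @ R - T, sigma[:, None]).to_event(2), obs=X2)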

I. Initialization

Convergence of the MAP estimation can be greatly improved by selecting suitable starting values for certain variables and by transforming the two structures X1 and X2 in a suitable way. First, we pre-superimpose the two structures using conventional least-squares superposition. Therefore, the starting rotation can be initialized close to the identity matrix (i.e., no rotation). This is done by setting the vector u to (0.9, 0.1, 0.9).

We further improve performance by initializing the mean structure M to the average of the two pre-superimposed structures X1 and X2.
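
The least-squares pre-superposition can be done with the classic SVD-based construction [1]. A minimal sketch, again under illustrative names rather than our actual code:

import torch

def presuperpose(X1, X2):
    # Conventional least-squares (Kabsch) pre-superposition of X2 onto X1.
    A = X1 - X1.mean(0)                   # center both structures
    B = X2 - X2.mean(0)
    U, S, Vt = torch.linalg.svd(B.T @ A)  # SVD of the 3 x 3 covariance matrix
    d = torch.det(U @ Vt).sign().item()   # guard against reflections
    R = U @ torch.diag(torch.tensor([1.0, 1.0, d])) @ Vt
    return A, B @ R                       # centered X1; rotated, centered X2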

J. Maximum a-posteriori optimization

We performed MAP estimation using Pyro's AutoDelta guide. For optimization, we used AdagradRMSProp [9], [10] with the default parameters for the learning rate (1.0), momentum (0.1) and step size modulator (1.0 × 10⁻¹⁶). A fragment of the model implementation in Pyro can be seen in Figure 3 in the Appendix.

Convergence was detected using EarlyStopping from PyTorch's Ignite library (version 0.2.0) [11]. This method evaluates the stabilization of the error loss and stops the optimization according to the value of the patience parameter. The patience value was set to 25.
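
Putting the pieces together, a MAP fit along these lines might look as follows. This is a sketch: theseus_pp_model and the tensors X1, X2 are the illustrative objects introduced above, and a plain patience counter stands in for Ignite's EarlyStopping handler:

import pyro
from pyro.infer import SVI, Trace_ELBO
from pyro.infer.autoguide import AutoDelta
from pyro.optim import AdagradRMSProp

guide = AutoDelta(theseus_pp_model)       # delta guide => MAP point estimates
svi = SVI(theseus_pp_model, guide, AdagradRMSProp({}), loss=Trace_ELBO())

best_loss, patience, waited = float("inf"), 25, 0
for epoch in range(5000):
    loss = svi.step(X1, X2)
    if loss < best_loss:
        best_loss, waited = loss, 0
    else:
        waited += 1
    if waited >= patience:                # loss has stabilized: stop
        break

map_estimate = guide.median()             # point estimates of M, T, u, sigma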

III. MATERIALS

Proteins

The algorithm was tested on several proteins from the RCSB protein database [12] that were obtained from Nuclear Magnetic Resonance (NMR) experiments. Such structures typically contain several models of the same protein. These models represent the structural dynamics of the protein in an aqueous medium and thus typically contain both conserved and variable regions. This makes them challenging targets for conventional RMSD superposition. We used the following structures: 1ADZ, 1AHL, 1AK7, 2CPD, 2KHI, 2LKL and 2YS9.
IV. RESULTS

The algorithm was executed 15 times on each protein (see Table I) with different seeds. The computations were carried out on an Intel Core i7-8750H 2.20 GHz processor.

TABLE I: Results of applying THESEUS-PP to the test structures. First column: PDB identifier. Second column: the number of Cα atoms used in the superposition. Third column: the model identifiers. Fourth column: mean convergence time and standard deviation. Last column: number of epochs.

PDB ID   Length (Amino Acids)   Protein Models   Average Computational Time (seconds)   Epochs
1ADZ     71                     0 and 1          0.64 ± 0.15                            210 ± 45
1AHL     49                     0 and 2          0.53 ± 0.119                           166 ± 36
1AK7     174                    0 and 1          0.74 ± 0.23                            222 ± 67
2CPD     99                     0 and 2          0.51 ± 0.093                           161 ± 30
2KHI     95                     0 and 1          0.47 ± 0.11                            149 ± 39
2LKL     81                     0 and 8          0.59 ± 0.092                           187 ± 30
2YS9     70                     0 and 3          0.47 ± 0.11                            144 ± 33

An example of a pair of superimposed structures is shown in Figure 2. For comparison, the superposition resulting from the conventional RMSD method, as calculated using Biopython [13], is shown on the left (Figure 2a). The THESEUS-PP superposition is shown on the right (Figure 2b). Note how the former fails to adequately distinguish regions with high variance from regions with low variance, resulting in poor matching of conserved regions. Additional, similar figures of superimposed structures can be found in the Appendix.
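
For reference, the conventional baseline can be reproduced with Biopython's Superimposer [13]. A brief sketch, in which the file name and model indices are illustrative (models 0 and 3 of 2YS9, as in Table I):

from Bio.PDB import PDBParser, Superimposer

structure = PDBParser(QUIET=True).get_structure("2YS9", "2ys9.pdb")
fixed = [a for a in structure[0].get_atoms() if a.get_name() == "CA"]
moving = [a for a in structure[3].get_atoms() if a.get_name() == "CA"]

sup = Superimposer()
sup.set_atoms(fixed, moving)   # least-squares fit of moving onto fixed
sup.apply(moving)              # rotate and translate the moving atoms in place
print(f"RMSD: {sup.rms:.2f} Å")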


Fig. 2: Protein superposition for two conformations of protein 2YS9 obtained from (a) conventional RMSD superimposition
and (b) THESEUS-PP. The protein in green is rotated (X2 ). The images are generated with PyMOL [14]. Graph (c) shows
the pairwise distances (in Å) between the Cα coordinates of the structure pairs. The blue and orange lines represent RMSD
and THESEUS-PP superposition, respectively.
V. CONCLUSION

Probabilistic programming is a powerful, emerging paradigm for probabilistic protein structure analysis, prediction and design. Here, we present a Bayesian model for protein structure superposition implemented in the deep probabilistic programming language Pyro and building on the previously reported THESEUS maximum likelihood model. MAP estimates of its parameters are readily obtained using Pyro's automated inference engine.

The original THESEUS algorithm, which makes use of maximum likelihood estimation using iterative expectation-maximization, is considerably faster, with an average execution time under 0.1 s. Although some of the longer execution time of THESEUS-PP is due to the use of variational inference and priors, it is clear that the flexibility and productivity of a probabilistic programming language can come with a speed penalty.

Recently, end-to-end protein structure prediction using deep learning methods has become possible [15]. We envision that Bayesian protein structure prediction will soon be possible using a deep probabilistic programming approach, which will lead to protein structure predictions with associated statistical uncertainties. In order to achieve this goal, suitable error models and likelihood functions need to be developed and incorporated in these models. The THESEUS-PP model can potentially serve as such an error model, by interpreting M as the predicted structure and a single rotated and translated X as the observed protein structure. During training of the probabilistic model, regions in M that are wrongly predicted can be assigned high variance, while correctly predicted regions can be assigned low variance. Thus, it can be expected that an error model based on THESEUS-PP will make estimation of these models easier, as the error function can more readily distinguish between partly correct and entirely wrong predictions, which is notoriously difficult for RMSD-based methods [16].

REFERENCES

[1] W. Kabsch, "A discussion of the solution for the best rotation to relate two sets of vectors," Acta Cryst. A, vol. 34, pp. 827–828, 1978.
[2] B. Horn, "Closed-form solution of absolute orientation using unit quaternions," J. Opt. Soc. Am. A, vol. 4, pp. 629–642, 1987.
[3] E. Coutsias, C. Seok, and K. Dill, "Using quaternions to calculate RMSD," J. Comp. Chem., vol. 25, pp. 1849–1857, 2004.
[4] D. L. Theobald and D. S. Wuttke, "Empirical Bayes hierarchical models for regularizing maximum likelihood estimation in the matrix Gaussian Procrustes problem," PNAS, vol. 103, pp. 18521–18527, 2006.
[5] D. L. Theobald and P. A. Steindel, "Optimal simultaneous superpositioning of multiple structures with missing data," Bioinformatics, vol. 28, pp. 1972–1979, 2012.
[6] K. Mardia and I. Dryden, "The statistical analysis of shape data," Biometrika, vol. 76, no. 2, pp. 271–281, 1989.
[7] E. Bingham, J. P. Chen, M. Jankowiak, F. Obermeyer, N. Pradhan, T. Karaletsos, R. Singh, P. Szerlip, P. Horsfall, and N. D. Goodman, "Pyro: Deep universal probabilistic programming," Journal of Machine Learning Research, 2018.
[8] X. Perez-Sala, L. Igual, S. Escalera, and C. Angulo, "Uniform sampling of rotations for discrete and continuous learning of 2D shape models," in Robotic Vision: Technologies for Machine Learning and Vision Applications. IGI Global, 2013, pp. 23–42.
[9] J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.
[10] A. Graves, "Generating sequences with recurrent neural networks," arXiv preprint arXiv:1308.0850, 2013.
[11] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in PyTorch," in NIPS-W, 2017.
[12] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne, "The Protein Data Bank," Nucleic Acids Research, vol. 28, no. 1, pp. 235–242, 2000.
[13] P. J. Cock, T. Antao, J. T. Chang, B. A. Chapman, C. J. Cox, A. Dalke, I. Friedberg, T. Hamelryck, F. Kauff, B. Wilczynski et al., "Biopython: freely available Python tools for computational molecular biology and bioinformatics," Bioinformatics, vol. 25, no. 11, pp. 1422–1423, 2009.
[14] Schrödinger, LLC, "The PyMOL molecular graphics system, version 1.8," November 2015.
[15] M. AlQuraishi, "End-to-end differentiable learning of protein structure," Cell Systems, 2019.
[16] I. Kufareva and R. Abagyan, "Methods of protein structure comparison," in Homology Modeling. Springer, 2011, pp. 231–257.
[17] J. Salvatier, T. V. Wiecki, and C. Fonnesbeck, "Probabilistic programming in Python using PyMC3," PeerJ Computer Science, vol. 2, p. e55, 2016.

CONTRIBUTIONS AND ACKNOWLEDGEMENTS

Implemented algorithm in Pyro: LSM. Contributed code: ASA, AM. Wrote article: LSM, TH, ASA. Prototyped algorithm in the probabilistic programming language PyMC3 [17]: TH, WB, BNR. Performed experiments: LSM. Designed experiments: TH, DT. LSM and ASA acknowledge support from the Independent Research Fund Denmark (grant: "Resurrecting ancestral proteins in silico to understand how cancer drugs work") and Innovationsfonden/Skanned.com (grant: "Intelligent accounting document management using probabilistic programming"), respectively. We thank Jotun Hein, Michael Golden, Kanti Mardia and Wouter Boomsma for suggestions and discussions.

DATA AND SOFTWARE AVAILABILITY

Pyro [7] and PyTorch [11] based code is available at https://github.com/LysSanzMoreta/Theseus-PP
