Full “Laplacianised” posterior naive Bayesian algorithm

Abstract

Background

In the last decade the standard Naive Bayes (SNB) algorithm has been widely employed in multi–class classification problems in cheminformatics. This popularity is mainly because the algorithm is simple to implement and in many cases yields respectable classification results. Using clever heuristic arguments “anchored” by insightful cheminformatics knowledge, Xia et al. simplified the SNB algorithm further and termed the result the Laplacian Corrected Modified Naive Bayes (LCMNB) approach, which has been widely used in cheminformatics since its publication.

In this note we mathematically illustrate the conditions under which Xia et al.’s simplification holds. It is our hope that this clarification could help Naive Bayes practitioners in deciding when it is appropriate to employ the LCMNB algorithm to classify large chemical datasets.

Results

A general formulation that subsumes the simplified Naive Bayes version is presented. Unlike the widely used NB method, the standard Naive Bayes description presented in this work is discriminative (not generative) in nature, which may lead to further applications of the SNB method.

Conclusions

Starting from the standard Naive Bayes (SNB) algorithm, we have derived mathematically the relationship between Xia et al.’s ingenious, but heuristic, algorithm and the SNB approach. We have also demonstrated the conditions under which Xia et al.’s crucial assumptions hold. We therefore hope that the new insight and recommendations provided will prove useful to the cheminformatics community.

Background

Broadly speaking, there are two conceptually different ways to solve statistical problems: the frequentist and the Bayesian approaches. There are numerous excellent review articles and textbooks on the pros and cons of each method, such as the recent book by Murphy [1]. Unlike the frequentist approach, the Bayesian approach allows any a priori knowledge about the probability distribution function that one assumes might have generated the given data (in the first place) to be taken into account when estimating this distribution function from the data at hand. If the data are noise–free and “complete”, the role of the a priori information in estimating the distribution function diminishes drastically. However, the a priori information can be crucial when the data are noisy and sparse. The latter scenario is typical of realistic large chemical datasets, which, arguably, makes Bayesian statistics a powerful data analysis tool.

Unfortunately, Bayesian statistics in its fullest form is not computationally feasible in realistic cheminformatics data analyses. However, in recent years a simplified version of the Bayesian approach, commonly known as the “Naive” Bayesian algorithm, has been found to be a useful classification tool in multi–class classification problems in cheminformatics. To this end a Naive Bayesian classifier is built on a binary descriptor space. The descriptors/features $x_j$ representing the compounds to be classified assume binary values 0 or 1, where $j = 1, 2, \ldots, L$ and $L$ can typically be more than 1,000. Thus for some cheminformatics practitioners even the Naive Bayesian algorithm in its standard form is computationally prohibitive when the dataset is large. In this regard, Xia et al. [2] proposed a simpler version of the standard Naive Bayesian algorithm, albeit for binary classification problems; slight variants of this algorithm for multi–class classification can also be found in [3, 4]. According to Rogers et al. [5], Rogers being a co–author of the work presented in [2], “the standard Naive Bayes was modified by considering only the effect of the presence of a feature and not its absence”. There are also a few more noticeable aspects of this proposed simplification: (a) the authors cleverly estimate directly – albeit heuristically – the a posteriori class probability for the present feature; (b) these authors (rather ingeniously) incorporate a Laplacian correction into the estimated posterior class probability; and (c) the authors deem absent features not discriminating enough and therefore discard their contributions to the estimation of the posterior class. More than anything else, it is this omission of the absent features from the Standard Naive Bayes (SNB) algorithm that makes Xia et al.’s proposed Naive Bayes algorithm, termed Laplacian Corrected Modified Naive Bayes (LCMNB), (and its variants by different groups) computationally fast.

It is these three points, (a), (b) and (c), that we expound on in a mathematical setting to demonstrate under which conditions they hold – not only in an abstract sense, but also in the practical sense for a NB practitioner to make an informed decision as to when it is appropriate to employ SNB or LCMNB, in the cheminformatics context.

Methods

Naive Bayes

From Bayes’ theorem recall that [6]:

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(\mathbf{x} \mid \omega_i)}{p(\mathbf{x})}$$
(1)

where $\mathbf{x} = (x_1, x_2, \ldots, x_L)$ and $\omega_i$ denote the feature vectors and class labels, respectively; $x_j$ and $L$ being as described before, whereas $i$ is just an index for the class labels. The terms $p(\omega_i \mid \mathbf{x})$, $p(\mathbf{x} \mid \omega_i)$, $p(\omega_i)$, and $p(\mathbf{x})$ refer to the posterior probability for $\omega_i$ given $\mathbf{x}$, the descriptor vector distribution conditioned on class $\omega_i$, the a priori probability of class $\omega_i$ occurring, and the descriptor vector density function, respectively – for more details, see refs. [3, 4, 6].

The left hand side of Eq. 1 can be expressed as follows [1, 7]

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(\omega_i \mid x_1, x_2, \ldots, x_L)}{p(\omega_i)}$$
(2)

By virtue of Bayes’ theorem, $p(\omega_i \mid x_1, x_2, \ldots, x_L)$ can be rewritten as

$$p(\omega_i \mid x_1, x_2, \ldots, x_L) = \frac{p(x_1, x_2, \ldots, x_L \mid \omega_i)\, p(\omega_i)}{p(x_1, x_2, \ldots, x_L)}$$
(3)

which in turn allows us to rewrite Eq. 2 as

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(\omega_i)\, p(x_1, x_2, \ldots, x_L \mid \omega_i)}{p(\omega_i)\, p(x_1, x_2, \ldots, x_L)} = \frac{p(x_1, x_2, \ldots, x_L \mid \omega_i)}{p(x_1, x_2, \ldots, x_L)}$$
(4)

Making use of the chain rule of probability [1, 8], we can express $p(x_1, x_2, \ldots, x_L \mid \omega_i)$ as

$$p(x_1, x_2, \ldots, x_L \mid \omega_i) = p(x_1 \mid \omega_i)\, p(x_2 \mid \omega_i, x_1) \cdots p(x_L \mid \omega_i, x_1, x_2, \ldots, x_{L-1})$$
(5)

Plugging the right hand side of the equation above into Eq. 4 results in

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(x_1 \mid \omega_i)\, p(x_2 \mid \omega_i, x_1) \cdots p(x_L \mid \omega_i, x_1, x_2, \ldots, x_{L-1})}{p(x_1, x_2, \ldots, x_L)}$$
(6)

In practice, it is extremely difficult to estimate $p(\omega_i \mid \mathbf{x})$ or $p(\mathbf{x} \mid \omega_i)$. This reality inevitably forces one to make concessions over the degree of accuracy the estimated $p(\omega_i \mid \mathbf{x})$ or $p(\mathbf{x} \mid \omega_i)$ can deliver. One widely employed scheme to obtain these probability distributions with compromised accuracy is to assume that the individual descriptors $x_j$, $j = 1, 2, \ldots, L$, are independent conditional on $\omega_i$. It is this naive assumption of independence among features to which the term “Naive” in “Naive Bayesian” refers.

Under this naive assumption, in Eq. 6, $p(x_2 \mid \omega_i) = p(x_2 \mid \omega_i, x_1)$, $p(x_3 \mid \omega_i) = p(x_3 \mid \omega_i, x_1, x_2)$, ..., $p(x_L \mid \omega_i) = p(x_L \mid \omega_i, x_1, x_2, \ldots, x_{L-1})$. Thus, Eq. 6 modifies to

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(x_1 \mid \omega_i)\, p(x_2 \mid \omega_i) \cdots p(x_L \mid \omega_i)}{p(x_1, x_2, \ldots, x_L)}$$
(7)

Multiplying the top and bottom of Eq. 7 by $\frac{p^L(\omega_i)}{\prod_{j=1}^{L} p(x_j)}$ yields

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{\dfrac{p^L(\omega_i)\, p(x_1 \mid \omega_i)\, p(x_2 \mid \omega_i) \cdots p(x_L \mid \omega_i)}{\prod_{j=1}^{L} p(x_j)}}{\dfrac{p^L(\omega_i)\, p(x_1, x_2, \ldots, x_L)}{\prod_{j=1}^{L} p(x_j)}}$$
(8)
$$= \frac{\dfrac{p^L(\omega_i)\, p(x_1 \mid \omega_i)\, p(x_2 \mid \omega_i) \cdots p(x_L \mid \omega_i)}{\prod_{j=1}^{L} p(x_j)}\; \prod_{j=1}^{L} p(x_j)}{p^L(\omega_i)\, p(x_1, x_2, \ldots, x_L)}$$
(9)

Then making use of the fact that $p(\omega_i \mid x_1) = \frac{p(\omega_i)\, p(x_1 \mid \omega_i)}{p(x_1)}$, $p(\omega_i \mid x_2) = \frac{p(\omega_i)\, p(x_2 \mid \omega_i)}{p(x_2)}$, ..., $p(\omega_i \mid x_L) = \frac{p(\omega_i)\, p(x_L \mid \omega_i)}{p(x_L)}$, we can rewrite Eq. 9 as

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{p(\omega_i \mid x_1)\, p(\omega_i \mid x_2) \cdots p(\omega_i \mid x_L)\; \prod_{j=1}^{L} p(x_j)}{p^L(\omega_i)\, p(x_1, x_2, \ldots, x_L)}$$
(10)

or more compactly as

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{\prod_{j=1}^{L} p(\omega_i \mid x_j)}{p^L(\omega_i)} \times \frac{\prod_{j=1}^{L} p(x_j)}{p(x_1, x_2, \ldots, x_L)}$$
(11)

Clearly $\frac{\prod_{j=1}^{L} p(x_j)}{p(x_1, x_2, \ldots, x_L)}$ is common to all classes and therefore plays no role in classification. Thus, in practice (in the Naive Bayes context with which this work is concerned) one is required to estimate $p(\omega_i \mid x_j)$ and $p(\omega_i)$.
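
As a numerical sanity check on the algebra leading to Eq. 11, the short Python sketch below constructs a toy two–class problem with three conditionally independent binary descriptors and verifies that the two sides of Eq. 11 agree for every possible descriptor vector. All numerical values and helper names are illustrative choices of ours, not taken from the original work.

```python
import itertools
import numpy as np

# Toy set-up (illustrative numbers only): two classes, three binary descriptors
# that are conditionally independent given the class.
priors = np.array([0.3, 0.7])            # p(omega_i)
theta = np.array([[0.2, 0.6, 0.9],       # p(x_j = 1 | omega_0)
                  [0.7, 0.1, 0.5]])      # p(x_j = 1 | omega_1)

def p_x_given_class(x, i):
    """p(x | omega_i) under the naive (conditional independence) assumption."""
    return np.prod(theta[i] ** x * (1 - theta[i]) ** (1 - x))

for x in itertools.product([0, 1], repeat=3):
    x = np.array(x)
    p_x = sum(p_x_given_class(x, i) * priors[i] for i in range(2))           # p(x_1,...,x_L)
    p_xj = [sum(theta[i][j] ** x[j] * (1 - theta[i][j]) ** (1 - x[j]) * priors[i]
                for i in range(2)) for j in range(3)]                        # p(x_j)
    for i in range(2):
        lhs = (p_x_given_class(x, i) * priors[i] / p_x) / priors[i]          # p(omega_i | x) / p(omega_i)
        post_j = [theta[i][j] ** x[j] * (1 - theta[i][j]) ** (1 - x[j]) * priors[i] / p_xj[j]
                  for j in range(3)]                                         # p(omega_i | x_j)
        rhs = np.prod(post_j) / priors[i] ** 3 * np.prod(p_xj) / p_x         # right-hand side of Eq. 11
        assert np.isclose(lhs, rhs)
print("Eq. 11 verified on the toy example.")
```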

Since generative approaches can be informative and “simpler” than their discriminative counterparts [9], we make use of Bayes’ theorem again, i.e., $p(\omega_i \mid x_j) = \frac{p(\omega_i)\, p(x_j \mid \omega_i)}{p(x_j)}$, and then estimate $p(\omega_i \mid x_j)$ through $\frac{p(\omega_i)\, p(x_j \mid \omega_i)}{p(x_j)}$, where $p(x_j) = \sum_{i=1}^{C} p(x_j \mid \omega_i)\, p(\omega_i)$ with $C$ referring to the number of classes. $p(\omega_i)$ denotes the a priori class probability, which is relatively easy to estimate. Thus, in our Bayesian context, the estimation of $p(\omega_i \mid x_j)$ boils down in practice to estimating $p(x_j \mid \omega_i)$.

Estimation of $p(x_j \mid \omega_i)$, with $x_j = 1$ and $0$

$p(x_j \mid \omega_i)$ can be estimated using the given data and assuming a Beta distribution as an a priori distribution for $p(x_j \mid \omega_i)$ [10]. (There are other possible prior distributions from which one can choose, but we select the Beta distribution for reasons that will transpire later.) As described in Appendix A, a Beta a priori distribution $\mathrm{Beta}(\alpha_i, \beta_i)$ for $p(x_j \mid \omega_i)$ results in a $p(x_j \mid \omega_i)$ estimator of the form [11]:

$$p(x_j = 1 \mid \omega_i) = \frac{N_{ij} + \alpha_i}{N_{\omega_i} + \beta_i + \alpha_i}$$
(12)

and of course

$$p(x_j = 0 \mid \omega_i) = 1 - \frac{N_{ij} + \alpha_i}{N_{\omega_i} + \beta_i + \alpha_i}$$
(13)

where $N_{\omega_i}$ and $N_{ij}$, respectively, denote the number of compounds in class $\omega_i$ and the number of compounds in this class whose descriptor $x_j$ assumes the value 1. $\alpha_i$ and $\beta_i$ are per-class hyper–parameters of the Beta distribution, and the valid ranges of values that these hyper–parameters can assume are as defined in Appendix A. When $\alpha_i$ and $\beta_i$ equal 1, the terms $\alpha_i$ and $\beta_i + \alpha_i$ in Eqs. 12–13 can be viewed as a “Laplacian correction”.
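
For readers who prefer code to formulae, the following minimal sketch implements Eqs. 12–13 for a binary descriptor matrix; with $\alpha_i = \beta_i = 1$ it reduces to the Laplacian–corrected estimate. The function and array names are our own illustrative choices, not part of the original work.

```python
import numpy as np

def class_conditional_estimates(X, y, alpha=1.0, beta=1.0):
    """Eq. 12: p(x_j = 1 | omega_i) = (N_ij + alpha_i) / (N_omega_i + beta_i + alpha_i).

    X     : (N, L) binary descriptor matrix.
    y     : (N,) integer class labels.
    alpha, beta : Beta-prior hyper-parameters (scalars here for simplicity;
                  per-class values could be supplied instead).
    Returns a (C, L) array whose [i, j] entry estimates p(x_j = 1 | omega_i).
    """
    classes = np.unique(y)
    p = np.empty((classes.size, X.shape[1]))
    for i, c in enumerate(classes):
        N_i = np.sum(y == c)              # N_omega_i: compounds in class omega_i
        N_ij = X[y == c].sum(axis=0)      # N_ij for every descriptor j
        p[i] = (N_ij + alpha) / (N_i + beta + alpha)
    return p                              # p(x_j = 0 | omega_i) is simply 1 - p (Eq. 13)
```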

Results and discussion

Estimation of $p(\omega_i \mid x_j = 1)$ and $p(\omega_i \mid x_j = 0)$

Estimation of $p(\omega_i \mid x_j = 1)$: In Our Approach

Remark 1

Assume that we have $N$ chemical compounds (and their activity labels) available for training, where $N_{\omega_i}$ of these compounds belong to class $\omega_i$.

Remark 2

Assume that the class a priori distribution is taken as $p(\omega_i) = \frac{N_{\omega_i}}{N}$, where $N_{\omega_i} \gg \alpha_i + \beta_i$ (a valid assumption for any realistic large chemical dataset).

By virtue of Remark 1 and Eq. 12, the estimate of $p(\omega_i \mid x_j = 1)$ becomes

$$p(\omega_i \mid x_j = 1) = \frac{p(x_j = 1 \mid \omega_i)\, p(\omega_i)}{p(x_j = 1)} = \frac{\dfrac{N_{ij} + \alpha_i}{N_{\omega_i} + \alpha_i + \beta_i} \times \dfrac{N_{\omega_i}}{N}}{\sum_{i=1}^{C} \dfrac{N_{ij} + \alpha_i}{N_{\omega_i} + \alpha_i + \beta_i} \times \dfrac{N_{\omega_i}}{N}}$$
(14)

(recall that $p(x_j) = \sum_{i=1}^{C} p(x_j \mid \omega_i)\, p(\omega_i)$).

Because of Remark 2, Eq. 14 can be simplified to

$$p(\omega_i \mid x_j = 1) = \frac{N_{ij} + \alpha_i}{\sum_{i=1}^{C} N_{ij} + \sum_{i=1}^{C} \alpha_i} = \frac{N_{ij} + \alpha_i}{N_j^{+} + \sum_{i=1}^{C} \alpha_i}$$
(15)

where $N_j^{+}$ is the number of times $x_j$ assumes the value 1.
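
Under Remarks 1 and 2, Eq. 15 therefore reduces to simple counting. The sketch below (illustrative names, a scalar $\alpha$ for simplicity) returns $p(\omega_i \mid x_j = 1)$ for every class–descriptor pair.

```python
import numpy as np

def posterior_given_present(X, y, alpha=1.0):
    """Eq. 15: p(omega_i | x_j = 1) = (N_ij + alpha_i) / (N_j^+ + sum_i alpha_i)."""
    classes = np.unique(y)
    C = classes.size
    N_ij = np.vstack([X[y == c].sum(axis=0) for c in classes])  # (C, L) per-class counts
    N_j_plus = X.sum(axis=0)                                    # N_j^+: times x_j equals 1
    return (N_ij + alpha) / (N_j_plus + C * alpha)              # scalar alpha => sum_i alpha_i = C * alpha
```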

Estimation of $p(\omega_i \mid x_j = 1)$: In Xia et al.’s Formulation

In the approach of Xia et al., $p(\omega_i \mid x_j = 1)$ is estimated as

$$p(\omega_i \mid x_j = 1) = \frac{N_{ij} + A_i}{N_j^{+} + K}$$
(16)

where $K$ is as defined in Xia et al., and in their paper $A_i$ is given as

$$A_i = p(\omega_i) \times K, \quad \text{with } K = \sum_{i=1}^{C} A_i \ \text{ since } \ \sum_{i=1}^{C} p(\omega_i) = 1$$

Eq. 16 constitutes what Xia et al. term “the Laplacian–Corrected Modified Naive Bayes (LCMNB)” estimator for $p(\omega_i \mid x_j = 1)$.

If $\alpha_i$ in Eq. 15 is set to $A_i$, Eq. 15 is exactly equivalent to Xia et al.’s estimator for $p(\omega_i \mid x_j = 1)$, as can be seen in Eq. 16.

We note in passing that in Xia et al.’s case $C = 2$ and $p(\omega_2) = \frac{1}{K}$, which in their nomenclature is denoted by $p(\mathrm{Active})$ – that is, $A_2 = 1$ while $A_1 = K - 1$.
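
To make the correspondence concrete, the following minimal check (with made-up counts for a two-class problem) confirms that choosing $\alpha_i = A_i = p(\omega_i) K$ in Eq. 15 reproduces Xia et al.’s Eq. 16 exactly; the variable names are ours.

```python
import numpy as np

# Illustrative counts for one descriptor j in a two-class (C = 2) problem.
N_ij = np.array([12.0, 3.0])       # N_ij per class
N_j_plus = N_ij.sum()              # N_j^+
priors = np.array([0.8, 0.2])      # p(omega_i)
K = 5.0                            # Xia et al.'s smoothing constant

A = priors * K                     # A_i = p(omega_i) * K, hence sum_i A_i = K
eq16 = (N_ij + A) / (N_j_plus + K)         # LCMNB estimator, Eq. 16
eq15 = (N_ij + A) / (N_j_plus + A.sum())   # Eq. 15 with alpha_i = A_i
assert np.allclose(eq15, eq16)
```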

Initially we employed the Beta a priori distribution for the class conditional distribution to ascertain the equivalence of Eqs. 15 and 16. Fortunately, however, we have ended up with general equations (Eqs. 14–15) that not only encapsulate the LCMNB scheme of Xia et al., but also subsume the various other variants of LCMNB, such as those discussed in Nidhi et al.’s and Nigsch et al.’s papers [3, 4].

At any rate, let us proceed to the nub of this work: identifying the conditions under which the LCMNB algorithm holds with respect to the SNB algorithm. But first we need to describe the estimation of $p(\omega_i \mid x_j = 0)$.

Estimation of $p(\omega_i \mid x_j = 0)$: In Our Approach

In regard to the case of $x_j = 0$, we make use of Remark 1, Remark 2 and Eq. 13, which yield an estimator for $p(\omega_i \mid x_j = 0)$ as

$$p(\omega_i \mid x_j = 0) = \frac{N_{\omega_i} - (N_{ij} + \alpha_i)}{N - \left(N_j^{+} + \sum_{i=1}^{C} \alpha_i\right)}$$
(17)

Naive Bayes: scoring function

For notational convenience, let us denote $\frac{N_{ij} + \alpha_i}{N_j^{+} + \sum_{i=1}^{C} \alpha_i}$ in Eq. 15 and $\frac{N_{\omega_i} - (N_{ij} + \alpha_i)}{N - (N_j^{+} + \sum_{i=1}^{C} \alpha_i)}$ in Eq. 17 by $\xi_{ij}$ and $\nu_{ij}$, respectively.

Thus, $p(\omega_i \mid x_j = 1)$ and $p(\omega_i \mid x_j = 0)$ may be written more succinctly as $p(\omega_i \mid x_j) = \xi_{ij}^{x_j}\, \nu_{ij}^{(1 - x_j)}$, which allows us to express Eq. 11 more compactly as

$$\frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \frac{\prod_{j} p(\omega_i \mid x_j)}{p^L(\omega_i)} \times \frac{\prod_{j} p(x_j)}{p(x_1, x_2, \ldots, x_L)} = \frac{\prod_{j} \xi_{ij}^{x_j}\, \nu_{ij}^{(1 - x_j)}}{p^L(\omega_i)} \times \frac{\prod_{j} p(x_j)}{p(x_1, x_2, \ldots, x_L)}$$
(18)

Now we come to the core of this work: under which conditions does the LCMNB algorithm hold with respect to the SNB algorithm? Before we answer this question, we deem it instructive and more insightful to map Eq. 18 monotonically to a discriminant function, a “scoring function” (so to speak).

To this end, taking the logarithm of Eq. 18 results in

$$S_{\omega_i}(\mathbf{x}) = \ln \frac{p(\omega_i \mid \mathbf{x})}{p(\omega_i)} = \ln \left[ \frac{\prod_{j} \xi_{ij}^{x_j}\, \nu_{ij}^{(1 - x_j)}}{p^L(\omega_i)} \times \frac{\prod_{j} p(x_j)}{p(x_1, x_2, \ldots, x_L)} \right]$$
(19)
$$= \sum_{j} x_j \ln \xi_{ij} + \sum_{j} (1 - x_j) \ln \nu_{ij} - L \ln p(\omega_i) + \sum_{j} \ln p(x_j) - \ln p(x_1, x_2, \ldots, x_L)$$
(20)

Self–evidently, the term $\sum_{j} \ln p(x_j) - \ln p(x_1, x_2, \ldots, x_L)$ is common to all classes and therefore does not play any role in classifying a given new compound. In other words, for practical classification purposes we are only interested in the class-dependent terms, i.e.,

$$D_{\omega_i}(\mathbf{x}) = \sum_{j} x_j \ln \xi_{ij} - \ln p(\omega_i) + \sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i)$$
(21)

where $S_{\omega_i}(\mathbf{x}) = D_{\omega_i}(\mathbf{x}) + \sum_{j} \ln p(x_j) - \ln p(x_1, x_2, \ldots, x_L)$.
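
Putting the pieces together, Eq. 21 can be evaluated as in the sketch below, where xi and nu denote the $C \times L$ matrices of $\xi_{ij}$ (Eq. 15) and $\nu_{ij}$ (Eq. 17) and priors holds $p(\omega_i)$; the names are illustrative, and a compound is assigned to the class whose score is largest.

```python
import numpy as np

def full_discriminant(x, xi, nu, priors):
    """Eq. 21: D_i(x) = sum_j x_j ln(xi_ij) - ln p(omega_i)
                        + sum_j (1 - x_j) ln(nu_ij) - (L - 1) ln p(omega_i).

    x : (L,) binary descriptor vector; xi, nu : (C, L); priors : (C,) class priors."""
    L = x.size
    present = (np.log(xi) * x).sum(axis=1) - np.log(priors)               # the block LCMNB keeps
    absent = (np.log(nu) * (1 - x)).sum(axis=1) - (L - 1) * np.log(priors)
    return present + absent                                               # one score per class

# Classification rule: assign x to the class with the largest D_i(x), e.g.
# predicted_class = int(np.argmax(full_discriminant(x, xi, nu, priors)))
```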

Conditions

In Xia et al.’s approach, the LCMNB algorithm is none other than $\sum_{j} x_j \ln \xi_{ij} - \ln p(\omega_i)$ in Eq. 21. This means that in Xia et al.’s scheme the contributions from the terms depending on $x_j = 0$ for a given class, i.e.,

$$\sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i), \quad \forall\, i,\ i = 1, 2, \ldots, C$$
(22)

are discarded. To the best of our knowledge, neither in Xia et al. nor in any other paper on the LCMNB approach has it been demonstrated that (i) the contribution of Eq. 22 is zero, i.e.,

$$\sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i) = 0, \quad \forall\, i,\ i = 1, 2, \ldots, C$$
(23)

equally, in these papers, it has not been shown that (ii)

$$\left| \sum_{j} x_j \ln \xi_{ij} - \ln p(\omega_i) \right| \gg \left| \sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i) \right|, \quad \forall\, i,\ i = 1, 2, \ldots, C$$
(24)

nor has it been established that (iii)

$$\sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i) = \text{constant}, \quad \forall\, i,\ i = 1, 2, \ldots, C$$
(25)

Thus, unless one (or more) of the above conditions – (i), (ii) and (iii) – is met, the assumption on which the Modified Naive Bayesian algorithm is based is questionable, and its practitioners should pay attention to this discrepancy; clearly it is not justifiable to discard from the outset the contribution of $\sum_{j} (1 - x_j) \ln \nu_{ij} - (L - 1) \ln p(\omega_i)$ simply because the features $x_j$ are absent, i.e. $x_j = 0$.
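
In practice, one can probe conditions (i)–(iii) on a given dataset by computing the two blocks of Eq. 21 separately and inspecting their magnitudes and their spread across classes; a minimal sketch follows, reusing the illustrative names xi, nu and priors from the sketch above.

```python
import numpy as np

def split_scores(X, xi, nu, priors):
    """Return the present-feature (LCMNB) block and the absent-feature block of
    Eq. 21 for every compound (rows of X) and every class, so that conditions
    (i)-(iii) can be inspected empirically."""
    L = X.shape[1]
    present = X @ np.log(xi).T - np.log(priors)                   # sum_j x_j ln(xi_ij) - ln p(omega_i)
    absent = (1 - X) @ np.log(nu).T - (L - 1) * np.log(priors)    # the block LCMNB discards
    return present, absent

# Condition (ii), for instance, would require |present| >> |absent| for every class;
# condition (iii) would require each row of `absent` to be (nearly) constant across classes.
```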

For completeness, we also consider the case of the highly popular class prior distribution $p(\omega_i) = \frac{1}{C}$, i.e. $p(\omega_1) = p(\omega_2) = \ldots = p(\omega_C)$. We hasten to add that this option was not included in the LCMNB scheme. At any rate, by simply repeating the arguments in the preceding sections, it is straightforward to show that one again ends up with Eq. 21. In this scenario, though, $L \ln p(\omega_i)$ is common to all classes and therefore does not play a role in classifying a new compound, i.e., $D_{\omega_i}(\mathbf{x})$ reduces to

$$D_{\omega_i}(\mathbf{x}) = \sum_{j} x_j \ln \xi_{ij} + \sum_{j} (1 - x_j) \ln \nu_{ij}$$
(26)

Conclusions

Starting from the standard Naive Bayes (SNB) algorithm, we have derived mathematically the relationship between Xia et al.’s ingenious, but heuristic, algorithm and the standard Naive Bayes approach. We have also described the conditions under which Xia et al.’s crucial assumption – that contributions from absent features can be discarded – holds. It is our hope that, with this new insight, cheminformaticians may now be able to use the modified version of the standard Naive Bayes algorithm, as proposed by Xia et al. and subsequently by Nidhi et al. and Nigsch et al., more efficiently.

Appendix

Appendix A: Estimator of p(x j |ω i )

Here we give, for completeness, the proof that a Beta a priori distribution leads to Eqs. 12 and 13 in the text.

For bookkeeping:

$\omega_i$: class label indexed by $i$, $i = 1, 2, \ldots, C$.

$C$: Number of classes.

$N_{\omega_i}$: Number of samples in class $\omega_i$.

$N_{ij}$: Number of samples in class $\omega_i$ with feature $x_j = 1$, $j = 1, 2, \ldots, L$.

$L$: Number of features.

We state from the outset that in the following derivation we follow closely the descriptions given in ref. [10]. We also note, for clarity’s sake, that in the following analyses we abuse notation and use $x_{jk}$ for both the random variable and its realization.

In this work, $\mathbf{x} \in \{0, 1\}^L$, i.e. $x_j \in \{0, 1\}$, and we suppose that the $x_j$ are independent Bernoulli random variables (this is in fact the assumption made in the Naive Bayesian approach). Thus, in the Naive Bayesian setting, $p(\mathbf{x} \mid \omega_i)$ can be given as

$$p(\mathbf{x} \mid \omega_i) = \prod_{j=1}^{L} \mathrm{Ber}(x_j \mid \mu_{ij}) = \prod_{j=1}^{L} \mu_{ij}^{x_j} (1 - \mu_{ij})^{1 - x_j}$$
(27)

where $\mu_{ij}$ is an estimate for the conditional probability that feature $j$ occurs in class $\omega_i$, and is what we are trying to estimate given a set of compounds assumed to belong to class $\omega_i$. (In our context, $\mu_{ij}$ is an estimator for $p(x_j \mid \omega_i)$, where $p(x_j \mid \omega_i)$ is as defined in the text.)

To estimate $\mu_{ij}$ in a Bayesian framework, we first view $\mu_{ij}$ as a random variable, and then choose an “appropriate” prior and likelihood for the random variable $\mu_{ij}$.

Let us suppose that our a priori knowledge about the random variable $\mu_{ij}$ indicates that $\mu_{ij}$ is described by a Beta distribution, i.e.,

$$\pi(\mu_{ij}) = \frac{1}{B(\alpha_i, \beta_i)}\, \mu_{ij}^{\alpha_i - 1} (1 - \mu_{ij})^{\beta_i - 1}, \quad 0 \le \mu_{ij} \le 1,\ \alpha_i, \beta_i > 0,\ i = 1, 2, \ldots, C$$
(28)

where $B(\alpha_i, \beta_i)$ ensures that the Beta distribution is normalised.

Using Bayes’ theorem, the posterior probability for $\mu_{ij}$ given the training data can be written as

$$\pi(\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}) = \frac{f(x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}} \mid \mu_{ij})\, \pi(\mu_{ij})}{\int_{0}^{1} f(x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}} \mid \mu_{ij})\, \pi(\mu_{ij})\, d\mu_{ij}}$$
(29)

where $f(x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}} \mid \mu_{ij})$ refers to the likelihood, and $x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}$ denote the $j$th feature of the $N_{\omega_i}$ samples/compounds from class $\omega_i$. As the samples are assumed independent, $f(x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}} \mid \mu_{ij})$ becomes $\prod_{k=1}^{N_{\omega_i}} f(x_{jk} \mid \mu_{ij}) = \prod_{k=1}^{N_{\omega_i}} \mu_{ij}^{x_{jk}} (1 - \mu_{ij})^{1 - x_{jk}}$, i.e.

$$\prod_{k=1}^{N_{\omega_i}} f(x_{jk} \mid \mu_{ij}) = \mu_{ij}^{\sum_{k=1}^{N_{\omega_i}} x_{jk}} \,(1 - \mu_{ij})^{N_{\omega_i} - \sum_{k=1}^{N_{\omega_i}} x_{jk}}$$
(30)

Thus, the posterior $\pi(\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}})$ in Eq. 29 modifies to

$$\pi(\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}) = \frac{\mu_{ij}^{\sum_{k=1}^{N_{\omega_i}} x_{jk}} \,(1 - \mu_{ij})^{N_{\omega_i} - \sum_{k=1}^{N_{\omega_i}} x_{jk}}\, \pi(\mu_{ij})}{\int_{0}^{1} \mu_{ij}^{\sum_{k=1}^{N_{\omega_i}} x_{jk}} \,(1 - \mu_{ij})^{N_{\omega_i} - \sum_{k=1}^{N_{\omega_i}} x_{jk}}\, \pi(\mu_{ij})\, d\mu_{ij}}$$
(31)

i.e.,

$$\pi(\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}) \propto \mu_{ij}^{\sum_{k=1}^{N_{\omega_i}} x_{jk}} \,(1 - \mu_{ij})^{N_{\omega_i} - \sum_{k=1}^{N_{\omega_i}} x_{jk}}\, \pi(\mu_{ij})$$
(32)
$$= \mu_{ij}^{\sum_{k=1}^{N_{\omega_i}} x_{jk}} \,(1 - \mu_{ij})^{N_{\omega_i} - \sum_{k=1}^{N_{\omega_i}} x_{jk}} \times \mu_{ij}^{\alpha_i - 1} (1 - \mu_{ij})^{\beta_i - 1}$$
(33)
$$= \mu_{ij}^{N_{ij} + \alpha_i - 1} \,(1 - \mu_{ij})^{N_{\omega_i} - N_{ij} + \beta_i - 1}$$
(34)

Clearly, in Eq. 34 (where we have used $N_{ij} = \sum_{k=1}^{N_{\omega_i}} x_{jk}$), the posterior density for $\mu_{ij}$ given the samples $x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}$ has the same form as the prior for $\mu_{ij}$ [11], i.e.,

$$\pi(\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}) = \frac{1}{B(N_{ij} + \alpha_i,\, N_{\omega_i} - N_{ij} + \beta_i)}\, \mu_{ij}^{N_{ij} + \alpha_i - 1} \,(1 - \mu_{ij})^{N_{\omega_i} - N_{ij} + \beta_i - 1}$$
(35)

which is none other than another Beta distribution. This means that the Bayes estimator of $\mu_{ij}$, which is the estimate we are interested in, is the mean of the obtained posterior distribution [11]:

$$E[\mu_{ij} \mid x_{j1}, x_{j2}, \ldots, x_{jN_{\omega_i}}] = \frac{N_{ij} + \alpha_i}{N_{\omega_i} + \alpha_i + \beta_i}$$
(36)

In other words,

$$p(x_j = 1 \mid \omega_i) = \frac{N_{ij} + \alpha_i}{N_{\omega_i} + \alpha_i + \beta_i}$$
(37)

QED.

An accessible description of the derivation of Eq. 37 can be found in ref. [10].
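
The conjugacy argument above can also be checked numerically: with a $\mathrm{Beta}(\alpha_i, \beta_i)$ prior and $N_{ij}$ “successes” out of $N_{\omega_i}$ Bernoulli trials, the posterior is $\mathrm{Beta}(N_{ij} + \alpha_i,\, N_{\omega_i} - N_{ij} + \beta_i)$ and its mean matches Eq. 37. A quick sketch with made-up counts, using scipy.stats:

```python
from scipy.stats import beta

alpha_i, beta_i = 1.0, 1.0        # Laplacian-style Beta prior
N_omega_i, N_ij = 50, 18          # illustrative counts

posterior = beta(N_ij + alpha_i, N_omega_i - N_ij + beta_i)    # the Beta posterior of Eq. 35
print(posterior.mean())                                         # posterior mean, Eq. 36
print((N_ij + alpha_i) / (N_omega_i + alpha_i + beta_i))        # Eq. 37 -- identical value
```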

References

  1. Murphy KP: Machine Learning: A Probabilistic Perspective. 2012, Cambridge, MA: MIT Press

  2. Xia X, Maliski EG, Gallant P, Rogers D: Classification of kinase inhibitors using a Bayesian model. J Med Chem. 2004, 47: 4463-4470. 10.1021/jm0303195.

  3. Glick M, Davies JW, Jenkins JL, Nidhi: Prediction of biological targets for compounds using multiple-category Bayesian models trained on chemogenomics databases. J Chem Inf Model. 2006, 46: 1124-1133. 10.1021/ci060003g.

  4. Nigsch F, Bender A, Jenkins JL, Mitchell JBO: Ligand-target prediction using winnow and naive Bayesian algorithms and the implications of overall performance statistics. J Chem Inf Model. 2008, 48: 2313-2325. 10.1021/ci800079x.

  5. Rogers D, Brown RD, Hahn M: Using extended–connectivity fingerprints with Laplacian-modified Bayesian analysis in high–throughput screening follow–up. J Biomol Screen. 2005, 10: 682-686. 10.1177/1087057105281365.

  6. Townsend JA, Glen RC, Mussa HY: Note on naive Bayes based on binary descriptors in Cheminformatics. J Chem Inf Model. 2012, 52: 2494-2500. 10.1021/ci200303m.

  7. Duda RO, Hart PE: Pattern Classification and Scene Analysis. 1973, New York, NY: John Wiley & Sons, Ltd

  8. Koch RK: Introduction to Bayesian Statistics. 2007, Berlin: Springer

  9. Bishop CM: Pattern Recognition and Machine Learning. 2006, New York: Springer

  10. Ross SM: Introduction to Probability and Statistics for Engineers and Scientists. 1987, New York: John Wiley & Sons

  11. Davison AC: Statistical Models (Cambridge Series in Statistical and Probabilistic Mathematics). 2008, Cambridge: Cambridge University Press

Acknowledgements

We are indebted to Dr Dave Rogers for his many useful comments on the original LCMNB approach, in particular for helping us understand more about the two–class LCMNB version.

Mussa and Glen would like to thank the Unilever Centre for Molecular Sciences Informatics for its support, whereas Mitchell would like to thank the Scottish Universities Life Sciences Alliance (SULSA).

Author information

Corresponding author

Correspondence to Hamse Y Mussa.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

HYM conceived the idea that constitutes the nub of the presented work. This author also carried out the bulk of the mathematical derivations. RCG contributed to the Bayesian aspect of the work. JBOM conceptually contributed to the derivation given in Appendix A. The three authors participated in drafting the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Mussa, H.Y., Mitchell, J.B. & Glen, R.C. Full “Laplacianised” posterior naive Bayesian algorithm. J Cheminform 5, 37 (2013). https://doi.org/10.1186/1758-2946-5-37
