Bayesian Network in Artificial Intelligence | Examples | Full Tutorial

What is a Bayesian Network in Artificial Intelligence?

In the previous section, we described certainty factors (CFs) as a mechanism for reducing the complexity of a Bayesian reasoning system by making some approximations to the formalism. In this section, we describe an alternative approach, Bayesian networks (Pearl, 1988), in which we preserve the formalism and rely instead on the modularity of the world we are trying to model. The main idea is that to describe the real world, it is not necessary to use a huge joint probability table listing the probabilities of all conceivable combinations of events. Most events are conditionally independent of most others, so their interactions need not be considered. Instead, we use a more local representation that describes only the clusters of events that interact.
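To make the savings concrete, here is a minimal Python sketch comparing the size of a full joint table over four Boolean variables with the number of entries needed when the distribution is factored by conditional independence. The variable names (RainySeason, Rain, Sprinkler, WetGrass, from the example discussed below) and the parent structure are illustrative assumptions, not values given in the text.

```python
# Minimal sketch: full joint table versus a factored (Bayesian network)
# representation of four Boolean variables.

n_vars = 4
full_joint_entries = 2 ** n_vars      # one entry per combination of values
print("Full joint table entries:", full_joint_entries)   # 16

# Factored form:
#   P(RainySeason) * P(Rain|RainySeason) * P(Sprinkler|RainySeason)
#   * P(WetGrass|Rain, Sprinkler)
# Each local table stores P(node = True) for every combination of its parents.
parents = {"RainySeason": [], "Rain": ["RainySeason"],
           "Sprinkler": ["RainySeason"], "WetGrass": ["Rain", "Sprinkler"]}
factored_entries = sum(2 ** len(p) for p in parents.values())
print("Factored representation entries:", factored_entries)  # 1 + 2 + 2 + 4 = 9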

Recall that we have already used a network notation to describe the various kinds of constraints that propositions can place on each other's likelihoods. The idea of constraint networks turns out to be very powerful. We expand on it in this section as a way of representing interactions among events; we also return to it later, in Sections 11.3.1 and 14.3, where we talk about other ways of representing knowledge as sets of constraints.

Representing Causality Uniformly 

Let’s return to the example of the sprinkler, the rain, and the wet grass that we presented in the last section. The figure below shows the flow of constraints that we described with MYCIN-style rules.

[Figure: Flow of constraints among sprinkler, rain, and wet grass, as described by MYCIN-style rules]

But recall that the problem we encountered with that example was that the constraints flowed incorrectly from "sprinkler was on" to "it rained last night." The problem was that we had no way to make a distinction that turned out to be critical: there are two different ways that propositions can influence each other's likelihood. The first is that causes influence the likelihood of their symptoms; the second is that observing a symptom affects the likelihood of all of its possible causes. The idea behind the Bayesian network structure is to make a clear distinction between these two kinds of influence.

Specifically, we construct a directed acyclic graph (DAG) that represents causal relationships among variables. The idea of a causality graph (or network) has proved very useful in several systems, particularly in medical diagnosis systems such as CASNET (Weiss et al., 1978) and INTERNIST/CADUCEUS (Pople, 1982).

The variables in such a graph may be propositional (in which case they take the values TRUE and FALSE), or they may be variables that take values of some other type (e.g., a specific disease, a body temperature, or a reading taken by some diagnostic device). In Figure 8.2(b), we show a causality graph for the wet grass example. In addition to the three nodes we have been talking about, the graph contains a new node corresponding to the propositional variable that tells us whether it is currently the rainy season.
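As a concrete illustration, here is a minimal Python sketch of that four-node network (RainySeason, Rain, Sprinkler, WetGrass). The structure matches the discussion above; the numeric probabilities are made-up placeholders, not values from the text.

```python
# A minimal sketch of the wet-grass Bayesian network.
# Structure: RainySeason -> Rain, RainySeason -> Sprinkler,
#            Rain -> WetGrass, Sprinkler -> WetGrass.
# All probability values below are illustrative placeholders.

P_SEASON = 0.4                                      # P(RainySeason = True)
P_RAIN = {True: 0.7, False: 0.05}                   # P(Rain = True | RainySeason)
P_SPRINKLER = {True: 0.05, False: 0.4}              # P(Sprinkler = True | RainySeason)
P_WET = {(True, True): 0.99, (True, False): 0.9,    # P(WetGrass = True | Rain, Sprinkler)
         (False, True): 0.9, (False, False): 0.02}

def joint(season, rain, sprinkler, wet):
    """P(season, rain, sprinkler, wet) via the chain rule over the DAG."""
    p = P_SEASON if season else 1 - P_SEASON
    p *= P_RAIN[season] if rain else 1 - P_RAIN[season]
    p *= P_SPRINKLER[season] if sprinkler else 1 - P_SPRINKLER[season]
    p *= P_WET[(rain, sprinkler)] if wet else 1 - P_WET[(rain, sprinkler)]
    return p

# Example: probability that it is the rainy season, it rained, the
# sprinkler was off, and the grass is wet.
print(joint(True, True, False, True))
```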

To be useful as a basis for problem solving, we need a mechanism for computing the influence of any arbitrary node on any other. For example, suppose we have observed that it rained last night. What does that tell us about the probability that it is the rainy season?

Conditional Probability in a Bayesian Network


To answer this question, the initial DAG must be converted into an undirected graph in which the arcs can be used to transmit probabilities in either direction, radiating out from wherever the evidence is observed. We also require a mechanism for using the graph that guarantees that probabilities are transmitted correctly. For example, while it is true that observing wet grass may be evidence for rain, and observing rain is evidence for wet grass, we must ensure that no cycle is ever traversed in such a way that wet grass is taken as evidence for rain, which is then taken as evidence for wet grass, and so on.
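As a rough illustration of evidence flowing against the direction of the arcs, the sketch below answers the question posed above (the probability that it is the rainy season, given that it rained) by brute-force enumeration over the joint distribution. It reuses the illustrative placeholder network from the earlier sketch; it is not one of the propagation algorithms discussed below, just the definition of conditional probability applied directly.

```python
from itertools import product

# Illustrative CPTs (same placeholders as the earlier sketch).
P_SEASON = 0.4
P_RAIN = {True: 0.7, False: 0.05}
P_SPRINKLER = {True: 0.05, False: 0.4}
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.02}

def joint(season, rain, sprinkler, wet):
    p = P_SEASON if season else 1 - P_SEASON
    p *= P_RAIN[season] if rain else 1 - P_RAIN[season]
    p *= P_SPRINKLER[season] if sprinkler else 1 - P_SPRINKLER[season]
    p *= P_WET[(rain, sprinkler)] if wet else 1 - P_WET[(rain, sprinkler)]
    return p

# P(RainySeason = True | Rain = True): sum the joint over the unobserved
# variables (Sprinkler, WetGrass) and normalize.
num = sum(joint(True, True, s, w) for s, w in product([True, False], repeat=2))
den = sum(joint(se, True, s, w)
          for se, s, w in product([True, False], repeat=3))
print("P(RainySeason | Rain) =", num / den)
```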



There are three broad classes of algorithms for performing these computations: a message-passing method (Pearl, 1988), a clique triangulation method (Lauritzen and Spiegelhalter, 1988), and a family of stochastic algorithms. The idea behind all of these methods is to exploit the fact that nodes have limited domains of influence. Thus, although in principle the task of updating probabilities consistently throughout the network is intractable, in practice it may not be. In the clique triangulation method, for example, explicit arcs are introduced between pairs of nodes that share a common descendant.
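The step of adding an arc between nodes that share a common child is often called moralization. Below is a small sketch of that step alone; it does not implement the full Lauritzen-Spiegelhalter triangulation or clique-tree construction, and the graph encoding is an assumption made for illustration.

```python
from itertools import combinations

# Parents of each node in the wet-grass DAG (illustrative encoding).
parents = {
    "RainySeason": [],
    "Rain": ["RainySeason"],
    "Sprinkler": ["RainySeason"],
    "WetGrass": ["Rain", "Sprinkler"],
}

def moralize(parents):
    """Return the undirected edges of the moral graph: drop arc directions
    and connect ("marry") every pair of parents that share a child."""
    edges = set()
    for child, ps in parents.items():
        for p in ps:                           # original arcs, now undirected
            edges.add(frozenset((p, child)))
        for a, b in combinations(ps, 2):       # marry co-parents
            edges.add(frozenset((a, b)))
    return edges

for edge in moralize(parents):
    print(sorted(edge))
# The pair ['Rain', 'Sprinkler'] appears because both are parents of WetGrass.
```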

For the case shown in Figure 8.2(b), a link between Sprinkler and Rain would be introduced. This explicit link supports assessing the impact of observing Sprinkler on the hypothesis Rain. This matters because wet grass could be evidence for either cause, but wet grass together with one of its causes is not evidence for the competing cause, since there is already an alternative explanation for the observed phenomenon.
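This "explaining away" effect can be checked numerically with the same brute-force enumeration used earlier (again with placeholder probabilities and a hypothetical helper, prob_rain): with these numbers, wet grass alone raises the probability of rain, but learning that the sprinkler was on lowers it again.

```python
from itertools import product

# Illustrative CPTs (placeholders, as in the earlier sketches).
P_SEASON = 0.4
P_RAIN = {True: 0.7, False: 0.05}
P_SPRINKLER = {True: 0.05, False: 0.4}
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.02}

def joint(season, rain, sprinkler, wet):
    p = P_SEASON if season else 1 - P_SEASON
    p *= P_RAIN[season] if rain else 1 - P_RAIN[season]
    p *= P_SPRINKLER[season] if sprinkler else 1 - P_SPRINKLER[season]
    p *= P_WET[(rain, sprinkler)] if wet else 1 - P_WET[(rain, sprinkler)]
    return p

def prob_rain(evidence):
    """P(Rain = True | evidence), where evidence fixes some variables."""
    def matches(assign):
        return all(assign[k] == v for k, v in evidence.items())
    states = [dict(zip(("season", "rain", "sprinkler", "wet"), vals))
              for vals in product([True, False], repeat=4)]
    den = sum(joint(**s) for s in states if matches(s))
    num = sum(joint(**s) for s in states if matches(s) and s["rain"])
    return num / den

print(prob_rain({"wet": True}))                       # wet grass alone
print(prob_rain({"wet": True, "sprinkler": True}))    # sprinkler explains it away
```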


The message-passing approach is based on the observation that to compute the probability of a node A, given what is known about the other nodes in the network, it is necessary to know three things (combined in the sketch after this list):

  • π: the total support arriving at A from its parent nodes (which represent its causes).
  • λ: the total support arriving at A from its children (which represent its symptoms).
  • The entry in the fixed conditional probability matrix that relates A to its causes.
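To show how these three pieces combine at a single node, here is a rough sketch of the local update in Pearl's scheme: the belief in A is proportional to the product of its λ support and its π support, where the π support is obtained by pushing the parent's message through A's conditional probability matrix. This covers only the update at one node, with made-up numbers, and omits the messages A would in turn send to its neighbors.

```python
# Local belief update at a node A with one Boolean parent (Pearl-style).
# Index 0 = False, index 1 = True. All numbers are placeholders.

pi_parent = [0.3, 0.7]          # π message from A's parent (its cause)
lam_children = [0.9, 0.2]       # λ support reaching A from its children (symptoms)

# Fixed conditional probability matrix P(A | parent):
# rows indexed by the parent's value, columns by A's value.
cpt = [[0.8, 0.2],              # P(A=False|parent=False), P(A=True|parent=False)
       [0.1, 0.9]]              # P(A=False|parent=True),  P(A=True|parent=True)

# π(A): predictive support, the parent's message pushed through the CPT.
pi_A = [sum(pi_parent[u] * cpt[u][a] for u in range(2)) for a in range(2)]

# BEL(A) is proportional to λ(A) * π(A); normalize to get probabilities.
unnorm = [lam_children[a] * pi_A[a] for a in range(2)]
bel = [x / sum(unnorm) for x in unnorm]
print("BEL(A=False), BEL(A=True):", bel)
```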

Several methods have been developed for propagating π and λ messages and updating the probabilities at the nodes. The structure of the network determines which approach can be used. For example, in singly connected networks (those in which there is only one path between any pair of nodes), a simpler algorithm can be used than in the multiply connected case. For more details, see Pearl (1988).

Finally, there are stochastic, or randomized, algorithms for updating belief networks. One such algorithm (Chavez, 1989) transforms an arbitrary network into a Markov chain. The idea is to probabilistically insulate a given node from most of the other nodes in the network. Stochastic algorithms run quickly in practice, but they may not produce exactly correct results.
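As an illustration of the stochastic approach, the sketch below runs a simple Gibbs sampler over the wet-grass network: starting from an arbitrary assignment consistent with the evidence, it repeatedly resamples each unobserved variable given the current values of the others, which is exactly a Markov chain over network states. This is a generic stochastic-simulation sketch using the same placeholder probabilities as before, not a reconstruction of Chavez's specific algorithm; its estimates improve with more samples but are never guaranteed to be exact.

```python
import random
from itertools import product

# Illustrative CPTs (placeholders, as before).
P_SEASON = 0.4
P_RAIN = {True: 0.7, False: 0.05}
P_SPRINKLER = {True: 0.05, False: 0.4}
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.02}

def joint(state):
    season, rain, sprinkler, wet = (state[v] for v in
                                    ("season", "rain", "sprinkler", "wet"))
    p = P_SEASON if season else 1 - P_SEASON
    p *= P_RAIN[season] if rain else 1 - P_RAIN[season]
    p *= P_SPRINKLER[season] if sprinkler else 1 - P_SPRINKLER[season]
    p *= P_WET[(rain, sprinkler)] if wet else 1 - P_WET[(rain, sprinkler)]
    return p

def gibbs_prob_rain(evidence, n_samples=20000, seed=0):
    """Estimate P(Rain = True | evidence) by Gibbs sampling."""
    rng = random.Random(seed)
    hidden = [v for v in ("season", "rain", "sprinkler", "wet") if v not in evidence]
    state = dict(evidence, **{v: rng.random() < 0.5 for v in hidden})
    hits = 0
    for _ in range(n_samples):
        for v in hidden:
            # Resample v from P(v | all other variables) using the joint.
            p_true = joint(dict(state, **{v: True}))
            p_false = joint(dict(state, **{v: False}))
            state[v] = rng.random() < p_true / (p_true + p_false)
        hits += state["rain"]
    return hits / n_samples

print(gibbs_prob_rain({"wet": True}))   # compare with the exact value computed above
```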
