1 Introduction

In reality, decisions are usually made under indeterminacy, which means that outcomes cannot be predicted exactly in advance. Two mathematical systems are frequently used to deal with this phenomenon: probability theory, founded by Kolmogorov in 1933, and uncertainty theory, founded by Liu (2007). When no samples are available to estimate a probability distribution, uncertainty theory is applied to evaluate the belief degree that each event will happen. Although applying uncertainty theory to graph theory is not an entirely new research technique in mathematics, it is striking how often ideas from uncertainty theory prove useful in tackling extremal problems in graph theory.

Graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. When investigating graphs with a certain property, there are normally two approaches. One is to obtain exact formulas or structural conditions by combinatorial enumeration or structural analysis; this is the traditional deterministic method. The other is to estimate the belief degree of the event that a graph has this property. In 1959, Erdos and Renyi (1959) proposed the model of random graphs, in which appropriate probability distributions and probabilistic ideas were used to approximate a variety of exact values.

However, probability theory is not suitable for every indeterministic phenomenon. In particular, when sufficient samples or data are unavailable, it is impossible to determine the distribution functions of random variables. For example, during the coronavirus outbreak in early 2020, researchers used networks and graphs to estimate outbreak sizes. Since the virus was novel, the only data they could use came from medical institutions and experts’ experience; these data were highly inaccurate and subjective, so probability theory was not well suited to characterizing the indeterministic factors in the models. When probability theory is unsuitable for an indeterministic phenomenon, researchers have tried to apply other theories. For example, some researchers applied fuzzy theory (Zadeh 1965) to graphs and proposed the model of fuzzy graphs (Rosenfeld 1975). However, as fuzzy theory lacks the property of self-duality, misunderstandings and contradictions may arise in the application of fuzzy graphs.

So when sufficient samples or data are unavailable, uncertainty theory can be applied to evaluate the belief degrees of indeterministic factors. Gao and Gao (2013) proposed the model of uncertain graphs via uncertainty theory, and various parameters of uncertain graphs have since been discussed. The Euler index was discussed by Zhang and Peng (2012), and the connectivity index by Gao and Gao (2013). Later, Gao discussed the cycle index (Gao 2013), the regularity index (Gao 2014) and the tree index (Gao 2016). Gao et al. (2019) investigated the \(\alpha \)-connectedness index, Li et al. (2018) discussed the matching number, and Rosyida et al. (2018) discussed the uncertain chromatic number. In addition, Zhou et al. (2014a) investigated the inverse shortest path problem in an uncertain graph, Gao et al. (2015) investigated the distribution function of the diameter of an uncertain graph, and Gao and Qin (2016) presented algorithms for calculating the edge-connectivity.

In many cases, uncertainty and randomness appear simultaneously in one complex system. To deal with this kind of system, Liu (2013b) proposed chance theory, with the concepts of uncertain random variable, chance measure and chance distribution. Liu (2014) introduced the models of uncertain random graphs and uncertain random networks via chance theory. In an uncertain random graph, some edges exist with degrees in probability measure and other edges exist with degrees in uncertain measure. Liu (2014) discussed the connectivity index of an uncertain random graph, Zhang et al. (2017) discussed the Euler index, and Chen et al. (2018) discussed the cycle index.

Among all properties and structures of graphs, connectivity is the most fundamental one. The measure of the event that an uncertain graph or an uncertain random graph is connected is called its connectivity index. Gao and Gao (2013) determined the connectivity index of an uncertain graph, and Liu (2014) determined this index for an uncertain random graph, which generalizes the previous result. In this paper, we will propose a method to evaluate the contributions of edges to the connectivity of an uncertain random graph and will define concepts of the significance of edges for connectivity. Edges will be classified by their significance. An algorithm for calculating the significance of edges and some related algorithms will also be presented.

The remainder of the paper is organized as follows. In Sect. 2, we will give brief summaries of chance theory and the model of uncertain random graphs. Some necessary definitions and notations of graph theory will also be presented. In Sect. 3, we will propose the concepts of the significance of edges for connectivity and will classify edges into different categories. In Sect. 4, algorithms will be presented, and examples will be given to illustrate the method. The last section will be a brief summary.

2 Preliminary

In this section, we will introduce some preliminary knowledge about chance theory, graph theory and the model of uncertain random graph.

2.1 Chance theory

In many cases, uncertainty and randomness both appear in one complex system. To handle such complex systems, Liu (2013b) proposed chance theory, which was soon applied to many optimization problems. Interested readers may refer to Ke et al. (2015), Liu and Ralescu (2014), Qin (2018), Wen and Kang (2016) and Zhou et al. (2014b).

Let \((\varGamma , \mathcal {L}, \mathcal {M})\) and \((\varOmega , \mathcal {A}, \text{ Pr})\) be an uncertainty space and a probability space, respectively. Then the product \((\varGamma , \mathcal {L}, \mathcal {M})\times (\varOmega , \mathcal {A}, \text{ Pr})\) is called a chance space. Elements in \(\mathcal {L}\times \mathcal {A}\) are called events in the chance space. For each event \(\varTheta \), its chance measure was defined by Liu (2013b) as

$$\begin{aligned} \text{ Ch }\{\varTheta \}=\int _0^1\text{ Pr }\{\omega \in \varOmega | \mathcal {M}\{\gamma \in \varGamma | (\gamma , \omega )\in \varTheta \}\ge x\}dx. \end{aligned}$$

An uncertain random variable was defined by Liu (2013b) as a function \(\xi \) from a chance space \((\varGamma , \mathcal {L}, \mathcal {M})\times (\varOmega , \mathcal {A}, \text{ Pr})\) to the set of real numbers such that \(\{\xi \in B\}\) is an event in \(\mathcal {L}\times \mathcal {A}\) for any Borel set B. An uncertain random variable is called a Boolean uncertain random variable if it takes values 0 or 1. Similarly, an uncertain variable is called a Boolean uncertain variable if it takes values 0 or 1. A function with n variables is called a Boolean function if it is a mapping from \(\{0,1\}^n\) to \(\{0,1\}\).

Theorem 1

Liu (2013a) Assume that \(\eta _1, \eta _2, \ldots , \eta _m\) are independent Boolean random variables, i.e.,

$$\begin{aligned} \eta _i=\left\{ \begin{array}{ll} 1\qquad \text{ with } \text{ probability } \text{ measure } a_i\\ 0\qquad \text{ with } \text{ probability } \text{ measure } 1-a_i \end{array}\right. \end{aligned}$$

for \(i=1,2,\ldots , m\), and \(\tau _1, \tau _2, \ldots , \tau _n\) are independent Boolean uncertain variables, i.e.,

$$\begin{aligned} \tau _j=\left\{ \begin{array}{ll} 1\qquad \text{ with } \text{ uncertain } \text{ measure } b_j\\ 0\qquad \text{ with } \text{ uncertain } \text{ measure } 1-b_j \end{array}\right. \end{aligned}$$

for \(j=1,2,\ldots , n\). When f is a Boolean function, \(\xi =f(\eta _1, \eta _2, \ldots , \eta _m,\tau _1, \tau _2, \ldots , \tau _n)\) is a Boolean uncertain random variable such that

$$\begin{aligned} \text{ Ch }\{\xi =1\}=\displaystyle \sum _{(x_1, \ldots , x_m)\in \{0,1\}^m} \Big (\prod _{i=1}^m\mu _i(x_i)\Big )f^{*}(x_1, \ldots , x_m), \end{aligned}$$

where

$$\begin{aligned}&f^{*}(x_1, \ldots , x_m)\\&\quad = \left\{ \begin{array}{ll} \displaystyle \sup _{f(x_1,\ldots , x_m, y_1, \ldots , y_n)=1} \min _{1\le j \le n}v_j(y_j),\\ \qquad \qquad \text{ if } \displaystyle \sup _{f(x_1,\ldots , x_m, y_1, \ldots , y_n)=1} \min _{1\le j \le n}v_j(y_j)<0.5\\ 1-\displaystyle \sup _{f(x_1,\ldots , x_m, y_1, \ldots , y_n)=0} \min _{1\le j \le n}v_j(y_j),\\ \qquad \qquad \text{ if } \displaystyle \sup _{f(x_1,\ldots , x_m, y_1, \ldots , y_n)=1} \min _{1\le j \le n}v_j(y_j)\ge 0.5, \end{array}\right. \\&\quad \mu _i(x_i)=\left\{ \begin{array}{ll} a_i&{}\quad \text{ if } x_i=1\\ 1-a_i&{}\quad \text{ if } x_i=0 \end{array}\right. (i=1, 2, \ldots , m), \\&\quad v_j(y_j)=\left\{ \begin{array}{ll} b_j&{}\quad \text{ if } y_j=1\\ 1-b_j\qquad &{}\quad \text{ if } y_j=0 \end{array}\right. (j=1, 2, \ldots , n). \end{aligned}$$
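The formula of Theorem 1 can be evaluated by direct enumeration of the Boolean vectors. Below is a minimal Python sketch; the function name `chance_boolean` and the example Boolean function are our own illustration, not part of the original theory.

```python
from itertools import product

def chance_boolean(f, a, b):
    """Ch{f(eta, tau) = 1} via the formula of Theorem 1.
    a: probabilities a_i of the Boolean random variables eta_i;
    b: uncertain measures b_j of the Boolean uncertain variables tau_j."""
    m, n = len(a), len(b)
    total = 0.0
    for xs in product([0, 1], repeat=m):
        # product of mu_i(x_i): probability weight of this random realization
        w = 1.0
        for a_i, x_i in zip(a, xs):
            w *= a_i if x_i else 1 - a_i
        # f*(x): sup-min over realizations of the uncertain variables
        sup1 = sup0 = 0.0
        for ys in product([0, 1], repeat=n):
            v = min((b_j if y_j else 1 - b_j)
                    for b_j, y_j in zip(b, ys)) if n else 1.0
            if f(xs, ys):
                sup1 = max(sup1, v)
            else:
                sup0 = max(sup0, v)
        total += w * (sup1 if sup1 < 0.5 else 1 - sup0)
    return total
```

For instance, for \(\xi =\eta _1\wedge (\tau _1\vee \tau _2)\) with \(a_1=0.5\), \(b_1=0.3\), \(b_2=0.4\), the inner sup-min gives \(f^{*}=0.4\) when \(\eta _1=1\), so \(\text{ Ch }\{\xi =1\}=0.5\times 0.4=0.2\).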

2.2 Notations of graphs

Graphs in this paper are finite simple graphs, that is, graphs without multiple edges or loops. For terms and notations not defined here, we refer the reader to Bondy and Murty (2008).

A graph G is an ordered pair (V, E), where V is the set of vertices and E is the set of edges. Without loss of generality, we assume that \(V=\{v_1,v_2,\ldots , v_n\}\). Two distinct vertices \(v_i\) and \(v_j\) are called adjacent if there is an edge e joining \(v_i\) and \(v_j\); then \(v_i\) and \(v_j\) are called the endpoints of e. We also use \((v_i,v_j)\) to represent this edge. The adjacency matrix of G, denoted by A(G), is the \(n\times n\) matrix

$$\begin{aligned} \left( \begin{array}{cccc} a_{11}&{} \quad a_{12}&{} \quad \cdots &{} \quad a_{1n}\\ a_{21}&{} \quad a_{22}&{} \quad \cdots &{} \quad a_{2n}\\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ a_{n1}&{} \quad a_{n2}&{} \quad \cdots &{} \quad a_{nn} \end{array}\right) , \end{aligned}$$

where

$$\begin{aligned} a_{ij}= \left\{ \begin{array}{ll} 1,&{} \quad \text{ if } (v_i,v_j)\in E(G)\\ 0,&{} \quad \text{ otherwise. } \end{array}\right. \end{aligned}$$

As G is a simple graph, A(G) is a symmetric matrix with \(a_{ii}=0\) for \(i=1,2,\ldots , n\).

A walk is a sequence \(v_1e_1v_2e_2v_3\ldots v_ke_kv_{k+1}\) such that \(e_i=(v_i,v_{i+1})\) for \(i=1,2,\ldots ,k\). A walk is called a path if \(v_1,v_2,\ldots , v_{k+1}\) are distinct. A graph is called connected if for every pair of distinct vertices, there is a path linking them. The following is a well-known result in graph theory.

Theorem 2

Let G be a simple graph. Then G is connected if and only if \(I +A(G)+A(G)^2+\cdots + A(G)^{n-1} >0\), where I is the identity matrix.
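Theorem 2 translates directly into a computational test. A minimal sketch in pure Python (the function name `is_connected` is ours), using only list arithmetic:

```python
def is_connected(A):
    """Check connectivity via Theorem 2: G is connected iff every entry
    of I + A + A^2 + ... + A^(n-1) is positive."""
    n = len(A)
    # S accumulates I + A + ...; P holds the successive powers of A
    S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    P = [row[:] for row in S]
    for _ in range(n - 1):
        P = [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return all(S[i][j] > 0 for i in range(n) for j in range(n))
```

For example, the path on three vertices is connected, while a graph with an isolated vertex is not.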

2.3 Uncertain random graphs

In classical graph theory, the edges and vertices of graphs are deterministic. In practical problems, however, indeterminate factors often appear. So when graphs are applied to these problems, it is reasonable to assume that some edges exist with certain degrees, which may be in probability measure or in uncertain measure. Liu (2014) introduced the model of uncertain random graphs. In an uncertain random graph, all edges are independent, and some edges exist with degrees in probability measure, while other edges exist with degrees in uncertain measure.

A graph is of order n if it has n vertices. Without loss of generality, in the rest of this paper, we assume that graphs are always of order n. Let \(\mathcal {V}\) be a set of n vertices. We assume that \(\mathcal {V}=\{1,2, \ldots , n\}\). So there are \(n(n-1)/2\) possible edges between them. We define two disjoint collections of edges,

$$\begin{aligned} \mathcal {U}= & {} \{(i,j)\,|\,1\le i<j\le n \text{ and } (i,j) \text{ are } \text{ uncertain } \text{ edges }\},\\ \mathcal {R}= & {} \{(i,j)\,|\,1\le i<j\le n \text{ and } (i,j) \text{ are } \text{ random } \text{ edges }\},\\ \end{aligned}$$

with \(\mathcal {U}\cup \mathcal {R}=\{(i,j)\,|\,1\le i<j\le n\}\). Note that deterministic edges are regarded as special uncertain ones.

The uncertain random adjacency matrix is an \(n\times n\) matrix

$$\begin{aligned} \mathcal {A}=\left( \begin{array}{cccc} \alpha _{11}&{} \quad \alpha _{12}&{} \quad \cdots &{} \quad \alpha _{1n}\\ \alpha _{21}&{} \quad \alpha _{22}&{} \quad \cdots &{} \quad \alpha _{2n}\\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ \alpha _{n1}&{} \quad \alpha _{n2}&{} \quad \cdots &{} \quad \alpha _{nn} \end{array}\right) , \end{aligned}$$

where \(\alpha _{ij}\) represents the truth value, in uncertain measure or probability measure, that the edge between vertices i and j exists, \(i,j=1,2,\ldots ,n\). As the graphs considered in this paper are simple, \(\mathcal {A}\) is a symmetric matrix with \(\alpha _{ii}=0\) for \(i=1,2,\ldots ,n\).

Definition 1

(Liu (2014)) Assume \(\mathcal {V}\) is the collection of vertices, \(\mathcal {U}\) is the collection of uncertain edges, \(\mathcal {R}\) is the collection of random edges, and \(\mathcal {A}\) is the uncertain random adjacency matrix. Then the quadruple \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) is said to be an uncertain random graph.

For an uncertain random graph \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\), write

$$\begin{aligned} X=\left( \begin{array}{cccc} x_{11}&{} \quad x_{12}&{} \quad \cdots &{} \quad x_{1n}\\ x_{21}&{} \quad x_{22}&{} \quad \cdots &{} \quad x_{2n}\\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ x_{n1}&{} \quad x_{n2}&{} \quad \cdots &{} \quad x_{nn} \end{array}\right) \end{aligned}$$

and

$$\begin{aligned} \mathbb {X}=\left\{ X\,\Big |\!\begin{array}{l}x_{ij}=0 \text{ or } 1,\, \text{ if } (i,j)\in \mathcal {R}\\ x_{ij}=0,\, \text{ if } (i,j)\in \mathcal {U}\\ x_{ij}=x_{ji},\, i,j=1,2,\ldots ,n\\ x_{ii}=0,\, i=1,2,\ldots ,n \end{array}\!\!\right\} . \end{aligned}$$
(1)

For any \(X\in \mathbb {X}\), the extension class of X is defined by

$$\begin{aligned} X^*=\left\{ Y\,\Big |\!\begin{array}{l}y_{ij}=x_{ij},\, \text{ if } (i,j)\in \mathcal {R}\\ y_{ij}=0 \text{ or } 1,\, \text{ if } (i,j)\in \mathcal {U}\\ y_{ij}=y_{ji},\, i,j=1,2,\ldots ,n\\ y_{ii}=0,\, i=1,2,\ldots ,n \end{array}\!\!\right\} . \end{aligned}$$
(2)

As there are \(\frac{n(n-1)}{2}\) possible edges in \(\mathbb {G}\), there are \(\displaystyle 2^{\frac{n(n-1)}{2}}\) possible realizations of the edges. Each realization can be represented by a simple graph, which is called a realization graph. Let H be a realization graph with adjacency matrix Y. Then there exists \(X\in \mathbb {X}\) such that \(Y\in X^*\). The chance measure of the event that the realization graph H appears is

$$\begin{aligned} \displaystyle \left( \prod _{(i,j)\in \mathcal {R}}\nu _{ij}(Y)\right) \left( \min _{(i,j)\in \mathcal {U}}\nu _{ij}(Y)\right) , \end{aligned}$$

where

$$\begin{aligned} \nu _{ij}(Y)=\left\{ \begin{array}{cl}\alpha _{ij},&{}\text{ if } y_{ij}=1\\ 1-\alpha _{ij},&{}\text{ if } y_{ij}=0. \end{array}\right. \end{aligned}$$
Fig. 1 Uncertain random graph \(\mathbb {G}\) with all realization graphs

Example 1

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph (shown in Fig. 1), where \(\mathcal {V}=\{1,2,3\}\), \(\mathcal {R}=\{(1,2)\}\), \(\mathcal {U}=\{(1,3), (2,3)\}\), and

$$\begin{aligned} \mathcal {A}=\begin{pmatrix} 0 &{} \quad 0.4 &{} \quad 0.9 \\ 0.4 &{} \quad 0 &{} \quad 0.7 \\ 0.9 &{} \quad 0.7 &{} \quad 0 \end{pmatrix}. \end{aligned}$$

As \(\mathbb {G}\) has 3 edges, it has \(2^3\) realizations, whose realization graphs are \(H_1, H_2, \ldots , H_8\). The chance measure of the event that \(H_1\) appears is

$$\begin{aligned} (1-0.4)\times \min \{1-0.7, 1-0.9\}, \end{aligned}$$

which equals 0.06.
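This computation can be reproduced mechanically. The sketch below (our own helper, with vertices 0-indexed rather than 1-indexed as in the paper) multiplies the values \(\nu _{ij}(Y)\) over the random edges and takes their minimum over the uncertain edges; \(H_1\) is taken to be the realization with no edges, matching the displayed product.

```python
def realization_measure(Y, alpha, U, R):
    """Chance measure that the realization with adjacency matrix Y appears.
    alpha: uncertain random adjacency matrix; U, R: uncertain/random edges."""
    def nu(i, j):
        return alpha[i][j] if Y[i][j] == 1 else 1 - alpha[i][j]
    p = 1.0
    for i, j in R:
        p *= nu(i, j)                           # random edges: multiply
    return p * min(nu(i, j) for i, j in U)      # uncertain edges: minimum

# Example 1 (vertices relabelled 0, 1, 2): H_1 has no edges
alpha = [[0, 0.4, 0.9], [0.4, 0, 0.7], [0.9, 0.7, 0]]
Y1 = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
m = realization_measure(Y1, alpha, U=[(0, 2), (1, 2)], R=[(0, 1)])
```

Evaluating it gives \((1-0.4)\times \min \{0.3, 0.1\}=0.06\).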

An uncertain random graph \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) becomes a random graph (Erdos and Renyi 1959; Gilbert 1959) if \(\mathcal {U}=\emptyset \). This is actually the widely used random graph model \(\mathcal {G}\{n, (p_{ij})\}\) (Bollobás 2011). Then,

$$\begin{aligned} \mathbb {X}=\left\{ X\,\Big |\!\begin{array}{l} x_{ij}=0 \text{ or } 1,\, i,j=1,2,\ldots ,n\\ x_{ij}=x_{ji},\, i,j=1,2,\ldots ,n\\ x_{ii}=0,\, i=1,2,\ldots ,n \end{array}\!\!\right\} . \end{aligned}$$

For any \(X\in \mathbb {X}\), X is the adjacency matrix of a realization graph, which appears with probability

$$\begin{aligned} \prod _{1\le i<j \le n }\nu _{ij}(X). \end{aligned}$$

An uncertain random graph \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) becomes an uncertain graph (Gao and Gao 2013) if \(\mathcal {R}=\emptyset \). Then,

$$\begin{aligned} \mathbb {X}=\left\{ X\,\Big |\!\begin{array}{l} x_{ij}=0 \text{ or } 1,\, i,j=1,2,\ldots ,n\\ x_{ij}=x_{ji},\, i,j=1,2,\ldots ,n\\ x_{ii}=0,\, i=1,2,\ldots ,n \end{array}\!\!\right\} . \end{aligned}$$

For any \(X\in \mathbb {X}\), X is the adjacency matrix of a realization graph, which appears with uncertain measure

$$\begin{aligned} \min _{1\le i<j \le n }\nu _{ij}(X). \end{aligned}$$

3 Significance of edges for connectivity

An uncertain random graph \(\mathbb {G}\) is connected for some realizations and disconnected for others. The measure of the event that \(\mathbb {G}\) is connected, denoted by \(\rho (\mathbb {G})\), is called the connectivity index of the uncertain random graph.

Gao and Gao (2013) determined the connectivity index of an uncertain graph. Later, Liu (2014) generalized this result to uncertain random graphs by chance theory.

Theorem 3

Liu (2014) Let \(\mathbb {G}=(\mathcal {V},\mathcal {U},\mathcal {R}, \mathcal {A})\) be an uncertain random graph. Then,

$$\begin{aligned} \rho (\mathbb {G})=\displaystyle \sum _{X\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )f^{*}(X) \end{aligned}$$

where

$$\begin{aligned} f^{*}(X)= & {} \left\{ \begin{array}{ll} \displaystyle \sup _{Y\in X^{*}, f(Y)=1}\min _{(i,j)\in \mathcal {U}}v_{ij}(Y),\\ \qquad \qquad \text{ if }\displaystyle \displaystyle \sup _{Y\in X^{*}, f(Y)=1}\min _{(i,j)\in \mathcal {U}}v_{ij}(Y)<0.5\\ 1-\displaystyle \sup _{Y\in X^{*}, f(Y)=0}\min _{(i,j)\in \mathcal {U}}v_{ij}(Y), \\ \qquad \qquad \text{ if }\displaystyle \displaystyle \sup _{Y\in X^{*}, f(Y)=1}\min _{(i,j)\in \mathcal {U}}v_{ij}(Y)\ge 0.5, \end{array}\right. \\ v_{ij}(Y)= & {} \left\{ \begin{array}{ll} \alpha _{ij}, &{}\quad \text{ if } y_{ij}=1\\ 1-\alpha _{ij}, &{}\quad \text{ if } y_{ij}=0 \end{array}\right. \qquad (i,j)\in \mathcal {U},\\ f(Y)= & {} \left\{ \begin{array}{ll} 1, &{}\quad \text{ if } I+Y+Y^2+\cdots +Y^{n-1}>0\\ 0, &{}\quad \text{ otherwise }, \end{array}\right. \end{aligned}$$

\(\mathbb {X}\) is the class of matrices satisfying (1), and \(X^{*}\) is the extension class of X satisfying (2).
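Theorem 3 can be implemented by brute force for small graphs: enumerate the matrices \(X\in \mathbb {X}\), and for each X enumerate its extension class \(X^{*}\) to compute \(f^{*}(X)\). The sketch below is our own illustration (0-indexed vertices; exponential in the number of edges, so suitable only for small n).

```python
from itertools import product

def connectivity_index(n, U, R, alpha):
    """Brute-force rho(G) per Theorem 3.
    U, R: lists of uncertain/random edges (i, j); alpha: edge measures."""
    def connected(Y):
        # Theorem 2: connected iff I + Y + ... + Y^(n-1) > 0
        S = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        P = [row[:] for row in S]
        for _ in range(n - 1):
            P = [[sum(P[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
            S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
        return all(v > 0 for row in S for v in row)

    rho = 0.0
    for xs in product([0, 1], repeat=len(R)):          # X ranges over (1)
        w = 1.0                                        # product of v_ij(X) over R
        for (i, j), x in zip(R, xs):
            w *= alpha[i][j] if x else 1 - alpha[i][j]
        sup1 = sup0 = 0.0                              # sup-min over X*
        for ys in product([0, 1], repeat=len(U)):
            v = min((alpha[i][j] if y else 1 - alpha[i][j])
                    for (i, j), y in zip(U, ys)) if U else 1.0
            Y = [[0] * n for _ in range(n)]
            for (i, j), e in list(zip(R, xs)) + list(zip(U, ys)):
                Y[i][j] = Y[j][i] = e
            if connected(Y):
                sup1 = max(sup1, v)
            else:
                sup0 = max(sup0, v)
        rho += w * (sup1 if sup1 < 0.5 else 1 - sup0)  # f*(X)
    return rho
```

For the uncertain random graph of Example 1 this yields \(\rho (\mathbb {G})=0.4\times 0.9+0.6\times 0.7=0.78\).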

Corollary 1

Gao and Gao (2013) Let \(\mathbb {G}=(\mathcal {V},\mathcal {A})\) be an uncertain graph. Then,

$$\begin{aligned} \rho (\mathbb {G})=\left\{ \begin{array}{ll} \displaystyle \sup _{X\in \mathbb {X}, f(X)=1}\min _{1\le i<j \le n}v_{ij}(X),\\ \qquad \qquad \text{ if }\displaystyle \displaystyle \sup _{X\in \mathbb {X}, f(X)=1}\min _{1\le i<j \le n}v_{ij}(X)<0.5\\ 1-\displaystyle \sup _{X\in \mathbb {X}, f(X)=0}\min _{1\le i<j \le n}v_{ij}(X)\\ \qquad \qquad \text{ if }\displaystyle \displaystyle \sup _{X\in \mathbb {X}, f(X)=1}\min _{1\le i<j \le n}v_{ij}(X)\ge 0.5, \end{array}\right. \end{aligned}$$

where

$$\begin{aligned} \mathbb {X}= & {} \left\{ X\,\Big |\!\begin{array}{l}x_{ij}=0 \text{ or } 1,\, i,j=1,2,\ldots ,n\\ x_{ij}=x_{ji},\, i,j=1,2,\ldots ,n\\ x_{ii}=0,\, i=1,2,\ldots ,n \end{array}\!\!\right\} ,\\ v_{ij}(X)= & {} \left\{ \begin{array}{ll} \alpha _{ij}, &{}\quad \text{ if } x_{ij}=1\\ 1-\alpha _{ij}, &{}\quad \text{ if } x_{ij}=0, \end{array}\right. \\ f(X)= & {} \left\{ \begin{array}{ll} 1, &{}\quad \text{ if } I+X+X^2+\cdots +X^{n-1}>0\\ 0, &{}\quad \text{ otherwise }. \end{array}\right. \end{aligned}$$

In a simple graph, some edges are critical to its connectivity: every minimal edge cut consists of such edges. Similarly, in an uncertain random graph, some edges are very important to its connectivity, in the sense that the connectivity index is very sensitive to the measures of these edges. Meanwhile, other edges are irrelevant to the connectivity, and the connectivity index is not sensitive to their measures. Here, we give a method to evaluate the significance of edges to the connectivity of an uncertain random graph.

For a matrix

$$\begin{aligned} A=\left( \begin{array}{cccc} a_{11}&{} \quad a_{12}&{} \quad \cdots &{} \quad a_{1n}\\ a_{21}&{} \quad a_{22}&{} \quad \cdots &{} \quad a_{2n}\\ \vdots &{} \quad \vdots &{} \quad \ddots &{} \quad \vdots \\ a_{n1}&{} \quad a_{n2}&{} \quad \cdots &{} \quad a_{nn} \end{array}\right) \end{aligned}$$

and for \(1\le i < j \le n\), let \(A_{+}(ij)\) be the matrix obtained from A by changing both \(a_{ij}\) and \(a_{ji}\) to 1, and let \(A_{-}(ij)\) be the matrix obtained from A by changing both \(a_{ij}\) and \(a_{ji}\) to 0.

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. For any edge (ij) (\(1\le i < j\le n\)), let \(\mathbb {G}_{+}(ij)=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A}_{+}(ij))\) and \(\mathbb {G}_{-}(ij)=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A}_{-}(ij))\) be the (ij)-reinforcing graph and the (ij)-relaxation graph of \(\mathbb {G}\), respectively. Note that edge (ij) always exists in \(\mathbb {G}_{+}(ij)\), while in \(\mathbb {G}_{-}(ij)\), the edge between vertices i and j does not exist.
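Constructing \(\mathcal {A}_{+}(ij)\) and \(\mathcal {A}_{-}(ij)\) is a purely mechanical step. A minimal sketch (our own function names, 0-indexed, operating on non-destructive copies):

```python
def reinforce(A, i, j):
    """Adjacency measures of G_+(ij): edge (i, j) forced to exist."""
    B = [row[:] for row in A]       # copy so A itself is unchanged
    B[i][j] = B[j][i] = 1
    return B

def relax(A, i, j):
    """Adjacency measures of G_-(ij): edge (i, j) removed."""
    B = [row[:] for row in A]
    B[i][j] = B[j][i] = 0
    return B
```

Feeding these matrices back into a connectivity-index routine gives \(\rho (\mathbb {G}_+(ij))\) and \(\rho (\mathbb {G}_-(ij))\).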

Theorem 4

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. For any edge (ij) (\(1\le i < j\le n\)),

$$\begin{aligned} \rho (\mathbb {G}_-(ij))\le \rho (\mathbb {G})\le \rho (\mathbb {G}_+(ij)). \end{aligned}$$

Proof

For any connected realization graph H of \(\mathbb {G}\), assume that Y is the adjacency matrix of H. By Theorem 2, \(I+Y+Y^2+\cdots +Y^{n-1}>0\). As \(Y_{+}(ij)\ge Y\), \(I+Y_{+}(ij)+Y_{+}(ij)^2+ \cdots +Y_{+}(ij)^{n-1}>0\). Then the graph with adjacency matrix \(Y_{+}(ij)\) is a connected realization graph of \(\mathbb {G}_+(ij)\) and appears with greater measure than H. Thus, \(\rho (\mathbb {G})\le \rho (\mathbb {G}_+(ij))\). Similarly, \(\rho (\mathbb {G}_-(ij))\le \rho (\mathbb {G})\). \(\square \)

Definition 2

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. The significance of the edge between vertices i and j (\(1\le i < j\le n\)), denoted by \(\delta (ij)\), is \(\rho (\mathbb {G}_+(ij))-\rho (\mathbb {G}_-(ij))\). The reinforcing significance of edge (ij), denoted by \(\delta _+(ij)\), is \(\rho (\mathbb {G}_+(ij))-\rho (\mathbb {G})\). The relaxation significance of edge (ij), denoted by \(\delta _-(ij)\), is \(\rho (\mathbb {G})-\rho (\mathbb {G}_-(ij))\).

Since \(\mathbb {G}_-(ij)\) is the uncertain random graph obtained from \(\mathbb {G}\) by losing edge (ij), \(\delta _-(ij)\) is the decrease in the connectivity index caused by this loss. Thus, \(\delta _-(ij)\) shows the direct contribution of edge (ij) to the connectivity. Since \(\mathbb {G}_+(ij)\) is the uncertain random graph obtained from \(\mathbb {G}\) by guaranteeing the existence of edge (ij), \(\delta _+(ij)\) shows the gain in connectivity obtained by reinforcing this edge; this is the potential contribution of edge (ij) to the connectivity.

The following proposition follows from Theorem 4 and Definition 2.

Proposition 1

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. For edge (ij) (\(1\le i < j\le n\)), we have:

(1) \(\delta (ij)=\delta _+(ij)+\delta _-(ij)\);

(2) \(0\le \delta _+(ij)\), \(0\le \delta _-(ij)\) and \(0\le \delta (ij)\le 1\).

3.1 Significance of uncertain edges

In an uncertain random graph \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\), let us define four collections of edges,

$$\begin{aligned} \mathcal {U}_{00}= & {} \{(i,j)\in \mathcal {U}\,|\,\delta _+(ij)=0 \text{ and } \delta _-(ij)=0\},\\ \mathcal {U}_{01}= & {} \{(i,j)\in \mathcal {U}\,|\,\delta _+(ij)=0 \text{ and } \delta _-(ij)>0\},\\ \mathcal {U}_{10}= & {} \{(i,j)\in \mathcal {U}\,|\,\delta _+(ij)>0 \text{ and } \delta _-(ij)=0\},\\ \mathcal {U}_{11}= & {} \{(i,j)\in \mathcal {U}\,|\,\delta _+(ij)>0 \text{ and } \delta _-(ij)>0\}. \end{aligned}$$

Note that \(\mathcal {U}_{00}\), \(\mathcal {U}_{01}\), \(\mathcal {U}_{10}\) and \(\mathcal {U}_{11}\) form a partition of \(\mathcal {U}\).

For each \((ij)\in \mathcal {U}_{00}\), \(\delta (ij)=\delta _+(ij)+\delta _-(ij)=0\). Then, edge (ij) has no direct or potential contributions to the connectivity. This edge is irrelevant to the connectivity.

For each \((ij)\in \mathcal {U}_{01}\), the edge is partially significant for \(\rho (\mathbb {G})\) and has a direct contribution to the connectivity. This edge is an essential part of the connectivity, and losing it will decrease \(\rho (\mathbb {G})\). However, it is not the weakest part, so there is no need to reinforce it.

For each \((ij)\in \mathcal {U}_{10}\), the edge is partially significant for \(\rho (\mathbb {G})\) and has a potential contribution to the connectivity. This edge is not an essential part of the connectivity, and losing it does not decrease \(\rho (\mathbb {G})\). However, reinforcing it reorganizes the essential structure for connectivity and makes the original weakest part dispensable; therefore, \(\rho (\mathbb {G})\) will increase.

For each \((ij)\in \mathcal {U}_{11}\), the edge is highly significant for \(\rho (\mathbb {G})\) and is critical for the connectivity: it is an essential part of the connectivity and also the weakest part. So increasing (or decreasing) \(\alpha _{ij}\) will increase (or decrease) \(\rho (\mathbb {G})\).

3.2 Significance of random edges

For each \((i,j)\in \mathcal {R}\), \(\rho (\mathbb {G})\), \(\rho (\mathbb {G}_+(ij))\) and \(\rho (\mathbb {G}_-(ij))\) satisfy the following theorem.

Theorem 5

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. For each random edge (ij),

$$\begin{aligned} \rho (\mathbb {G})=\alpha _{ij}\rho (\mathbb {G}_+(ij))+(1-\alpha _{ij})\rho (\mathbb {G}_-(ij)). \end{aligned}$$

Proof

Choose \((i_0,j_0)\in \mathcal {R}\). Write \(\mathcal {R}'=\mathcal {R}-\{(i_0,j_0)\}\). Let \(\mathbb {X}\), \(\mathbb {X}_+\) and \(\mathbb {X}_-\) be the classes of matrices of \(\mathbb {G}\), \(\mathbb {G}_+(i_0j_0)\) and \(\mathbb {G}_-(i_0j_0)\) satisfying (1), respectively.

Note that \(x_{i_0j_0}=1\) for any \(X\in \mathbb {X}_+\), and \(x_{i_0j_0}=0\) for any \(X\in \mathbb {X}_-\). Thus, \(\mathbb {X}=\mathbb {X}_+\cup \mathbb {X}_-\) and \(\mathbb {X}_+\cap \mathbb {X}_-=\emptyset \). By Theorem 3,

$$\begin{aligned} \rho (\mathbb {G})= & {} \displaystyle \sum _{X\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )f^{*}(X)\\= & {} \displaystyle \sum _{X\in \mathbb {X}_+}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )f^{*}(X)\\&\qquad +\displaystyle \sum _{X\in \mathbb {X}_-}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )f^{*}(X)\\= & {} \alpha _{i_0j_0}\displaystyle \sum _{X\in \mathbb {X}_+}\Big (\prod _{(i,j)\in \mathcal {R}'}v_{ij}(X)\Big )f^{*}(X)\\&\qquad +(1-\alpha _{i_0j_0})\displaystyle \sum _{X\in \mathbb {X}_-}\Big (\prod _{(i,j)\in \mathcal {R}'}v_{ij}(X)\Big )f^{*}(X)\\= & {} \alpha _{i_0j_0}\rho (\mathbb {G}_+(i_0j_0))+(1-\alpha _{i_0j_0})\rho (\mathbb {G}_-(i_0j_0)). \end{aligned}$$

\(\square \)

Theorem 6

Let \(\mathbb {G}=(\mathcal {V}, \mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph. For any \((i,j)\in \mathcal {R}\), either \(\delta _+(ij)=\delta _-(ij)=0\), or both \(\delta _+(ij)\) and \(\delta _-(ij)\) are positive.

Proof

For any \((i,j)\in \mathcal {R}\), by Theorem 5, \(\rho (\mathbb {G})=\alpha _{ij}\rho (\mathbb {G}_+(ij))+(1-\alpha _{ij})\rho (\mathbb {G}_-(ij))\). Then,

$$\begin{aligned} \alpha _{ij}(\rho (\mathbb {G}_+(ij))-\rho (\mathbb {G}))=(1-\alpha _{ij})(\rho (\mathbb {G})-\rho (\mathbb {G}_-(ij))), \end{aligned}$$

which means

$$\begin{aligned} \alpha _{ij}\delta _+(ij)=(1-\alpha _{ij})\delta _-(ij). \end{aligned}$$

Therefore, either \(\delta _+(ij)=\delta _-(ij)=0\), or both \(\delta _+(ij)\) and \(\delta _-(ij)\) are positive. This proves the theorem. \(\square \)
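The identities of Theorems 5 and 6 can be checked numerically on the graph of Example 1. The sketch below hard-codes Theorem 3 for a 3-vertex graph with one random edge (1,2) and two uncertain edges (1,3), (2,3); the function name `rho3` and this specialization are our own.

```python
from itertools import product

def rho3(a12, a13, a23):
    """rho for a 3-vertex graph: random edge (1,2) with probability a12,
    uncertain edges (1,3), (2,3) with measures a13, a23 (per Theorem 3)."""
    def fstar(x12):
        sup1 = sup0 = 0.0
        for y13, y23 in product([0, 1], repeat=2):
            v = min(a13 if y13 else 1 - a13, a23 if y23 else 1 - a23)
            # connected iff (1,2) plus a third edge, or the path 1-3-2
            is_conn = (x12 and (y13 or y23)) or (y13 and y23)
            if is_conn:
                sup1 = max(sup1, v)
            else:
                sup0 = max(sup0, v)
        return sup1 if sup1 < 0.5 else 1 - sup0
    return a12 * fstar(1) + (1 - a12) * fstar(0)

a = 0.4
r, r_plus, r_minus = rho3(a, 0.9, 0.7), rho3(1, 0.9, 0.7), rho3(0, 0.9, 0.7)
# Theorem 5: rho(G) = a * rho(G_+) + (1 - a) * rho(G_-)
# Theorem 6's identity: a * delta_plus = (1 - a) * delta_minus
```

Here \(\rho (\mathbb {G})=0.78\), \(\rho (\mathbb {G}_+(12))=0.9\) and \(\rho (\mathbb {G}_-(12))=0.7\), so both identities hold with \(\delta _+(12)=0.12\) and \(\delta _-(12)=0.08\).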

By Theorem 6, random edges can be classified into two categories:

$$\begin{aligned} \mathcal {R}_{00}= & {} \{(i,j)\in \mathcal {R}\,|\,\delta _+(ij)=0 \text{ and } \delta _-(ij)=0\},\\ \mathcal {R}_{11}= & {} \{(i,j)\in \mathcal {R}\,|\,\delta _+(ij)>0 \text{ and } \delta _-(ij)>0\}. \end{aligned}$$

Note that \(\mathcal {R}_{00}\) and \(\mathcal {R}_{11}\) form a partition of \(\mathcal {R}\).

For each \((ij)\in \mathcal {R}_{00}\), \(\delta (ij)=\delta _+(ij)+\delta _-(ij)=0\). Then, edge (ij) has no direct or potential contributions to the connectivity. This edge is irrelevant to the connectivity.

For each \((ij)\in \mathcal {R}_{11}\), it is highly significant for \(\rho (\mathbb {G})\) and has both direct and potential contributions. So increasing (or decreasing) \(\alpha _{ij}\) will increase (or decrease) \(\rho (\mathbb {G})\).

4 Algorithm and example

In this section, we will present an algorithm for calculating the connectivity index of an uncertain random graph and an algorithm for calculating the significance of edges. Examples will be given to illustrate the algorithms.

In graph theory, Prim’s Algorithm and Kruskal’s Algorithm are two well-known greedy algorithms for finding a minimum spanning tree in a weighted simple graph. For a simple graph with n vertices and m edges, the complexity of Prim’s Algorithm is \(O(n^2)\), while that of Kruskal’s Algorithm is \(O(m \log m)\). Therefore, Prim’s Algorithm is more efficient for dense graphs, while Kruskal’s Algorithm is more efficient for sparse graphs.

Lemma 1

Gao and Gao (2013) In an uncertain graph \(\mathbb {G}=(\mathcal {V}, \mathcal {A})\) of order n, a maximum spanning tree T is a connected subgraph with vertex set \(\mathcal {V}\) and edge set E, such that \(|E|=n-1\) and \(\min _{(i,j)\in E}(\alpha _{ij})\) is maximum. Then, \(\rho (\mathbb {G})=\min _{(i,j)\in E}(\alpha _{ij})\).

By Lemma 1, in order to find the connectivity index of an uncertain graph, it is sufficient to find a maximum spanning tree. Both Prim’s Algorithm and Kruskal’s Algorithm could be modified for finding a maximum spanning tree in an uncertain graph.

Let \(\mathbb {G}=( \mathcal {V}, \mathcal {U},\mathcal {R}, \mathcal {A})\) be an uncertain random graph, and let \(\mathbb {X}\) be the class of matrices of \(\mathbb {G}\) satisfying (1). For each \(X\in \mathbb {X}\), let \(\mathcal {A}_X=(a_{ij})_{n\times n}\) be the matrix satisfying

$$\begin{aligned} a_{ij}=\left\{ \begin{array}{ll} x_{ij}, &{} \quad \text{ if } (i,j)\in \mathcal {R}\\ \alpha _{ij}, &{} \quad \text{ if } (i,j)\in \mathcal {U}\\ a_{ji}, &{} \quad \text{ if } i>j\\ 0, &{} \quad \text{ if } i=j. \end{array}\right. \end{aligned}$$

As deterministic edges could be viewed as special uncertain edges, \(\mathbb {G}_X=(\mathcal {V}, \mathcal {A}_X)\) is an uncertain graph. By Theorem 3 and Corollary 1,

$$\begin{aligned} \rho (\mathbb {G})= & {} \displaystyle \sum _{X\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )f^{*}(X)\\= & {} \displaystyle \sum _{X\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X)\Big )\rho (\mathbb {G}_X). \end{aligned}$$

Therefore, \(\rho (\mathbb {G})\) could be calculated by the following algorithm.

Algorithm 1. Algorithm for calculating the connectivity index of an uncertain random graph \(\mathbb {G}=( \mathcal {V}, \mathcal {U},\mathcal {R}, \mathcal {A})\).

Step 1. Let \(X_0=(x_{ij})_n\) be an \(n\times n\) matrix such that \(x_{ij}=1\) if \(\alpha _{ij}>0\), and \(x_{ij}=0\) if \(\alpha _{ij}=0\). If \(I+X_0+X_0^2+\cdots +X_0^{n-1}>0\), then go to Step 2; otherwise, stop and the connectivity index of \(\mathbb {G}\) is 0;

Step 2. Generate the set \(\mathbb {X}\). Set \(\rho (\mathbb {G})=0\);

Step 3. Choose \(X\in \mathbb {X}\). Generate the uncertain graph \(\mathbb {G}_X\). If \(\mathbb {G}_X\) is a dense graph, then determine \(\rho (\mathbb {G}_X)\) by Modified Prim’s Algorithm. If \(\mathbb {G}_X\) is a sparse graph, then determine \(\rho (\mathbb {G}_X)\) by Modified Kruskal’s Algorithm. Reset \(\rho (\mathbb {G})=\rho (\mathbb {G})+(\prod _{(i,j)\in \mathcal {R}}v_{ij}(X))\rho (\mathbb {G}_X)\) and \(\mathbb {X}=\mathbb {X}-\{X\}\);

Step 4. If \(\mathbb {X}\ne \emptyset \), then go to Step 3; otherwise, stop. The value of \(\rho (\mathbb {G})\) is the connectivity index of \(\mathbb {G}\).
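Algorithm 1 can be sketched in Python as follows. This is a minimal illustration under assumed representations (the names `connectivity_index`, `uncertain` and `random_edges` are illustrative): uncertain edges are given with their truth values \(\alpha _{ij}\), random edges with their existence probabilities, and the maximum spanning tree of each realization is found by a Kruskal-style routine rather than by choosing between the two modified algorithms:

```python
from itertools import product

def connectivity_index(n, uncertain, random_edges):
    """Sketch of Algorithm 1: `uncertain` maps an edge (i, j) to its truth
    value alpha_ij, `random_edges` maps an edge to its existence
    probability.  Returns the sum over all realizations X of the random
    edges of P(X) * rho(G_X)."""

    def rho(alpha):
        # Connectivity index of an uncertain graph: the minimum truth value
        # on a maximum spanning tree (Kruskal-style), 0 if disconnected.
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        best, used = 1.0, 0
        for a, i, j in sorted(((a, i, j) for (i, j), a in alpha.items() if a > 0),
                              reverse=True):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                best, used = min(best, a), used + 1
        return best if used == n - 1 else 0.0

    keys, total = list(random_edges), 0.0
    for bits in product((0, 1), repeat=len(keys)):   # each X in the class
        prob, alpha = 1.0, dict(uncertain)
        for (i, j), b in zip(keys, bits):
            p = random_edges[(i, j)]
            prob *= p if b else 1.0 - p              # product of v_ij(X)
            if b:
                alpha[(i, j)] = 1.0  # realized random edge: truth value 1
        total += prob * rho(alpha)
    return total
```

For the uncertain random graph of Example 2 below, this sketch returns 0.24, matching the value computed there.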

Modified Prim’s Algorithm. Algorithm for calculating the connectivity index of an uncertain graph \(\mathbb {G}=(\mathcal {V}, \mathcal {A})\).

Step 1. Let \(X=(x_{ij})_n\) be an \(n\times n\) matrix such that \(x_{ij}=1\) if \(\alpha _{ij}>0\), and \(x_{ij}=0\) if \(\alpha _{ij}=0\). If \(I+X+X^2+\cdots +X^{n-1}>0\), then go to Step 2; otherwise, stop and \(\rho (\mathbb {G})=0\);

Step 2. Set \(V_1=\{1 \}\), \(V_2=\{2, 3, \ldots ,n\}\) and \(E=\emptyset \).

Step 3. Choose \(i\in V_1\) and \(j\in V_2\) such that \(\alpha _{ij}\) is maximum. Reset \(V_1=V_1\cup \{j\}\), \(V_2=V_2-\{j\}\) and \(E=E\cup \{(i,j)\}\);

Step 4. If \(|E|\le n-2\), then go to Step 3; if \(|E|=n-1\), then stop and let \(\rho (\mathbb {G})=\displaystyle \min _{(i,j)\in E}(\alpha _{ij})\).
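The steps above translate directly into Python. The following is a minimal sketch (not from the paper); the uncertain graph is assumed to be given as a symmetric matrix `alpha` of truth values, with 0 meaning “no edge”, and a return value of 0 covers the disconnected case of Step 1:

```python
def connectivity_index_prim(alpha):
    """Modified Prim: grow a maximum spanning tree from vertex 0 and
    return the minimum truth value on its edges (0 if disconnected)."""
    n = len(alpha)
    in_tree = [False] * n
    in_tree[0] = True                  # Step 2: V1 = {0}
    rho = 1.0
    for _ in range(n - 1):
        # Step 3: pick the largest-truth-value edge leaving the tree
        best, bj = -1.0, -1
        for i in range(n):
            if not in_tree[i]:
                continue
            for j in range(n):
                if not in_tree[j] and alpha[i][j] > best:
                    best, bj = alpha[i][j], j
        if best <= 0:                  # no edge leaves the tree: disconnected
            return 0.0
        in_tree[bj] = True
        rho = min(rho, best)           # rho(G) = min truth value over tree edges
    return rho
```

The quadratic scan over all vertex pairs mirrors the \(O(n^2)\) complexity noted for Prim’s Algorithm above.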

Modified Kruskal’s Algorithm. Algorithm for calculating the connectivity index of an uncertain graph \(\mathbb {G}=(\mathcal {V}, \mathcal {A})\).

Step 1. Let \(X=(x_{ij})_n\) be an \(n\times n\) matrix such that \(x_{ij}=1\) if \(\alpha _{ij}>0\), and \(x_{ij}=0\) if \(\alpha _{ij}=0\). If \(I+X+X^2+\cdots +X^{n-1}>0\), then go to Step 2; otherwise, stop and \(\rho (\mathbb {G})=0\);

Step 2. Set \(E=\emptyset \) and set \(E_0\) to be the set of all edges of \(\mathbb {G}\). Set each vertex to be a subtree.

Step 3. Choose \((i,j)\in E_0\) such that \(\alpha _{ij}\) is maximum. If i and j are in two different subtrees, then combine the two subtrees into one. Reset \(E=E\cup \{(i,j)\}\) and \(E_0=E_0-\{(i,j)\}\). Otherwise, reset \(E_0=E_0-\{(i,j)\}\).

Step 4. If \(|E|\le n-2\), then go to Step 3; if \(|E|=n-1\), then stop and let \(\rho (\mathbb {G})=\min _{(i,j)\in E}(\alpha _{ij})\).
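The Kruskal variant above can likewise be sketched in Python (an illustrative sketch, not from the paper), with the “subtrees” of Steps 2–3 tracked by a disjoint-set (union-find) structure and the same matrix representation assumed as before:

```python
def connectivity_index_kruskal(alpha):
    """Modified Kruskal: scan edges in decreasing order of truth value,
    skipping those that close a cycle; return the minimum truth value on
    the resulting maximum spanning tree (0 if disconnected)."""
    n = len(alpha)
    parent = list(range(n))            # Step 2: each vertex is its own subtree

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    edges = sorted(((alpha[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)
                    if alpha[i][j] > 0), reverse=True)
    rho, used = 1.0, 0
    for a, i, j in edges:              # Step 3: maximum-truth-value edge first
        ri, rj = find(i), find(j)
        if ri != rj:                   # different subtrees: combine them
            parent[ri] = rj
            rho, used = min(rho, a), used + 1
            if used == n - 1:          # Step 4: tree complete
                return rho
    return 0.0                         # fewer than n-1 edges chosen: disconnected
```

The dominant cost is sorting the m edges, matching the \(O(m \log m)\) complexity noted for Kruskal’s Algorithm above.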

Fig. 2
Calculating connectivity index by Algorithm 1

Example 2

Let \(\mathbb {G}=(\mathcal {V},\mathcal {U},\mathcal {R},\mathcal {A})\) be an uncertain random graph (shown in Fig. 2), where \(\mathcal {V}=\{1,2,3,4,5,6,7\}\), \(\mathcal {U}=\{(1,2), (1,3), (1,4), (1,6), (1,7), (2,3), (6,7)\}\), \(\mathcal {R}=\{(1,5), (3,4)\}\) and

$$\begin{aligned} \mathcal {A}=\begin{pmatrix} 0 &{} \quad 0.9 &{} \quad 0.4 &{} \quad 0.6 &{} \quad 0.8 &{} \quad 0.9 &{} \quad 0.3 \\ 0.9 &{} \quad 0 &{} \quad 0.7 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 0.4 &{} \quad 0.7 &{} \quad 0 &{} \quad 0.4 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0.6 &{} \quad 0 &{} \quad 0.4 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0.8 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0.9 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0.1 \\ 0.3 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0.1 &{} \quad 0 \\ \end{pmatrix}. \end{aligned}$$

By Algorithm 1, in Step 1, it is easy to check that when all edges exist, the realization is connected. So \(\rho (\mathbb {G})>0\).

In Step 2, as \(\mathbb {G}\) has 2 random edges, there are \(2^2\) matrices in \(\mathbb {X}\), which are listed as follows:

$$\begin{aligned} X_1= & {} \begin{pmatrix} 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ \end{pmatrix}, X_2=\begin{pmatrix} 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ \end{pmatrix} \\ X_3= & {} \begin{pmatrix} 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ \end{pmatrix}, X_4=\begin{pmatrix} 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 \\ \end{pmatrix} \end{aligned}$$
Fig. 3
Maximum spanning trees of \(\mathbb {G}_{X_3}\) and \(\mathbb {G}_{X_4}\): \(\rho (\mathbb {G}_{X_3})=1\wedge 0.9\wedge 0.9\wedge 0.7\wedge 0.6\wedge 0.3=0.3\) and \(\rho (\mathbb {G}_{X_4})=1\wedge 1\wedge 0.9\wedge 0.9\wedge 0.7\wedge 0.3=0.3\)

In Step 3, as \(\mathbb {X}=\{X_1,X_2,X_3,X_4\}\), there are four uncertain graphs \(\mathbb {G}_{X_1}\), \(\mathbb {G}_{X_2}\), \(\mathbb {G}_{X_3}\) and \(\mathbb {G}_{X_4}\), which are shown in Fig. 2. Obviously, \(\mathbb {G}_{X_1}\) and \(\mathbb {G}_{X_2}\) are disconnected. Thus, \(\rho (\mathbb {G}_{X_1})=\rho (\mathbb {G}_{X_2})=0\). By the Modified Prim’s Algorithm or the Modified Kruskal’s Algorithm, maximum spanning trees of \(\mathbb {G}_{X_3}\) and \(\mathbb {G}_{X_4}\) can be found (Fig. 3). Then, \(\rho (\mathbb {G}_{X_3})=\rho (\mathbb {G}_{X_4})=0.3\). Therefore,

$$\begin{aligned} \rho (\mathbb {G})= & {} \displaystyle \sum _{X_k\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X_k)\Big )f^{*}(X_k)\\= & {} \displaystyle \sum _{X_k\in \mathbb {X}}\Big (\prod _{(i,j)\in \mathcal {R}}v_{ij}(X_k)\Big )\rho (\mathbb {G}_{X_k})\\= & {} 0+0+0.8\times (1-0.4)\times 0.3+ 0.8\times 0.4\times 0.3\\= & {} 0.24. \end{aligned}$$

Next, we present an algorithm for calculating the significance of edges.

Algorithm 2. Algorithm for calculating significance of edges in an uncertain random graph \(\mathbb {G}=(\mathcal {V},\mathcal {U},\mathcal {R},\mathcal {A})\).

Step 1. Set \(E=\mathcal {R}\cup \mathcal {U}\). Calculate \(\rho (\mathbb {G})\) by Algorithm 1;

Step 2. Choose \((i,j)\in E\). Calculate \(\rho (\mathbb {G}_+(ij))\) by Algorithm 1;

Step 3. If \((i,j)\in \mathcal {R}\) and \(\rho (\mathbb {G})=\rho (\mathbb {G}_+(ij))\), then set \(\delta _+(ij)=\delta _-(ij)=\delta (ij)=0\) and go to Step 5. Otherwise, go to Step 4;

Step 4. Calculate \(\rho (\mathbb {G}_-(ij))\) by Algorithm 1. Set \(\delta _+(ij)\) \(=\rho (\mathbb {G}_+(ij))-\rho (\mathbb {G})\), \(\delta _-(ij)=\rho (\mathbb {G})-\rho (\mathbb {G}_-(ij))\) and \(\delta (ij)=\rho (\mathbb {G}_+(ij))-\rho (\mathbb {G}_-(ij))\);

Step 5. Reset \(E=E-\{(i,j)\}\). If \(E\ne \emptyset \), then go to Step 2; otherwise, stop.
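Algorithm 2 can be sketched in Python for the pure uncertain case. Two assumptions are made explicit here: \(\mathbb {G}_+(ij)\) is read as “the truth value of edge (i, j) set to 1” and \(\mathbb {G}_-(ij)\) as “edge (i, j) deleted”, and the connectivity index is computed by a compact Kruskal-style helper rather than by the full Algorithm 1; all function names are illustrative:

```python
def rho_uncertain(alpha, n):
    """Connectivity index of an uncertain graph given as a dict mapping
    edges (i, j), i < j, to truth values: the minimum truth value on a
    maximum spanning tree, or 0 if the graph is disconnected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    rho, used = 1.0, 0
    for a, i, j in sorted(((a, i, j) for (i, j), a in alpha.items() if a > 0),
                          reverse=True):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            rho, used = min(rho, a), used + 1
    return rho if used == n - 1 else 0.0

def edge_significance(alpha, n):
    """Sketch of Algorithm 2: for each edge, compare the connectivity
    index with the edge reinforced (truth value 1) and deleted (truth
    value 0); returns (delta+, delta-, delta) per edge."""
    base = rho_uncertain(alpha, n)
    sig = {}
    for e in alpha:
        reinforced = dict(alpha); reinforced[e] = 1.0
        deleted = dict(alpha); deleted[e] = 0.0
        rp = rho_uncertain(reinforced, n)
        rm = rho_uncertain(deleted, n)
        sig[e] = (rp - base, base - rm, rp - rm)
    return sig
```

For instance, on a triangle with truth values 0.9, 0.5 and 0.3, the edge of value 0.5 lies on the maximum spanning tree, so reinforcing or deleting it changes the connectivity index in both directions.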

Example 3

For the uncertain random graph in Example 2, we apply Algorithm 2 to calculate the significance of edges. Recall that \(\rho (\mathbb {G})=0.24\). The results are shown in Table 1.

Table 1 Significance of edges

From the values in Table 1, edge (1, 5) and edge (1, 7) are very important for the connectivity. Edge (1, 4), edge (1, 6) and edge (6, 7) are partially significant for the connectivity. The remaining four edges make no actual contribution to the connectivity.

Random edge (1, 5) is highly significant because it is the only edge incident to vertex 5: without it, the graph is disconnected. In graph theory, such an edge is called a bridge, so this edge is critical for the connectivity.

Uncertain edge (1, 7) is also very important for the connectivity. In order to connect vertex 7 to the other vertices of \(\mathbb {G}\), at least one of edge (1, 7) and edge (6, 7) must exist. As edge (1, 7) has the greater truth value, every maximum spanning tree chooses this edge. Moreover, this edge has the minimum truth value among all edges of maximum spanning trees, so increasing \(\alpha _{17}\) increases the measure of each maximum spanning tree, and hence the connectivity index. Conversely, when edge (1, 7) is lost, each maximum spanning tree has to choose edge (6, 7) instead; the measure of each maximum spanning tree decreases, and so does the connectivity index.

For uncertain edge (1, 6), \(\delta _+(16)=0\) and \(\delta _-(16)>0\). When edge (1, 6) is reinforced, the weakest part for connectivity (edge (1, 7)) remains essential, so there is no need to reinforce this edge. However, when this edge is lost, in order to connect vertex 6 to the other vertices, each maximum spanning tree has to choose edge (6, 7), whose truth value is the smallest. Thus, the measure of each maximum spanning tree decreases, and the connectivity index decreases. The significance of edge (1, 4) is similar to that of edge (1, 6).

For uncertain edge (6, 7), \(\delta _+(67)>0\) and \(\delta _-(67)=0\). As each maximum spanning tree chooses edge (1, 7) to connect vertex 7, deleting edge (6, 7) does not change the maximum spanning trees, so \(\rho (\mathbb {G})\) remains the same. When \(\alpha _{67}\) is increased beyond 0.3, each maximum spanning tree chooses edge (6, 7) instead of edge (1, 7); the measure of each maximum spanning tree increases, and so does the connectivity index.

The remaining edges are not important for the connectivity. Take uncertain edge (1, 2) as an example. Without this edge, vertex 2 can still be connected to vertex 1 and the other vertices through vertex 3 with a relatively large measure. When edge (1, 2) is reinforced, the weakest part for connectivity remains essential. So edge (1, 2) is unimportant for the connectivity.

5 Conclusions

In this paper, we discussed how to evaluate the contributions of edges to the connectivity of an uncertain random graph. Concepts describing the significance of edges were proposed, and edges were classified into different categories based on their significance. The differences in significance between random edges and uncertain edges were described. This paper also presented algorithms for calculating the connectivity index and the significance of edges, together with examples illustrating the algorithms and methods.

It is worth pointing out that this significance of edges could be applied to some optimization problems, such as shortest path problems and Chinese postman problems. The significance of edges for other graph properties, such as distance, can be further studied for uncertain random graphs in the future.