Article

Discretization of Fractional Operators: Analysis by Means of Advanced Computational Techniques

by
Jose Tenreiro Machado
1,*,†,
Alexandra M. Galhano
2,† and
Carla S. Cordeiro
2
1
Institute of Engineering, Polytechnic of Porto, Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto, Portugal
2
Faculdade de Ciências Naturais, Engenharias e Tecnologias, Universidade Lusófona do Porto, Rua Augusto Rosa 24, 4000-098 Porto, Portugal
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2021, 9(19), 2429; https://doi.org/10.3390/math9192429
Submission received: 2 September 2021 / Revised: 22 September 2021 / Accepted: 24 September 2021 / Published: 30 September 2021

Abstract: This paper studies the discretization of fractional operators by means of advanced clustering methods. The Grünwald–Letnikov fractional operator is approximated by series generated by the Euler, Tustin and generalized mean schemes. The series for different fractional orders form the objects to be assessed. For this purpose, several distances associated with the hierarchical clustering and multidimensional scaling computational techniques are tested. The ArcCosine distance and the 3-dim multidimensional scaling produce good results. The visualization of the graphical representations allows a better understanding of the properties embedded in each type of approximation of the fractional operators.

1. Introduction

Fractional calculus (FC) generalizes the concepts of integrals and derivatives to real and complex orders [1,2]. FC is a mathematical tool that captures non-local effects and phenomena with long-range memory. Due to this key characteristic, FC supports a variety of applications that have been successfully explored in recent years [3,4,5,6,7,8,9]. Nevertheless, the adoption of FC in the applied sciences poses some difficulties in the development of efficient algorithms, due to the higher complexity required by the calculation of the fractional operators.
In signal processing and control applications, the fractional operators are approximated by recursive algorithms based on the Grünwald–Letnikov (GL) definition [10,11]. The resulting expressions are calculated in discrete time using either series or Padé fractions, often expanded with the Z-transform. Usually, the performance of such approximations is assessed through the time and frequency responses [12]. Moreover, we can interpret fractional derivatives and integrals in a number of different ways [13,14], which reveals the high generality of FC and its adaptability to a number of distinct problems. Nonetheless, besides the classical methods of comparison, recent methods of clustering and computational visualization play a key role in the development of new algorithms and should be explored in the scope of FC.
The adoption of computer-based techniques has been an area of fast development during the last decade. In this paper, we study the application of hierarchical clustering (HC) and multidimensional scaling (MDS) methods for computational clustering and visualization [15,16,17,18,19,20,21,22,23]. The HC yields 2-dim data portraits known as dendrograms and trees, where the objects under analysis are the 'leaves'. In the case of the MDS, the objects are 'points' positioned either in 2- or 3-dim charts. These computational schemes have been adopted successfully in a number of scientific areas and make it possible to unveil patterns embedded in the data [24,25]. In the fields of dynamical analysis and control systems, the application of these methods is still taking its first steps [26,27,28], and therefore we can foresee a significant development in the near future.
This paper takes advantage of these two computer-based techniques for comparing several series approximating the GL fractional operator. The study is organized in four sections. Section 2 introduces the fundamental concepts adopted in the rest of the manuscript and includes two sub-sections. The first summarizes the discretization schemes of fractional derivatives. The second outlines the clustering analysis and visualization techniques and is divided into three parts, briefly discussing the ideas behind the concepts of distance, HC and MDS. Section 3 applies the HC and MDS techniques to numerical series and includes two sub-sections. The first illustrates the proposed methods by means of a test-bed of 13 well-known series. The second assesses several series approximating fractional operators. Finally, Section 4 summarizes the main conclusions.

2. Fundamental Concepts

2.1. Discretization of Fractional Derivatives

The most common definitions of a fractional derivative or integral are the so-called Riemann–Liouville, Caputo and GL formulations [29,30]. From the point of view of signal processing and real-time control systems, the GL definition is the most straightforward to apply. The GL definition of the derivative of fractional order $\alpha \in \mathbb{R}$ of the signal $x(t)$ is given by

$$D^{\alpha} x(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \frac{\Gamma(\alpha+1)}{k!\,\Gamma(\alpha-k+1)}\, x(t-kh), \qquad (1)$$

where $\Gamma(\cdot)$ is the gamma function and $h$ is the time increment.
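As a concrete illustration, the truncated GL weights can be generated without evaluating the gamma function directly, using the recursion $w_0 = 1$, $w_k = w_{k-1}(k-1-\alpha)/k$, which reproduces $(-1)^{k}\,\Gamma(\alpha+1)/(k!\,\Gamma(\alpha-k+1))$. The following sketch is our own illustrative code (not from the paper); the function names are ours:

```python
def gl_weights(alpha, r):
    """Weights (-1)^k * Gamma(alpha+1) / (k! * Gamma(alpha-k+1)), k = 0..r,
    computed by the stable recursion w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = [1.0]
    for k in range(1, r + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(x, alpha, h):
    """Truncated GL derivative of the sampled signal x (sampling step h),
    evaluated at the last sample: (1/h^alpha) * sum_k w_k * x[-1-k]."""
    w = gl_weights(alpha, len(x) - 1)
    return sum(wk * xk for wk, xk in zip(w, reversed(x))) / h ** alpha
```

For $\alpha = 1$ the weights reduce to $(1, -1, 0, \ldots)$, recovering the first-order backward difference, and for $\alpha = 0$ the operator returns the signal itself.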
This formulation leads to a discrete-time algorithm, with the time increment $h$ approximated by the sampling period $T$. After truncating at the $r$-th term, definition (1) yields the expression in the $Z$ domain:

$$\mathcal{Z}\left\{ D^{\alpha} x(t) \right\} = \frac{1}{T^{\alpha}} \sum_{k=0}^{r} (-1)^{k} \frac{\Gamma(\alpha+1)}{k!\,\Gamma(\alpha-k+1)}\, z^{-k} X(z), \qquad (2)$$

where $\mathcal{Z}\{x(t)\} = X(z)$.
Expression (2) corresponds to the Euler $s \to z$ conversion scheme adopted in continuous-to-discrete time approximations, with $s$ and $z$ standing for the variables in the Laplace and $Z$ domains. Nonetheless, in control system design, the Tustin conversion scheme is also often adopted. Therefore, for a fractional operator of order $\alpha$, the two $s \to z$ schemes can be written as

$$\text{Euler:} \quad s^{\alpha} = \left( \frac{1}{T} \left( 1 - z^{-1} \right) \right)^{\alpha} = H_{0}^{\alpha}\left(z^{-1}\right), \qquad (3)$$

$$\text{Tustin:} \quad s^{\alpha} = \left( \frac{2}{T}\, \frac{1 - z^{-1}}{1 + z^{-1}} \right)^{\alpha} = H_{1}^{\alpha}\left(z^{-1}\right), \qquad (4)$$

respectively. Expressions (3) and (4) are also called generating approximants of zero and first order, $H_0$ and $H_1$, respectively. We can obtain an average of $H_0$ and $H_1$ by weighting them by the factors $p$ and $1 - p$, respectively, such that

$$M^{\alpha}\left(z^{-1}\right) = p\, H_{0}^{\alpha}\left(z^{-1}\right) + (1-p)\, H_{1}^{\alpha}\left(z^{-1}\right), \qquad (5)$$
where $0 \le p \le 1$. The mean $M^{\alpha}(z^{-1})$ involves one parameter $p$, which allows some adaptation to a specific application. Nonetheless, expression (5) can be further generalized using the concept of generalized mean [12,31,32], yielding

$$M_{q}^{\alpha}\left(z^{-1}\right) = \left[ p \left( H_{0}^{\alpha}\left(z^{-1}\right) \right)^{q} + (1-p) \left( H_{1}^{\alpha}\left(z^{-1}\right) \right)^{q} \right]^{\frac{1}{q}}, \qquad (6)$$

where $0 \le p \le 1$ and $q \in \mathbb{R}$. For $q = -1, 0, 1$ we obtain the so-called harmonic, geometric (in the limit $q \to 0$) and arithmetic means, respectively.
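To make the construction of the series objects concrete, the sketch below (our own illustrative code, with our own naming) expands $H_0^{\alpha}$ and $H_1^{\alpha}$ as truncated power series in $z^{-1}$, using the binomial series of $(1+u)^{\beta}$ and a Cauchy product for the Tustin quotient. For $q = 1$ the generalized mean (6) reduces to the coefficient-wise average (5); other values of $q$ would require raising the series to a power and are omitted here:

```python
def binom_series(beta, r):
    """Taylor coefficients of (1 + u)^beta up to order u^r."""
    b = [1.0]
    for k in range(1, r + 1):
        b.append(b[-1] * (beta - k + 1) / k)
    return b

def cauchy_product(a, b):
    """Truncated product of two power series given as coefficient lists."""
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(len(a))]

def euler_coeffs(alpha, T, r):
    """Series of H0^alpha(z^-1) = ((1 - z^-1) / T)^alpha."""
    return [(-1) ** k * c / T ** alpha
            for k, c in enumerate(binom_series(alpha, r))]

def tustin_coeffs(alpha, T, r):
    """Series of H1^alpha(z^-1) = ((2/T) * (1 - z^-1) / (1 + z^-1))^alpha."""
    num = [(-1) ** k * c for k, c in enumerate(binom_series(alpha, r))]
    den = binom_series(-alpha, r)  # expansion of (1 + z^-1)^(-alpha)
    return [(2 / T) ** alpha * c for c in cauchy_product(num, den)]

def mean_coeffs(alpha, T, r, p=0.5):
    """Arithmetic mean (q = 1) of the Euler and Tustin series, as in (5)."""
    e, t = euler_coeffs(alpha, T, r), tustin_coeffs(alpha, T, r)
    return [p * ek + (1 - p) * tk for ek, tk in zip(e, t)]
```

As a sanity check, for $\alpha = 1$ and $T = 2$ the Tustin expansion reproduces the classical bilinear series $1 - 2z^{-1} + 2z^{-2} - 2z^{-3} + \cdots$.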

2.2. Clustering Analysis and Visualization

2.2.1. Distances

A function $d(\cdot, \cdot)$ is a metric that gives the distance between two objects $S_1$ and $S_2$ if it satisfies the identity, symmetry and triangle inequality axioms [33]:

$$d(S_1, S_2) = 0 \iff S_1 = S_2,$$
$$d(S_1, S_2) = d(S_2, S_1), \qquad (7)$$
$$d(S_1, S_2) \le d(S_1, S_3) + d(S_2, S_3).$$

These axioms imply non-negativity (or the separation condition), $d(S_1, S_2) \ge 0$. In fact, axioms (7) allow the use of different functions, each one with distinct pros and cons [34,35].
A variety of algorithms [36,37,38] has been adopted for comparing data sequences [39,40,41,42]. Nonetheless, we must bear in mind that the selection of a specific distance for an application requires some experience, and that some preliminary tests are necessary before selecting the 'best' one [43,44,45,46].
For capturing the characteristics of each object, we construct a $1 \times r$ vector with the coefficients of the numerical series. This means that, for example, the harmonic series $\sum_{k=1}^{\infty} \frac{1}{k}$ is described by its truncated form, that is, by the $1 \times r$ vector $\left( \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{r} \right)$. To compare the vectors we consider four distances, namely the Euclidean, Tchebichef, Jaccard and ArcCosine metrics, given by [35]

$$d_{Eu}(S_i, S_j) = \sqrt{ \sum_{k=1}^{r} \left( \gamma_i(k) - \gamma_j(k) \right)^2 },$$

$$d_{Tc}(S_i, S_j) = \max_{k} \left| \gamma_i(k) - \gamma_j(k) \right|,$$

$$d_{Ja}(S_i, S_j) = \frac{ \sum_{k=1}^{r} \left( \gamma_i(k) - \gamma_j(k) \right)^2 }{ \sum_{k=1}^{r} \gamma_i(k)^2 + \sum_{k=1}^{r} \gamma_j(k)^2 - \sum_{k=1}^{r} \gamma_i(k)\, \gamma_j(k) }, \qquad (8)$$

$$d_{Ac}(S_i, S_j) = \arccos \frac{ \sum_{k=1}^{r} \gamma_i(k)\, \gamma_j(k) }{ \sqrt{ \sum_{k=1}^{r} \gamma_i(k)^2 \cdot \sum_{k=1}^{r} \gamma_j(k)^2 } },$$

where $\gamma_i(k)$ denotes the $k$-th coefficient of the $i$-th series, $i = 1, \ldots, N$.
The Euclidean and Tchebichef distances are special cases of the Minkowski distance $d_{Mi} = \left( \sum_{k=1}^{r} \left| x_i(k) - x_j(k) \right|^{p} \right)^{1/p}$, namely for $p = 2$ and $p \to \infty$, respectively. The Jaccard distance measures the dissimilarity between two sample sets using a logic similar to that of Venn diagrams, and is useful for comparing observations with categorical variables. The ArcCosine distance is not sensitive to the amplitudes of the two vectors and gives the angle between them.
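The four metrics in (8) are straightforward to implement. The sketch below is an illustrative pure-Python version (the function names are ours, and the clamping of the cosine is a numerical safeguard not present in the formulas):

```python
import math

def d_eu(x, y):
    """Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def d_tc(x, y):
    """Tchebichef (maximum) distance."""
    return max(abs(a - b) for a, b in zip(x, y))

def d_ja(x, y):
    """Jaccard distance for real-valued vectors."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a * a for a in x) + sum(b * b for b in y) - sum(
        a * b for a, b in zip(x, y))
    return num / den

def d_ac(x, y):
    """ArcCosine distance: the angle between the two vectors."""
    cos = sum(a * b for a, b in zip(x, y)) / (
        math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y)))
    return math.acos(max(-1.0, min(1.0, cos)))
```

Note that `d_ac([2, 0], [4, 0])` is zero: the ArcCosine distance ignores amplitude, which is why series differing only by scale cluster together under this metric.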
A typical question is which distances to adopt, given the high number of possible expressions. Indeed, we tested other distances, such as the Manhattan, Canberra, Clark, Lorentzian and Sørensen distances [47], but expressions (8) proved the best for characterizing numerical series.

2.2.2. Hierarchical Clustering

The HC is a technique that compares a set of $N$ objects in an $n$-dim space $A$ and yields a graphical portrait highlighting their main similarities in the sense of some metric [18,48].
The algorithm starts by gathering the $N$ objects in a high-dimensional data-set $A$. After defining a metric for comparing the objects, an $N \times N$ matrix $D = [d_{ij}]$, $i, j = 1, \ldots, N$, of object-to-object distances is calculated. For a metric, the matrix $D$ is symmetric, with zeros in the main diagonal and positive values elsewhere. The HC uses the matrix $D$ to construct a graphical representation, consisting of a dendrogram or a hierarchical tree, that tries to reflect the input information. In this case, the objects are represented by 'leaves'.
The HC can use either the agglomerative or the divisive iterative clustering schemes. In the agglomerative case, each object starts in a separate cluster and the algorithm merges the most similar objects during the iterations. In the divisive case, all objects start in a common cluster and the successive iterations separate the most dissimilar ones. A linkage criterion, such as the maximum, minimum or average linkage, calculates the dissimilarity between clusters [49]. The clustering quality can be assessed by means of the cophenetic correlation [50]. When the cophenetic correlation is close to 1 (to 0) we have a good (weak) cluster representation of the original data.
Hereafter, we adopt the agglomerative clustering and the average-linkage [51,52], with the program Phylip http://evolution.genetics.washington.edu/phylip.html (accessed on 2 September 2021).
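Although the computations in the paper rely on the Phylip package, the agglomerative average-linkage scheme is simple enough to sketch in a few lines of illustrative Python (a naive $O(N^3)$ version of our own, for exposition only):

```python
def average_linkage(D):
    """Agglomerative clustering with average linkage on a symmetric
    distance matrix D (list of lists). Returns the merge steps as
    (members_of_cluster_a, members_of_cluster_b, linkage_distance)."""
    clusters = {i: [i] for i in range(len(D))}
    merges = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    # average of all object-to-object distances across the pair
                    d = sum(D[i][j] for i in clusters[a]
                            for j in clusters[b]) / (
                        len(clusters[a]) * len(clusters[b]))
                    if best is None or d < best[2]:
                        best = (a, b, d)
        a, b, d = best
        merges.append((list(clusters[a]), list(clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges
```

The merge sequence is exactly the information a dendrogram draws: which clusters join, and at what linkage height.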

2.2.3. Multidimensional Scaling

The MDS is a technique that reproduces approximately, in a space of dimension $n$, objects described in a space of dimension $m > n$. In this case, the objects are represented by 'points'.
The MDS calculates an $N \times N$ matrix $\tilde{D} = [\tilde{d}_{ij}]$, with distances $\tilde{d}_{ij}$ that try to reproduce the original ones. Usually, the algorithm minimizes some quadratic index, often called 'stress', and the problem is converted into the numerical optimization of an index such as $Stress = \left[ \sum_{i,j=1}^{N} \left( d_{ij} - \tilde{d}_{ij} \right)^2 \right]^{1/2}$. The iterations of the MDS algorithm stop either when there is no further significant reduction of the stress, or when the maximum number of iterations is reached. Usually, the dimensions $n = 2$ or $n = 3$ are adopted to produce a simple visualization. In what follows, $n = 3$ is selected because the loci allow a superior visualization, at the cost of requiring some successive operations of rotation, shift and amplification to achieve the best perspective.
The quality of the MDS loci, also called a 'map', can be assessed by means of the so-called Shepard and stress diagrams. The Shepard diagram plots $d_{ij}$ versus $\tilde{d}_{ij}$, and a low/high scatter means a good/poor match between the distances. The stress diagram plots $Stress$ versus the number of dimensions $n$. This representation gives a monotonically decreasing curve that usually reveals a significant reduction after the initial values.
The axes of the MDS representation have no physical meaning, and there is no good/bad interpretation of the high/low values of the point coordinates. Instead, the interpretation of the map follows the clusters and patterns formed by the points. As with the HC, users must test several distances in advance for capturing the characteristics of the data. In general, the MDS loci differ for each metric, but several distances may produce 'good' maps.
We calculate the MDS through the Matlab classical multidimensional scaling command cmdscale.
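For readers without Matlab, classical (Torgerson) MDS can be reproduced with a few lines of NumPy by double-centering the squared distance matrix and keeping the top eigenvectors; the raw stress index defined above can then be checked on the result. This is an illustrative sketch (the function names are ours), assuming NumPy is available:

```python
import numpy as np

def cmdscale_np(D, n=3):
    """Classical MDS: embed N objects with distance matrix D into n dims."""
    D = np.asarray(D, dtype=float)
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n]         # keep the n largest
    L = np.sqrt(np.clip(vals[idx], 0.0, None))
    return vecs[:, idx] * L                  # N x n point coordinates

def raw_stress(D, X):
    """Stress = [sum_{i,j} (d_ij - d~_ij)^2]^(1/2) for an embedding X."""
    Dhat = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return float(np.sqrt(((np.asarray(D, dtype=float) - Dhat) ** 2).sum()))
```

When the original distances are exactly Euclidean in $n$ dimensions, this embedding reproduces them and the stress is (numerically) zero.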

3. HC and MDS Analysis of Series

3.1. A Test-Bed of Numerical Series

We start by considering a test-bed of numerical series $S_i$, $i = 1, \ldots, 13$, as follows:

$$S_1: \quad \frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \ln 2,$$
$$S_2: \quad \frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \frac{\pi}{4},$$
$$S_3: \quad \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6},$$
$$S_4: \quad \frac{1}{1^4} + \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4} + \cdots = \frac{\pi^4}{90},$$
$$S_5: \quad \frac{1}{1^3} - \frac{1}{3^3} + \frac{1}{5^3} - \frac{1}{7^3} + \cdots = \frac{\pi^3}{32},$$
$$S_6: \quad \frac{1}{1^2} - \frac{1}{2^2} + \frac{1}{3^2} - \frac{1}{4^2} + \cdots = \frac{\pi^2}{12},$$
$$S_7: \quad \frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \frac{1}{7 \cdot 9} + \cdots = \frac{1}{2},$$
$$S_8: \quad \frac{1}{1 \cdot 3} + \frac{1}{2 \cdot 4} + \frac{1}{3 \cdot 5} + \frac{1}{4 \cdot 6} + \cdots = \frac{3}{4},$$
$$S_9: \quad 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \cdots = e,$$
$$S_{10}: \quad \frac{1}{1} + \frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + \cdots = 2,$$
$$S_{11}: \quad \frac{1}{1} + \frac{3}{4} + \left( \frac{3}{4} \right)^{2} + \left( \frac{3}{4} \right)^{3} + \cdots = 4,$$
$$S_{12}: \quad \frac{1}{1} + \frac{1}{4} + \frac{1}{4^2} + \frac{1}{4^3} + \cdots = \frac{4}{3},$$
$$S_{13}: \quad \frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \cdots .$$
The series $S_1$ to $S_{12}$ converge to known results, with the exception of the harmonic series $S_{13}$, which diverges. Nonetheless, all series are truncated at $r$ terms. Therefore, every truncated sum is finite, although not identical to the value of the non-truncated expression. For the sake of easing identification, we denote the series by their ideal values in the plots, while the harmonic series $S_{13}$ is denoted by H.
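As an illustration of how the objects are built, each series is reduced to the $1 \times r$ vector of its first $r$ terms, mirroring the construction of Section 2.2.1. The code below is our own sketch (names are ours), shown for three of the series:

```python
def series_vector(term, r):
    """First r terms of a series with general term term(k), k = 1..r."""
    return [term(k) for k in range(1, r + 1)]

r = 100
S1 = series_vector(lambda k: (-1) ** (k + 1) / k, r)   # partial sums -> ln 2
S3 = series_vector(lambda k: 1 / k ** 2, r)            # partial sums -> pi^2 / 6
S13 = series_vector(lambda k: 1 / k, r)                # harmonic, divergent
```

The truncated sums approach the ideal values (the alternating-series error for $S_1$ is below $1/(r+1)$), while the truncated harmonic series remains finite, as noted above.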
Figure 1 shows the HC portrayed by a dendrogram and a hierarchical tree for the series $S_i$, $i = 1, \ldots, 13$, using the ArcCosine distance, $d_{Ac}$, for $r = 100$.
Figure 2 and Figure 3 show the MDS 3-dim plots for the series $S_i$, $i = 1, \ldots, 13$, using the Euclidean, Tchebychev, Jaccard and ArcCosine distances, $d_{Eu}$, $d_{Tc}$, $d_{Ja}$ and $d_{Ac}$, for $r = 100$.
We verify that, for this test-bed, all distances and graphical representations lead to approximately identical conclusions. In global terms, we observe two sets, $\mathcal{A}$ and $\mathcal{B}$: a close placement of the $\pi$-related series $\mathcal{A} = \{S_2, S_3, S_4, S_5, S_6\}$, and some separation with regard to the series $\mathcal{B} = \{S_1, S_7, S_8, S_9, S_{10}, S_{11}, S_{12}, S_{13}\}$. As mentioned before, in spite of the truncation, they are denoted $\mathcal{A} = \{\frac{\pi}{4}, \frac{\pi^2}{6}, \frac{\pi^4}{90}, \frac{\pi^3}{32}, \frac{\pi^2}{12}\}$ and $\mathcal{B} = \{\ln 2, \frac{1}{2}, \frac{3}{4}, e, 2, 4, \frac{4}{3}, H\}$, respectively. We must note that the truncated harmonic series, H, leads to a finite sum. It is not obvious from these initial simple tests but, as we shall verify later, the ArcCosine distance, $d_{Ac}$, gives slightly better display results, and the MDS 3-dim loci allow a better visualization for a larger number of series.
We can also compare the results of using a different truncation order. Figure 4 depicts the MDS 3-dim plots for the series S i , i = 1 , , 13 , using the ArcCosine distance for r = 100 . We observe the minor influence of the truncation order r upon the clustering.
Of course, we could discuss further the pros and cons of this test-bed and, eventually, include other series. However, that is not the main purpose of this initial set of experiments, which merely intends to illustrate the proposed method.

3.2. Series Approximating Fractional Operators

In this sub-section, we compare the series approximating the fractional operators. Therefore, we apply the HC and MDS techniques to the series produced by the GL (or Euler) and Tustin schemes (3) and (4), abbreviated by the symbols $G^{\alpha}$ and $T^{\alpha}$, together with those arising from the generalized mean $M_q^{\alpha}$. We consider (i) the interval between the integer-order integral and derivative of order 1, that is, $-1 \le \alpha \le 1$ with steps of 0.1, (ii) the parameter value $p = 0.5$, and (iii) the cases $q = -1, 0, 1$ (i.e., the harmonic, geometric and arithmetic means) for $\alpha = -1, -0.5, 0.5, 1$.
Figure 5 shows the dendrogram and the hierarchical tree for the $N = 54$ numerical series using the ArcCosine distance, $d_{Ac}$, for $r = 100$. The series $G^{\alpha}$ and $T^{\alpha}$, with 21 objects each, are depicted in red and blue, respectively. The means $M_q^{\alpha}$, with 3 objects each, are colored in yellow, cyan, green and white for $\alpha = -1, -0.5, 0.5, 1$, respectively. Without loss of relevant information, some labels are omitted in several plots, since their overlap would worsen the visualization.
Figure 6 and Figure 7 show the MDS 3-dim plots of the $N = 54$ numerical series using the Euclidean, Tchebychev, Jaccard and ArcCosine distances, $d_{Eu}$, $d_{Tc}$, $d_{Ja}$ and $d_{Ac}$, for $r = 100$.
We note clearly that (i) the 3-dim MDS representation in association with the ArcCosine distance, $d_{Ac}$, is superior in the sense of yielding a clearer visualization; (ii) as expected, the cases with $\alpha = 0$ coincide; (iii) the $G^{\alpha}$ and $T^{\alpha}$ approximations differ, particularly visibly for the integral case (i.e., $\alpha < 0$); and (iv) the generalized mean $M_q^{\alpha}$, $q = -1, 0, 1$, interpolates between the $G^{\alpha}$ and $T^{\alpha}$ approximations.
In summary, the proposed method gives clear conclusions and provides a new perspective that complements the classical performance evaluation based on the time and frequency responses.

3.3. The Effect of Series Truncation

The MDS approach also allows assessing other aspects of the discrete approximations. In this perspective, Figure 8 compares the GL and Tustin series truncated at $r = 10$ and $r = 50$ with the ones obtained with $r = 100$, using the ArcCosine distance. To clarify the graphical representation, only the integer orders have labels. The straight lines connect plot marks corresponding to the same fractional order. We verify that small values of $r$ have a significant impact due to the truncation. The effect is particularly remarkable for the fractional integrals, being less significant for derivatives. We note also that, as expected, when $r$ increases, the effect of truncation diminishes.
In conclusion, the use of clustering methods is a relevant option for taking advantage of present day computational resources. While the paper focused mainly on the analysis of the series generated by expanding the fractional operators, we can address other problems such as comparing the time and frequency responses of closed-loop control systems in the presence of nonlinear dynamics. Another possible direction of study is also to explore other clustering algorithms that have been advanced recently [53,54,55].

4. Conclusions

This paper presented a new method for comparing numerical series based on advanced computational techniques. For this purpose, the association of several distances and clustering techniques was explored. The ArcCosine distance and the 3-dim MDS representation provided the best visualization for the series under assessment. First, a test-bed involving a collection of 13 well-known numerical series was examined. Afterwards, a collection of 54 objects, representing several discretizations of fractional operators stemming from the GL, Tustin and generalized-average schemes, was considered. The results show a minor variability between objects for derivatives, but a larger scattering in the case of integrals. Moreover, the generalized-average option gives an extra degree of freedom to adapt the discretization to each specific application. Additionally, the effect of series truncation was explored in light of the MDS representation. In the future, this technique can be further applied to analyze the performance of the complete closed-loop system under distinct controller and discretization designs.

Author Contributions

Conceptualization, J.T.M.; methodology, J.T.M.; software, J.T.M.; validation, J.T.M., A.M.G. and C.S.C.; investigation, J.T.M. and A.M.G.; resources, J.T.M. and A.M.G.; writing—original draft preparation, J.T.M., A.M.G. and C.S.C.; writing—review and editing, J.T.M., A.M.G. and C.S.C.; visualization, J.T.M.; supervision, J.T.M.; project administration, J.T.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Oldham, K.; Spanier, J. The Fractional Calculus: Theory and Application of Differentiation and Integration to Arbitrary Order; Academic Press: New York, NY, USA, 1974. [Google Scholar]
  2. Miller, K.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; John Wiley and Sons: New York, NY, USA, 1993. [Google Scholar]
  3. Magin, R. Fractional Calculus in Bioengineering; Begell House Inc.: Redding, CA, USA, 2006. [Google Scholar]
  4. Tarasov, V. Fractional Dynamics: Applications of Fractional Calculus to Dynamics of Particles, Fields and Media; Springer: New York, NY, USA, 2010. [Google Scholar]
  5. Mainardi, F. Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models; Imperial College Press: London, UK, 2010. [Google Scholar]
  6. Machado, J.T.; Kiryakova, V.; Mainardi, F. Recent history of fractional calculus. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 1140–1153. [Google Scholar] [CrossRef] [Green Version]
  7. Petráš, I. (Ed.) Handbook of Fractional Calculus with Applications: Applications in Control; De Gruyter Reference; De Gruyter: Berlin, Germany, 2019; Volume 6. [Google Scholar]
  8. Băleanu, D.; Lopes, A.M. (Eds.) Handbook of Fractional Calculus with Applications: Applications in Engineering, Life and Social Sciences, Part A; De Gruyter Reference; De Gruyter: Berlin, Germany, 2019; Volume 7. [Google Scholar]
  9. Băleanu, D.; Lopes, A.M. (Eds.) Handbook of Fractional Calculus with Applications: Applications in Engineering, Life and Social Sciences, Part B; De Gruyter Reference; De Gruyter: Berlin, Germany, 2019; Volume 8. [Google Scholar]
  10. Samko, S.; Kilbas, A.; Marichev, O. Fractional Integrals and Derivatives: Theory and Applications; Gordon and Breach Science Publishers: Amsterdam, The Netherlands, 1993. [Google Scholar]
  11. Kilbas, A.; Srivastava, H.; Trujillo, J. Theory and Applications of Fractional Differential Equations; North-Holland Mathematics Studies; Elsevier: Amsterdam, The Netherlands, 2006; Volume 204. [Google Scholar]
  12. Machado, J.T. Analysis and design of fractional-order digital control systems. Syst. Anal. Model. Simul. 1997, 27, 107–122. [Google Scholar]
  13. Machado, J.A.T. Fractional Derivatives: Probability Interpretation and Frequency Response of Rational Approximations. Commun. Nonlinear Sci. Numer. Simul. 2009, 14, 3492–3497. [Google Scholar] [CrossRef] [Green Version]
  14. Machado, J.A.T. The bouncing ball and the Grünwald-Letnikov definition of fractional derivative. Fract. Calc. Appl. Anal. 2021, 24, 1003–1014. [Google Scholar] [CrossRef]
  15. Kruskal, J. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 1964, 29, 1–27. [Google Scholar] [CrossRef]
  16. Kruskal, J.B.; Wish, M. Multidimensional Scaling; Sage Publications: Newbury Park, CA, USA, 1978. [Google Scholar]
  17. Sammon, J. A nonlinear mapping for data structure analysis. IEEE Trans. Comput. 1969, 18, 401–409. [Google Scholar] [CrossRef]
  18. Hartigan, J.A. Clustering Algorithms; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1975. [Google Scholar]
  19. Borg, I.; Groenen, P.J. Modern Multidimensional Scaling-Theory and Applications; Springer: New York, NY, USA, 2005. [Google Scholar]
  20. De Leeuw, J.; Mair, P. Multidimensional scaling using majorization: Smacof in R. J. Stat. Softw. 2009, 31, 1–30. [Google Scholar] [CrossRef] [Green Version]
  21. Shepard, R.N. The analysis of proximities: Multidimensional scaling with an unknown distance function. Psychometrika 1962, 27, 219–246. [Google Scholar] [CrossRef]
  22. Fernández, A.; Gómez, S. Solving non-uniqueness in agglomerative hierarchical clustering using multidendrograms. J. Classif. 2008, 25, 43–65. [Google Scholar] [CrossRef] [Green Version]
  23. Saeed, N.; Haewoon; Imtiaz, M.; Saqib, D.B.M. A Survey on Multidimensional Scaling. ACM Comput. Surv. 2018, 51, 47. [Google Scholar] [CrossRef] [Green Version]
  24. Machado, J.T. Multidimensional scaling analysis of fractional systems. Comput. Math. Appl. 2012, 64, 2966–2972. [Google Scholar] [CrossRef]
  25. Machado, J.T.; Lopes, A.M. Multidimensional scaling and visualization of patterns in prime numbers. Commun. Nonlinear Sci. Numer. Simul. 2020, 83, 105128. [Google Scholar] [CrossRef]
  26. Machado, J.A.T. Visualizing Non-Linear Control System Performance by Means of Multidimensional Scaling. J. Comput. Nonlinear Dyn. 2013, 8. [Google Scholar] [CrossRef] [Green Version]
  27. Machado, J.A.T.; Lopes, A.M. The N -link pendulum: Embedding nonlinear dynamics into the multidimensional scaling method. Chaos Solitons Fractals 2016, 89, 130–138. [Google Scholar] [CrossRef]
  28. Machado, J.T.; Lopes, A.M. Multidimensional scaling locus of memristor and fractional order elements. J. Adv. Res. 2020, 25, 147–157. [Google Scholar] [CrossRef]
  29. Kochubei, A.; Luchko, Y. (Eds.) Handbook of Fractional Calculus with Applications: Basic Theory; De Gruyter Reference; De Gruyter: Berlin, Germany, 2019; Volume 1. [Google Scholar]
  30. Kochubei, A.; Luchko, Y. (Eds.) Handbook of Fractional Calculus with Applications: Fractional Differential Equations; De Gruyter Reference; De Gruyter: Berlin, Germany, 2019; Volume 2. [Google Scholar]
  31. Machado, J.A.T.; Galhano, A.M.; Oliveira, A.M.; Tar, J.K. Approximating fractional derivatives through the generalized mean. Commun. Nonlinear Sci. Numer. Simul. 2009, 14, 3723–3730. [Google Scholar] [CrossRef] [Green Version]
  32. Lopes, A.M.; Machado, J.T. Multidimensional scaling analysis of generalized mean discrete-time fractional order controllers. Commun. Nonlinear Sci. Numer. Simul. 2021, 95, 105657. [Google Scholar] [CrossRef]
  33. Cilibrasi, R.; Vitany, P.M.B. Clustering by Compression. IEEE Trans. Inf. Theory 2005, 51, 1523–1545. [Google Scholar] [CrossRef] [Green Version]
  34. Cha, S. Taxonomy of Nominal Type Histogram Distance Measures. In Proceedings of the American Conference on Applied Mathematics, Cambridge, MA, USA, 24–26 March 2008; pp. 325–330. [Google Scholar]
  35. Deza, M.M.; Deza, E. Encyclopedia of Distances; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  36. Yin, C.; Chen, Y.; Yau, S.S.T. A measure of DNA sequence similarity by Fourier Transform with applications on hierarchical clustering Complexity for DNA Sequences. J. Theor. Biol. 2014, 359, 18–28. [Google Scholar] [CrossRef] [PubMed]
  37. Kubicova, V.; Provaznik, I. Relationship of Bacteria Using Comparison of Whole Genome Sequences in Frequency Domain. Inf. Technol. Biomed. 2014, 3, 397–408. [Google Scholar] [CrossRef]
  38. Glunčić, M.; Paar, V. Direct mapping of symbolic DNA sequence into frequency domain in global repeat map algorithm. Nucleic Acids Res. 2013, 41. [Google Scholar] [CrossRef] [Green Version]
  39. Hamming, R.W. Error Detecting and Error Correcting Codes. Bell Syst. Tech. J. 1950, 29, 147–160. [Google Scholar] [CrossRef]
  40. Hautamaki, V.; Pollanen, A.; Kinnunen, T.; Aik, K.; Haizhou, L.; Franti, L. A Comparison of Categorical Attribute Data Clustering Methods; Springer: Berlin/Heidelberg, Germany, 2014; pp. 53–62. [Google Scholar] [CrossRef] [Green Version]
  41. Hu, L.Y.; Huang, M.W.; Ke, S.W.; Tsai, C.F. The distance function effect on k-nearest neighbor classification for medical datasets. SpringerPlus 2016, 5, 1–9. [Google Scholar] [CrossRef] [Green Version]
  42. Aziz, M.; Alhadidi, D.; Mohammed, N. Secure approximation of edit distance on genomic data. BMC Med. Genom. 2017, 10. [Google Scholar] [CrossRef] [Green Version]
  43. Yianilos, P.N. Normalized Forms of Two Common Metrics; Technical Report Report 91-082-9027-1; NEC Research Institute: Kovilpatti, India, 1991. [Google Scholar]
  44. Yu, J.; Amores, J.; Sebe, N.; Tian, Q. A new study on distance metrics as similarity measurement. In Proceedings of the IEEE International Conference on Multimedia and Expo, Toronto, ON, Canada, 9–12 July 2006; pp. 533–536. [Google Scholar] [CrossRef] [Green Version]
  45. Guyon, I.; Gunn, S.; Nikravesh, M.; Zadeh, L.A. (Eds.) Feature Extraction: Foundations and Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  46. Russel, R.; Sinha, P. Perceptually Based Comparison of Image Similarity Metrics. Perception 2011, 40, 1269–1281. [Google Scholar] [CrossRef] [Green Version]
  47. Cha, S.H. Measures between Probability Density Functions. Int. J. Math. Model. Methods Appl. Sci. 2007, 1, 300–307. [Google Scholar]
  48. Tenreiro, J.A.; Machado, A.M.L.; Galhano, A.M. Multidimensional scaling visualization using parametric similarity indices. Entropy 2015, 17, 1775–1794. [Google Scholar] [CrossRef] [Green Version]
  49. Aggarwal, C.C.; Hinneburg, A.; Keim, D.A. On the Surprising Behavior of Distance Metrics in High Dimensional Space; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  50. Sokal, R.R.; Rohlf, F.J. The comparison of dendrograms by objective methods. Taxon 1962, 33–40. [Google Scholar] [CrossRef]
  51. Felsenstein, J. PHYLIP (Phylogeny Inference Package), Version 3.5 c; University of Washington: Seattle, WA, USA, 1993. [Google Scholar]
  52. Tuimala, J. A primer to Phylogenetic Analysis Using the PHYLIP Package; CSC—Scientific Computing Ltd.: Fairfax, VA, USA, 2006. [Google Scholar]
  53. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  54. McInnes, L.; Healy, J.; Melville, J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv 2020, arXiv:1802.03426. [Google Scholar]
  55. Lopes, A.M.; Machado, J.A.T. Uniform Manifold Approximation and Projection Analysis of Soccer Players. Entropy 2021, 23, 793. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Dendrogram (top) and tree (bottom) representations of the numerical series $S_i$, $i = 1, \ldots, 13$, produced by the ArcCosine distance for $r = 100$.
Figure 2. MDS loci of the numerical series $S_i$, $i = 1, \ldots, 13$, produced by the Euclidean (top) and Tchebychev (bottom) distances, $d_{Eu}$ and $d_{Tc}$, for $r = 100$.
Figure 3. MDS loci of the numerical series $S_i$, $i = 1, \ldots, 13$, produced by the Jaccard (top) and ArcCosine (bottom) distances, $d_{Ja}$ and $d_{Ac}$, for $r = 100$.
Figure 4. MDS loci of the numerical series $S_i$, $i = 1, \ldots, 13$, produced by the ArcCosine distance, $d_{Ac}$, for $r = 100$.
Figure 5. Dendrogram (top) and tree (bottom) representations of the $N = 54$ numerical series produced by the ArcCosine distance, $d_{Ac}$, for $r = 100$.
Figure 6. MDS loci of the $N = 54$ numerical series produced by the Euclidean (top) and Tchebychev (bottom) distances, $d_{Eu}$ and $d_{Tc}$, for $r = 100$.
Figure 7. MDS loci of the $N = 54$ numerical series produced by the Jaccard (top) and ArcCosine (bottom) distances, $d_{Ja}$ and $d_{Ac}$, for $r = 100$.
Figure 8. MDS loci comparing the GL and Tustin series for $r = 10$ (top) and $r = 50$ (bottom) with the ones for $r = 100$, using the ArcCosine distance, $d_{Ac}$.