Pattern Recognition, Chapter 10 (V): OTHER CLUSTERING ALGORITHMS (WJ-CH10 teaching slides, English edition)


OTHER CLUSTERING ALGORITHMS
- The following types of algorithms will be considered:
  - Graph-theory-based clustering algorithms.
  - Competitive learning algorithms.
  - Valley-seeking clustering algorithms.
  - Cost-optimization clustering algorithms based on:
    - the branch-and-bound approach,
    - the simulated annealing methodology,
    - deterministic annealing,
    - genetic algorithms.
  - Density-based clustering algorithms.
  - Clustering algorithms for high-dimensional data sets.

GRAPH THEORY BASED CLUSTERING ALGORITHMS
- In principle, such algorithms are capable of detecting clusters of various shapes, at least when the clusters are well separated.
- In the sequel we discuss algorithms that are based on:
  - the Minimum Spanning Tree (MST),
  - regions of influence,
  - directed trees.

MINIMUM SPANNING TREE (MST) ALGORITHMS
- Preliminaries:
  - Let G be the complete graph, each node of which corresponds to a point of the data set X.
  - Let e = (xi, xj) denote an edge of G connecting xi and xj, and we = d(xi, xj) denote the weight of the edge e.
- Definitions:
  - Two edges e1 and e2 are k steps away from each other if the minimum path that connects a vertex of e1 and a vertex of e2 contains k - 1 edges.
  - A Spanning Tree of G is a connected graph that:
    - contains all the vertices of the graph,
    - has no loops.
  - The weight of a Spanning Tree is the sum of the weights of its edges.
  - A Minimum Spanning Tree (MST) of G is a spanning tree with minimum weight (when all the we's are different from each other, the MST is unique).
  - For an MST edge e, let me and σe denote the mean and the standard deviation of the weights of the edges lying at most k steps away from e. The edge e is characterized as inconsistent if we lies more than q standard deviations σe away from me; removing the inconsistent edges breaks the MST into the clusters.
- Example 1: For the MST in the figure and for k = 2 and q = 3 we have:
  - For e0: we0 = 17, me0 = 2.3, σe0 = 0.95. we0 lies 15.5·σe0 away from me0; hence e0 is inconsistent.
  - For e11: we11 = 3, me11 = 2.5, σe11 = 2.12. we11 lies 0.24·σe11 away from me11; hence e11 is consistent.
- Remarks:
  - The algorithm depends on the choices of k and q.
  - It is insensitive to the order in which the data points are considered.
  - No initial conditions are required and no convergence aspects are involved.
  - The algorithm works well in many cases where the clusters are well separated.
  - A problem may occur when a "large" edge e has another "large" edge as its neighbor. In this case, e is likely not to be characterized as inconsistent, and the algorithm may fail to unravel the underlying clustering structure correctly.
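To make the MST procedure concrete, here is a minimal NumPy sketch, assuming Euclidean distances and the inconsistency test we > me + q·σe described above; the edge-level breadth-first search is one simple way to collect the edges at most k steps away, and all function names are illustrative.

```python
import numpy as np

def mst_edges(X):
    """Prim's algorithm on the complete Euclidean graph; returns a list of
    MST edges (i, j, w). O(N^3) here, which is fine for a demonstration."""
    N = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    in_tree, edges = {0}, []
    while len(in_tree) < N:
        w, i, j = min((D[i, j], i, j)
                      for i in in_tree for j in range(N) if j not in in_tree)
        edges.append((i, j, w))
        in_tree.add(j)
    return edges

def inconsistent(edges, k=2, q=3.0):
    """Indices of MST edges whose weight exceeds m_e + q*sigma_e, where m_e
    and sigma_e are computed over the edges at most k steps away."""
    adj = {}                                   # vertex -> incident edge indices
    for t, (i, j, _) in enumerate(edges):
        adj.setdefault(i, []).append(t)
        adj.setdefault(j, []).append(t)
    bad = []
    for t, (_, _, w) in enumerate(edges):
        frontier, seen = {t}, {t}
        for _ in range(k):                     # breadth-first over edges, k steps
            nxt = {s for e in frontier for v in edges[e][:2] for s in adj[v]}
            nxt -= seen
            seen |= nxt
            frontier = nxt
        nbr = [edges[e][2] for e in seen if e != t]
        if nbr and w > np.mean(nbr) + q * np.std(nbr):
            bad.append(t)
    return bad

def mst_clusters(X, k=2, q=3.0):
    """Remove the inconsistent MST edges; connected components = clusters."""
    edges = mst_edges(X)
    bad = set(inconsistent(edges, k, q))
    parent = list(range(len(X)))               # union-find over kept edges
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for t, (i, j, _) in enumerate(edges):
        if t not in bad:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(X))]
```

For well-separated clusters the between-cluster MST edges are exactly the ones flagged as inconsistent, so each connected component of the pruned tree is one cluster.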

ALGORITHMS BASED ON REGIONS OF INFLUENCE (ROI)
- Definition: The region of influence of two distinct vectors xi, xj ∈ X is defined as
  R(xi, xj) = {x : cond(d(x, xi), d(x, xj), d(xi, xj)), xi ≠ xj},
  where cond(d(x, xi), d(x, xj), d(xi, xj)) may be defined as:
  (a) max{d(x, xi), d(x, xj)} < d(xi, xj),
  (b) d²(x, xi) + d²(x, xj) < d²(xi, xj),
  (c) (d²(x, xi) + d²(x, xj) < d²(xi, xj)) OR (σ · min{d(x, xi), d(x, xj)} < d(xi, xj)),
  (d) (max{d(x, xi), d(x, xj)} < d(xi, xj)) OR (σ · min{d(x, xi), d(x, xj)} < d(xi, xj)),
  where σ affects the size of the ROI defined by xi and xj and is called the relative edge consistency.
- An edge is placed between xi and xj if no other vector of X lies inside R(xi, xj); the resulting graph conveys the clustering structure.
- Remarks:
  - The algorithm is insensitive to the order in which the pairs are considered.
  - In choices (c) and (d) of cond, σ must be chosen a priori.
  - For the resulting graphs:
    - if choice (a) is used for cond, they are called relative neighborhood graphs (RNGs);
    - if choice (b) is used for cond, they are called Gabriel graphs (GGs).
  - Several results show that better clusterings are produced when conditions (c) and (d) are used in place of cond, instead of (a) and (b).
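A small sketch of the ROI graph construction, assuming the rule stated above (an edge joins xi and xj iff no third point falls inside R(xi, xj)), with conditions (a) and (b) implemented; choices (c) and (d) would add the σ·min{...} disjunct. Clusters can then be read off the connected components of the returned edge list.

```python
import numpy as np

def roi_graph(X, cond="rng"):
    """Build the region-of-influence graph for condition (a) ('rng', giving a
    relative neighborhood graph) or (b) ('gg', giving a Gabriel graph)."""
    N = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    edges = []
    for i in range(N):
        for j in range(i + 1, N):
            blocked = False
            for k in range(N):
                if k in (i, j):
                    continue
                if cond == "rng":   # (a): max{d(x,xi), d(x,xj)} < d(xi,xj)
                    inside = max(D[k, i], D[k, j]) < D[i, j]
                else:               # (b): d^2(x,xi) + d^2(x,xj) < d^2(xi,xj)
                    inside = D[k, i]**2 + D[k, j]**2 < D[i, j]**2
                if inside:          # some point lies in R(xi, xj): no edge
                    blocked = True
                    break
            if not blocked:
                edges.append((i, j))
    return edges
```

Note that the Gabriel region (a disk with xi and xj as diameter endpoints) is contained in the RNG "lune", so the GG has at least the edges of the RNG and tends to merge clusters more readily.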

ALGORITHMS BASED ON DIRECTED TREES
- Definitions:
  - A directed graph is a graph whose edges are directed.
  - A set of edges ei1, ..., eiq constitutes a directed path from a vertex A to a vertex B if:
    - A is the initial vertex of ei1,
    - B is the final vertex of eiq,
    - the destination vertex of the edge eij, j = 1, ..., q - 1, is the departure vertex of the edge ei,j+1.
    (In figure (a) the sequence e1, e2, e3 constitutes a directed path connecting the vertices A and B.)
  - In the sequel, ρi(θ) = {x ∈ X : d(x, xi) ≤ θ, x ≠ xi} denotes the neighborhood of xi, ni the number of points of X in ρi(θ), and gij = (nj - ni) / d(xi, xj).
- Clustering algorithm based on directed trees:
  - Set θ to a specific value.
  - Determine ni, i = 1, ..., N.
  - Compute gij, i, j = 1, ..., N, i ≠ j.
  - For i = 1 to N
    - If ni = 0 then
      - xi is the root of a new directed tree.
    - Else
      - Determine xr such that gir = max{xj ∈ ρi(θ)} gij.
      - If gir > 0 then
        - xr is the parent of xi (there exists a directed edge from xi to xr).
      - (The cases gir < 0 and gir = 0 are treated separately; see the remarks below.)
  - End For
- Remarks:
  - The root xi of a directed tree is the point of ρi(θ) with the most dense neighborhood.
  - The branch that handles the case gir = 0 ensures that no cycles occur.
  - The algorithm is sensitive to the order in which the data points are considered.
  - For a proper choice of θ and large N, this scheme behaves as a mode-seeking algorithm (see a later section).
- Example 2: In the figure below, the size of the edge of the grid is 1 and θ = 1.1. The above algorithm gives the directed trees shown in the figure.
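A sketch of the directed-tree construction under the definitions above (ρi(θ), ni, gij); the delicate gir = 0 tie handling of the full algorithm is simplified here to "remain a root". An edge is only drawn when gir > 0, i.e., towards a strictly denser neighborhood, so no cycles can arise in this simplified version.

```python
import numpy as np

def directed_trees(X, theta):
    """Return parent[i]: the index x_i points to, or -1 if x_i is a root."""
    N = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # rho_i(theta): indices of the neighbors of x_i within distance theta
    neigh = [np.where((D[i] <= theta) & (np.arange(N) != i))[0] for i in range(N)]
    n = np.array([len(v) for v in neigh])       # n_i = |rho_i(theta)|
    parent = np.full(N, -1)                     # -1 marks a tree root
    for i in range(N):
        if n[i] == 0:
            continue                            # isolated point: its own root
        g = (n[neigh[i]] - n[i]) / D[i, neigh[i]]   # g_ij over the neighborhood
        r = neigh[i][np.argmax(g)]
        if g.max() > 0:
            parent[i] = r                       # directed edge x_i -> x_r
        # g.max() <= 0: x_i stays a root (the g_ir = 0 case needs extra care
        # in the full algorithm)
    return parent

def cluster_labels(parent):
    """Each directed tree is one cluster; label points by their root."""
    def root(i):
        while parent[i] != -1:
            i = parent[i]
        return i
    return [root(i) for i in range(len(parent))]
```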

COMPETITIVE LEARNING ALGORITHMS
- The main idea:
  - Employ a set of representatives wj (in the sequel we consider only point representatives).
  - Move them to regions of the vector space that are "dense" in vectors of X.
- Comments:
  - In general, the representatives are updated each time a new vector x ∈ X is presented to the algorithm (pattern-mode algorithms).
  - These algorithms do not necessarily stem from the optimization of a cost function.
- The strategy: for a given vector x,
  - all representatives compete with each other;
  - the winner (the representative that lies closest to x) moves towards x;
  - the losers (the rest of the representatives) either remain unchanged or move towards x at a much slower rate.
- These ideas are formalized by a generalized competitive learning scheme (GCLS) with four parts: (A) initialization, (B) determination of the winner on the current input x, (C) possible creation of a new representative when x is dissimilar to the winner, and (D) updating of the representatives, of the form
  wj(t) = wj(t-1) + η h(x, wj) (x - wj(t-1)), if wj is the winner on x,
  wj(t) = wj(t-1) + η′ h(x, wj) (x - wj(t-1)), otherwise.
- Remarks:
  - h(x, wi) is an appropriately defined function (see below).
  - η and η′ are the learning rates controlling the updating of the winner and of the losers, respectively (η′ may differ from loser to loser).
  - A (carefully chosen) threshold of similarity Θ controls the similarity between x and a representative wj: if d(x, wj) > Θ, for some distance measure, x and wj are considered dissimilar.
  - The termination criterion is ||W(t) - W(t-1)|| < ε (or t > tmax, the maximum allowable number of iterations).
  - After termination, each x ∈ X is assigned to the cluster whose representative wj lies closest to x.
  - d(·) may be the Euclidean distance or another distance (e.g., the Itakura-Saito distortion). Similarity measures may also be used (in this case min is replaced by max).

BASIC COMPETITIVE LEARNING ALGORITHM
- Here h(x, wj) = 1 and only the winner is updated (η′ = 0):
  wj(t) = wj(t-1) + η (x - wj(t-1)),
  where the learning rate η takes values in [0, 1].
- Remarks:
  - In this scheme the losers remain unchanged.
  - The winner, after the updating, lies on the line segment formed by wj(t-1) and x.
  - A priori knowledge of the number of clusters is required.
  - If a representative is initialized far away from the regions where the points of X lie, it will never win. Possible solution: initialize all representatives using vectors of X.
  - Versions of the algorithm with a variable learning rate have also been studied.
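A minimal sketch of the basic competitive learning scheme: only the winner moves, wj(t) = wj(t-1) + η(x - wj(t-1)), and the representatives are initialized on points of X to avoid dead units. The values of η and the epoch count are illustrative.

```python
import numpy as np

def basic_competitive(X, m, eta=0.05, epochs=50, seed=0):
    """Basic competitive learning: m point representatives, winner-take-all."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), m, replace=False)].astype(float)  # init on data
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:                    # pattern mode
            j = np.argmin(np.linalg.norm(W - x, axis=1))        # winner on x
            W[j] += eta * (x - W[j])                            # move toward x
    # final hard assignment: nearest representative
    labels = [int(np.argmin(np.linalg.norm(W - x, axis=1))) for x in X]
    return W, labels
```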

LEAKY LEARNING ALGORITHM
- The same as the basic competitive learning algorithm, except for part (D), the updating equation of the representatives, which becomes
  wj(t) = wj(t-1) + ηw (x - wj(t-1)), if wj is the winner on x,
  wj(t) = wj(t-1) + ηl (x - wj(t-1)), otherwise,
  where ηw and ηl are learning rates in [0, 1] with ηw >> ηl.
- Remarks:
  - All representatives move towards x, but the losers move at a much slower rate than the winner does.
  - The algorithm does not suffer from the problem of poor initialization of the representatives (why?).
  - An algorithm in the same spirit is the "neural-gas" algorithm, where ηl varies from loser to loser and decays as the corresponding representative lies farther from x. This algorithm results from the optimization of a cost function.

CONSCIENTIOUS COMPETITIVE LEARNING ALGORITHMS
- Main idea: discourage a representative from winning if it has won many times in the past, by assigning a "conscience" to each representative.
- A simple implementation:
  - Equip each representative wj, j = 1, ..., m, with a counter fj that counts the times that wj wins.
  - At part (A) (the initialization stage) of GCLS, set fj = 1, j = 1, ..., m.
  - Define the distance d*(x, wj) as d*(x, wj) = d(x, wj) fj (the distance is penalized, to discourage representatives that have won many times).
  - Part (B) becomes: the representative wj is the winner on x if d*(x, wj) = min{k=1,...,m} d*(x, wk); in that case, fj(t) = fj(t-1) + 1.
  - Parts (C) and (D) are the same as in the basic competitive learning algorithm.
  - Also, m = minit = mmax.
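The following sketch combines the two variants just discussed: leaky learning (losers also move, with ηl << ηw) and the conscience mechanism (the winner is selected on the penalized distance d*(x, wj) = d(x, wj)·fj). Parameter values are illustrative.

```python
import numpy as np

def leaky_conscience(X, m, eta_w=0.05, eta_l=0.001, conscience=True,
                     epochs=50, seed=0):
    """Leaky competitive learning with an optional conscience mechanism."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), m, replace=False)].astype(float)
    f = np.ones(m)                                  # win counters f_j
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(W - x, axis=1)
            # winner on the penalized distance d*(x,w_j) = d(x,w_j) * f_j
            j = np.argmin(d * f) if conscience else np.argmin(d)
            f[j] += 1                               # w_j wins once more
            for k in range(m):                      # winner fast, losers slow
                eta = eta_w if k == j else eta_l
                W[k] += eta * (x - W[k])
    return W
```

With `conscience=False` and `eta_l > 0` this is plain leaky learning; with `eta_l = 0` it reduces to the basic scheme, which makes the effect of each ingredient easy to compare on the same data.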

SELF-ORGANIZING MAPS (SOM)
- With each representative wj a neighborhood of representatives, Qj(t), is associated, which shrinks as t increases. If wj wins on the current input x, all the representatives in Qj(t) are updated (the Self-Organizing Map (SOM) scheme).
- SOM (in its simplest version) may be viewed as a special case of GCLS if:
  - parts (A), (B) and (C) are defined as in the basic competitive learning scheme;
  - in part (D), if wj wins on x, the updating equation becomes
    wk(t) = wk(t-1) + η(t) (x - wk(t-1)), for all wk ∈ Qj(t),
    where η(t) is a variable learning rate satisfying certain conditions.
- After convergence, neighboring representatives also lie "close" in terms of their distance in the vector space (topographical ordering) (see fig. (d)).
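A toy 1-D lattice SOM, in which Qj(t) is realized as a Gaussian neighborhood over the representative indices that shrinks with t, and η(t) decays linearly; both schedules are assumptions for illustration, not the slides' prescription.

```python
import numpy as np

def som_1d(X, m, epochs=30, seed=0):
    """SOM with m representatives on a 1-D index lattice 0..m-1."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), m, replace=False)].astype(float)
    idx = np.arange(m)
    for t in range(epochs):
        eta = 0.5 * (1 - t / epochs) + 0.01           # decaying learning rate
        width = max(m / 2 * (1 - t / epochs), 0.5)    # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            j = np.argmin(np.linalg.norm(W - x, axis=1))       # winner
            h = np.exp(-((idx - j) ** 2) / (2 * width ** 2))   # Q_j(t) weights
            W += eta * h[:, None] * (x - W)                    # update neighbors
    return W
```

Because updates are shared across lattice neighbors, representatives adjacent in index end up adjacent in the vector space, which is the topographical ordering mentioned above.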

SUPERVISED LEARNING VECTOR QUANTIZATION (VQ)
- In this case each cluster is treated as a class (m compact classes are assumed) and the available vectors have known class labels.
- The goal: use a set of m representatives and place them in such a way that each class is "optimally" represented.
- The simplest version of VQ (LVQ1) may be obtained from GCLS as follows:
  - Parts (A), (B) and (C) are the same as in the basic competitive learning scheme.
  - In part (D) the updating of the wj's is carried out as
    wj(t) = wj(t-1) + η (x - wj(t-1)), if wj wins on x and x belongs to the j-th class,
    wj(t) = wj(t-1) - η (x - wj(t-1)), if wj wins on x and x does not belong to the j-th class.
- In words, wj is moved:
  - toward x, if wj wins and x belongs to the j-th class;
  - away from x, if wj wins and x does not belong to the j-th class.
  All other representatives remain unaltered.
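A sketch of LVQ1 as described in words above: the winner moves toward x when their class labels agree and away from x otherwise, while all other representatives stay put. One representative per class is an assumption made for brevity.

```python
import numpy as np

def lvq1(X, y, eta=0.05, epochs=50, seed=0):
    """LVQ1 with one representative per class; y holds the class labels."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # initialize each representative on a randomly chosen point of its class
    W = np.array([X[rng.choice(np.where(y == c)[0])] for c in classes], float)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            x = X[i]
            j = np.argmin(np.linalg.norm(W - x, axis=1))   # winner
            sign = 1.0 if classes[j] == y[i] else -1.0     # attract or repel
            W[j] += sign * eta * (x - W[j])
    return W, classes
```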

VALLEY-SEEKING CLUSTERING ALGORITHMS
- Let V(xi) = {y : d(xi, y) ≤ a, y ≠ xi} be the local region (window) around xi, and let kqi denote the number of vectors of cluster Cq that lie in V(xi).
- Valley-seeking algorithm:
  - Fix a.
  - Fix the number of clusters m.
  - Define an initial clustering of X.
  - Repeat
    - For i = 1 to N
      - Find j : kji = max{q=1,...,m} kqi.
      - Set ci = j.
    - End For
    - For i = 1 to N
      - Assign xi to cluster Cci.
    - End For
  - Until no reclustering of vectors occurs.
- In other words, the algorithm:
  - moves a window {y : d(x, y) ≤ a} at x and counts the points from the different clusters that fall in it;
  - assigns x to the cluster with the larger number of points in the window (the cluster that corresponds to the highest local pdf).
  That is, the boundary is moved away from the "winning" cluster (a close similarity with Parzen windows).
- Remarks:
  - The algorithm is sensitive to a. It is suggested to perform several runs for different values of a.
  - The algorithm is of a mode-seeking nature (if more than enough clusters are initially appointed, some of them will end up empty).
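A direct transcription of the valley-seeking iteration, with kqi computed as the number of points of cluster Cq inside the window {y : d(xi, y) ≤ a}; new labels are computed for all points from the old clustering and then applied at once, matching the two-loop pseudocode above.

```python
import numpy as np

def valley_seeking(X, labels, a, m, max_iter=100):
    """Valley-seeking reclustering; labels is the initial assignment in 0..m-1."""
    labels = np.array(labels)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    inside = (D <= a) & ~np.eye(len(X), dtype=bool)      # window membership
    for _ in range(max_iter):
        new = labels.copy()
        for i in range(len(X)):
            # k_qi: points of each cluster inside the window around x_i
            counts = [np.sum(labels[inside[i]] == q) for q in range(m)]
            if max(counts) > 0:
                new[i] = int(np.argmax(counts))          # winning cluster
        if np.array_equal(new, labels):                  # no reclustering occurs
            break
        labels = new
    return labels
```

Running it from the same data with several values of `a`, as the remark suggests, is cheap because the distance matrix can be reused.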

CLUSTERING VIA COST OPTIMIZATION (REVISITED)

BRANCH AND BOUND CLUSTERING ALGORITHMS
- They compute the globally optimal solution to combinatorial problems.
- They avoid exhaustive search via the employment of a monotonic criterion J.
  - Monotonic criterion J: if k vectors of X have been assigned to clusters, the assignment of an extra vector to a cluster does not decrease the value of J.
- Consider the following 3-vector, 2-class case:
  - "121": the 1st and 3rd vectors belong to class 1 and the 2nd vector belongs to class 2 (a leaf of the tree).
  - "12x": the 1st vector belongs to class 1, the 2nd vector belongs to class 2 and the 3rd vector is unassigned (a partial clustering, i.e., a node of the tree).
- How exhaustive search is avoided:
  - Let B be the best value of the criterion J computed so far.
  - If at a node of the tree the corresponding value of J is greater than B, no further search is performed for any of the subsequent descendants springing from this node.
- Let Cr = {c1, ..., cr}, 1 ≤ r ≤ N, denote a partial clustering, where ci ∈ {1, 2, ..., m}, ci = j if the vector xi belongs to cluster Cj, and xr+1, ..., xN are as yet unassigned.
- For compact clusters and a fixed number of clusters m, a suitable cost function is
  J(Cr) = Σ{i=1..r} ||xi - mci(Cr)||²,
  where mci(Cr) is the mean vector of the cluster Cci, computed over the nj(Cr) vectors among x1, ..., xr that belong to cluster Cj.
- Initialization:
  - Start from the initial node and go down to a leaf. Let B be the cost of the corresponding clustering (initially set B = +∞).
- Main stage: start from the initial node of the tree and go down until either
  (i) a leaf is encountered:
    - If the cost B′ of the corresponding clustering C′ is smaller than B, then set B = B′ and keep C′ as the best clustering found so far.
  or (ii) a node q with a value of J greater than B is encountered. Then:
    - No subsequent clustering branching from q is considered.
    - Backtrack to the parent of q, qpar, in order to span a different path.
    - If all paths branching from qpar have already been considered, move to the grandparent of q.
  Terminate when all possible paths have been considered, explicitly or implicitly.
- Remarks:
  - Variations of the above algorithm where much tighter bounds of B are used (that is, many more clusterings are rejected without explicit consideration) have also been proposed.
  - A disadvantage of the algorithm is the excessive (and unpredictable) amount of required computational time.
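A compact depth-first branch-and-bound sketch over the clustering tree described above, pruning a node as soon as J of the partial clustering reaches the best complete cost B found so far; J(Cr) is the sum of squared distances of the assigned vectors to their current cluster means, which is monotone in r, so pruning is safe. Worst-case time is exponential, so this is only sensible for very small N and m.

```python
import numpy as np

def partial_cost(X, assign):
    """J(C_r): squared distances of assigned vectors to their cluster means."""
    J = 0.0
    for j in set(assign):
        pts = X[[i for i, c in enumerate(assign) if c == j]]
        J += ((pts - pts.mean(axis=0)) ** 2).sum()
    return J

def bb_cluster(X, m):
    """Globally optimal assignment of the points of X into m clusters."""
    N = len(X)
    best = {"J": np.inf, "assign": None}
    def search(assign):
        J = partial_cost(X, assign)
        if J >= best["J"]:
            return                        # prune: J can only grow deeper down
        if len(assign) == N:              # leaf: a complete clustering
            best["J"], best["assign"] = J, list(assign)
            return
        for j in range(m):                # branch on the next vector
            search(assign + [j])
    search([0])                           # fix x1 in cluster 0 (symmetry break)
    return best["assign"], best["J"]
```

The symmetry-breaking first call cuts the tree by a factor of m; tighter lower bounds than the bare partial cost would prune far more, as the remark notes.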

SIMULATED ANNEALING
- It guarantees (under certain conditions), in probability, the computation of the globally optimal solution of the problem at hand, via the minimization of a cost function J.
- It may escape from local minima, since it allows moves that may temporarily increase the value of J.
- Definitions:
  - An important parameter of the algorithm is the "temperature" T, which starts at a high value and is reduced gradually.
  - A sweep is the time the algorithm spends at a given temperature so that the system can enter "thermal equilibrium".
- Notation:
  - Tmax is the initial value of the temperature T.
  - Cinit is the initial clustering.
  - C is the current clustering.
  - t is the current sweep.
- The algorithm:
  - Set T = Tmax and C = Cinit.
  - t = 0
  - Repeat
    - t = t + 1
    - Repeat
      - Compute J(C).
      - Produce a new clustering, C′, by assigning a randomly chosen vector from X to a different cluster.
      - Compute J(C′).
      - If ΔJ = J(C′) - J(C) < 0, then accept the move (C = C′); otherwise accept it with probability exp(-ΔJ / T).
    - Until the system is considered to have reached "thermal equilibrium" at the current temperature.
    - Reduce T.
  - Until T reaches a sufficiently small value.
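A sketch of simulated annealing for clustering, following the loop above with a sum-of-squared-errors J; the geometric cooling schedule T ← 0.9T and the number of moves per sweep are illustrative assumptions, since the excerpt does not specify them.

```python
import numpy as np

def sa_cluster(X, m, T_max=10.0, T_min=0.01, moves_per_sweep=None, seed=0):
    """Simulated annealing over hard assignments of X into m clusters."""
    rng = np.random.default_rng(seed)
    N = len(X)
    sweeps = moves_per_sweep or 20 * N
    labels = rng.integers(0, m, N)                 # C_init: random clustering

    def J(lab):                                    # sum-of-squared-errors cost
        cost = 0.0
        for j in range(m):
            pts = X[lab == j]
            if len(pts):
                cost += ((pts - pts.mean(axis=0)) ** 2).sum()
        return cost

    cur, T = J(labels), T_max
    while T > T_min:
        for _ in range(sweeps):                    # one sweep at temperature T
            i = rng.integers(N)
            old = labels[i]
            labels[i] = (old + 1 + rng.integers(m - 1)) % m   # different cluster
            new = J(labels)
            dJ = new - cur
            if dJ < 0 or rng.random() < np.exp(-dJ / T):
                cur = new                          # accept (possibly uphill)
            else:
                labels[i] = old                    # reject: undo the move
        T *= 0.9                                   # cool down
    return labels, cur
```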

THE DENCLUE ALGORITHM
- Preprocessing stage:
  - The data set is covered by hypercubes of side 2σ. Dp denotes the set of the populated cubes and Dsp ⊆ Dp the set of the highly populated cubes (those containing more than ξc > 1 points of X).
  - For each c ∈ Dsp, define a connection with every neighboring cube cj in Dp for which d(mc, mcj) is no greater than 4σ, where mc and mcj are the means of c and cj, respectively.
- Main stage:
  - Determine the set Dr that contains:
    - the highly populated cubes, and
    - the cubes that have at least one connection with a highly populated cube.
  - For each point x in a cube c ∈ Dr, determine Y(x), the set of points that belong to cubes cj in Dr whose mean values lie at a distance of less than 4σ from x (the factor 4 being the typical choice).

47、x(typically=4).57vThe DENCLUE Algorithm(cont.)DENCLUE algorithm(cont.)For each point x in a cube cDrApply a hill climbing method starting from x and let x*be the local maximum to which the method converges.If x*is a significant local maximum(f X(x*)then-If a cluster C associated with x*has already b

48、een created thenox is assigned to C-ElseoCreate a cluster C associated with x*oAssign x to C-End ifEnd ifEnd for58vThe DENCLUE Algorithm(cont.)Remarks:Shortcuts allow the assignment of points to clusters,without having to apply the hill-climbing procedure.DENCLUE is able to detect arbitrarily shaped

49、 clusters.The algorithm deals with noise very satisfactory.The worst-case time complexity of DENCLUE is O(Nlog2N).Experimental results indicate that the average time complexity is O(log2N).It works efficiently with high-dimensional data.59CLUSTERING ALGORITHMS FOR HIGH-CLUSTERING ALGORITHMS FOR HIGH
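A sketch of the core of DENCLUE: hill climbing on a Gaussian kernel density estimate, here implemented as a mean-shift style fixed-point iteration (an assumption; the original uses a gradient-based scheme), with points whose attractor density falls below ξ labeled as noise. The cube-based preprocessing and the shortcuts mentioned above are omitted.

```python
import numpy as np

def density(x, X, sigma):
    """Gaussian kernel estimate f_X(x), up to a normalizing constant."""
    return np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma ** 2)).sum()

def hill_climb(x, X, sigma, tol=1e-6, max_iter=200):
    """Climb to the density attractor x* of x via mean-shift iterations."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * sigma ** 2))
        x_new = (w[:, None] * X).sum(axis=0) / w.sum()   # mean-shift step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

def denclue_labels(X, sigma, xi, attractor_tol=1e-1):
    """One cluster per significant density attractor; -1 marks noise."""
    attractors, labels = [], []
    for x in X:
        x_star = hill_climb(x, X, sigma)
        if density(x_star, X, sigma) < xi:
            labels.append(-1)                   # insignificant maximum: noise
            continue
        for k, a in enumerate(attractors):      # match an existing attractor
            if np.linalg.norm(a - x_star) < attractor_tol:
                labels.append(k)
                break
        else:
            attractors.append(x_star)           # new attractor: new cluster
            labels.append(len(attractors) - 1)
    return labels
```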

CLUSTERING ALGORITHMS FOR HIGH-DIMENSIONAL DATA SETS
- What is a high-dimensional space? A dimensionality l of the input space with 20 ≤ l ≤ a few thousand indicates a high-dimensional data set.
- Problems with considering all dimensions simultaneously in high-dimensional data sets: the "curse of dimensionality". As a fixed number of points is spread over a space of increasing dimensionality, the data become increasingly sparse.
