Besides common multi-view clustering, feature selection can help to build accurate models in data clustering [6], [7]. Y. M. Xu et al. (2016) combined weighted multi-view clustering with feature selection in the WMCFS algorithm, which performs multi-view data clustering and feature selection within a single framework [6]. However, the feature selection computation may increase the cost of converging their objective function.

Their weighting scheme for views and features can add an extra computation cost. The same idea has been extended in [7], where an efficient gradient-based optimization algorithm is embedded into the k-means algorithm. H. Liu et al. (2017) used weighted k-means clustering of a binary matrix to decrease the time complexity of their Spectral Ensemble Clustering from O(InrK) to O(n), together with its space complexity [8]; a generic sketch of this binary-matrix trick follows.
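As a hedged illustration of the binary-matrix idea (a minimal sketch of the general principle, not the authors' exact weighted formulation in [8]): each sample is represented by the concatenated one-hot encodings of r base partitions, and plain k-means is run on that sparse binary matrix. All names and parameters below are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def binary_ensemble_matrix(base_partitions):
    """Concatenate one-hot encodings of r base partitions into a single
    binary matrix of shape (n_samples, total number of base clusters)."""
    blocks = []
    for labels in base_partitions:
        labels = np.asarray(labels)
        k = labels.max() + 1
        blocks.append(np.eye(k)[labels])  # (n, k) one-hot block per partition
    return np.hstack(blocks)

# Illustrative usage: three base partitions of the same 6 samples.
base_partitions = [
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 2, 2],
]
B = binary_ensemble_matrix(base_partitions)
# Plain k-means stands in for the weighted k-means used in [8];
# K=2 final clusters is an arbitrary choice for the example.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(B)
print(labels)
```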

Re-weighting such as [9]; auto-weighting techniques such as [10], [11]; and bi-level weighting such as [12], [13] are some useful weighting techniques.

1.1. Multi-view learning methods

• Co-Training Style Algorithms

Co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data. C. Lee and T. Liu (2016) introduced a new pairwise co-training procedure in [14].

They used the augmented view to separately guide the improvement of each view, then updated the augmented view and repeated this process iteratively. In this way, the proposed algorithm overcomes the inefficiency of conventional co-training.

• Multi-Kernel Learning

Multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine the kernels, either linearly or non-linearly, to improve learning performance. D. Guo et al. (2014) combined kernel matrix learning and spectral clustering optimization into a single step [15]; a minimal kernel-combination sketch is given below.
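As a rough sketch of the linear-combination case only (not the joint kernel-weight optimization of [15]), the code below builds one RBF kernel per view, mixes them with fixed weights, and hands the combined affinity to spectral clustering. The weights, bandwidth, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

def combined_kernel(views, weights, gamma=1.0):
    """Linearly combine one RBF kernel per view: K = sum_v w_v * K_v."""
    n = views[0].shape[0]
    K = np.zeros((n, n))
    for X_v, w_v in zip(views, weights):
        K += w_v * rbf_kernel(X_v, gamma=gamma)
    return K

# Illustrative usage: two random views of the same 20 samples.
rng = np.random.default_rng(0)
views = [rng.normal(size=(20, 5)), rng.normal(size=(20, 3))]
K = combined_kernel(views, weights=[0.6, 0.4])
# SpectralClustering accepts a precomputed affinity (kernel) matrix.
labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(K)
print(labels)
```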

The algorithm in [15] can not only determine the kernel weights but also cluster the multi-view data simultaneously. Y. Ye et al. (2016) proposed a three-step alternating algorithm to tackle the multi-view clustering problem through co-regularized kernel k-means [16]. From the authors' perspective, kernel k-means clustering tries to find the cluster assignment that minimizes the sum-of-squares loss between the samples and the cluster centroids. Their algorithm optimizes the consensus embedding and automatically determines the contribution of each individual embedding to the consensus one. The single-view problem is formulated as follows:
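(The formula itself did not survive extraction; below is the standard single-view kernel k-means objective matching the description above, with \phi the kernel-induced feature map. The co-regularized formulation of [16] adds per-view agreement terms on top of this.)

```latex
\min_{\{\mathcal{C}_k\}_{k=1}^{K}} \;
\sum_{k=1}^{K} \sum_{x_i \in \mathcal{C}_k}
\left\lVert \phi(x_i) - \mu_k \right\rVert^2,
\qquad
\mu_k = \frac{1}{\lvert \mathcal{C}_k \rvert}
\sum_{x_j \in \mathcal{C}_k} \phi(x_j)
```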

• Multi-View Graph Clustering

Multi-view graph clustering has recently attracted a lot of interest, and an arsenal of algorithms [17]-[24] has been proposed. Limin Li (2014) generalized a single-view penalized graph (SPGraph) clustering approach to a multi-view penalized graph (MPGraph) version to integrate the structural and chemical views of drug data in [17]. They used Laplacian eigenmaps with an extra penalty term to cluster; a sketch of the underlying single-view pipeline follows. Their proposed scheme is quite similar to the idea of multi-view co-regularized spectral clustering.
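For orientation, here is the plain, unpenalized single-view Laplacian-eigenmaps-plus-k-means pipeline that MPGraph builds on; the k-nearest-neighbor graph and all parameter choices are illustrative assumptions, not details of [17].

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

def laplacian_eigenmap_clustering(X, n_clusters, n_neighbors=10):
    """Embed with the bottom eigenvectors of the graph Laplacian,
    then run k-means on the embedding (standard spectral clustering)."""
    # Symmetrized k-nearest-neighbor affinity graph.
    A = kneighbors_graph(X, n_neighbors=n_neighbors, mode="connectivity")
    A = 0.5 * (A + A.T).toarray()
    # Normalized graph Laplacian.
    L = laplacian(A, normed=True)
    # The bottom n_clusters eigenvectors form the embedding.
    _, vecs = eigh(L, subset_by_index=[0, n_clusters - 1])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)

# Illustrative usage on random data.
rng = np.random.default_rng(0)
labels = laplacian_eigenmap_clustering(rng.normal(size=(30, 4)), n_clusters=3)
print(labels)
```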

In [18], X. Zhang et al. (2016) addressed the drawbacks of nonnegative matrix factorization (NMF) based multi-view clustering algorithms, and then proposed one via graph-regularized symmetric nonnegative matrix factorization in [19]. W. Zhuge et al. (2017) incorporated a new graph-learning mechanism into feature extraction and added an interaction between the learned graph and the low-dimensional representations. The proposed feature extraction with structured graph (FESG) scheme learns both a transformation matrix and an ideal structured graph containing the clustering information in the unsupervised single-view setting [20]. Furthermore, they extended this framework to multiple-view feature extraction with structured graph (MFESG), which learns an optimal weight for each view automatically without requiring an additional parameter.
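To make "symmetric NMF" concrete, here is a minimal multiplicative-update sketch for the plain objective min_{H >= 0} ||A - H H^T||_F^2 on a similarity matrix A. The graph-regularization term of [19] is omitted, and the 0.5 damping follows a commonly used update rule.

```python
import numpy as np

def symmetric_nmf(A, rank, n_iter=200, eps=1e-10):
    """Multiplicative updates for min_{H >= 0} ||A - H H^T||_F^2.
    A must be a symmetric nonnegative similarity matrix."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    H = rng.random((n, rank))
    for _ in range(n_iter):
        # Damped update: H <- H * (0.5 + 0.5 * (A H) / (H H^T H)).
        num = A @ H
        den = H @ (H.T @ H) + eps
        H *= 0.5 + 0.5 * num / den
    return H

# Illustrative usage: cluster = argmax over the factor's columns.
rng = np.random.default_rng(1)
X = rng.random((12, 4))
A = X @ X.T                      # simple nonnegative similarity matrix
H = symmetric_nmf(A, rank=2)
print(H.argmax(axis=1))          # cluster assignments
```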

More graph clustering methods are discussed in [21], [22], [23], [24].

• Multi-View Subspace Clustering

Subspace learning algorithms aim to obtain a latent subspace shared by multiple views, by assuming that the input views are generated from this latent subspace, and then cluster the data points accordingly. H. Gao et al. (2015) proposed an algorithm that performs clustering on the subspace representation of each view simultaneously in [25]. X. Cao et al. (2015) extended existing subspace clustering to the multi-view domain and used the Hilbert-Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations [26]; a short sketch of the empirical HSIC estimator is given below.
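The (biased) empirical HSIC estimator behind such diversity terms is compact enough to show in full; the linear kernels below are an illustrative choice, and kernelized variants simply substitute other Gram matrices.

```python
import numpy as np

def hsic(Z1, Z2):
    """Biased empirical HSIC between two representations of the same
    n samples: HSIC = (n-1)^{-2} * tr(K H L H), H = I - (1/n) 11^T."""
    n = Z1.shape[0]
    K = Z1 @ Z1.T                 # linear kernel on representation 1
    L = Z2 @ Z2.T                 # linear kernel on representation 2
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Illustrative usage: HSIC is near zero for independent representations.
rng = np.random.default_rng(0)
print(hsic(rng.normal(size=(50, 3)), rng.normal(size=(50, 3))))
```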

More reading about subspace clustering can be found in [27]-[32].

• Multi-Task Multi-View Clustering

X. Zhang et al. (2016) proposed two multi-task multi-view clustering algorithms: the bipartite-graph-based multi-task multi-view clustering algorithm (BMTMVC) for nonnegative data, and the semi-nonnegative matrix tri-factorization based multi-task multi-view clustering algorithm (SMTMVC) for data with negative feature values [33]. The proposed framework is based on co-clustering; the tri-factorization shape it relies on is sketched below.
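For readers unfamiliar with the term, a semi-nonnegative matrix tri-factorization generally takes the following shape (a generic convention, not necessarily the exact constraints of [33]):

```latex
X \approx F S G^{\top}, \qquad G \ge 0
```

Here X is the data matrix, G plays the role of a nonnegative cluster indicator, S is a small mixing matrix, and "semi" indicates that X and the remaining factor may contain negative entries.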

A hierarchical clustering multi-task learning (HC-MTL) method has been proposed in [34]. Three challenges in the area motivated the authors: 1) ignoring the existence of latent relatedness among actions; 2) ignoring the grouping information among actions; 3) the difficulty of discovering grouping information. They formulated their objective as a group-wise least-squares loss regularized by low rank and sparsity, with respect to two latent variables, the model parameters and the grouping information, for joint optimization; a generic objective of this family is shown below. However, the proposed method cannot jointly leverage multimodal and multi-view information to discover even more latent correlations among different actions.
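As a generic, hypothetical instance of "least squares regularized by low rank and sparsity" (not the authors' exact formulation, which additionally optimizes over the grouping variable), one can write:

```latex
\min_{W} \; \sum_{i=1}^{T} \bigl\lVert X_i w_i - y_i \bigr\rVert_2^2
\; + \; \lambda \lVert W \rVert_{*}
\; + \; \gamma \lVert W \rVert_{1}
```

where W = [w_1, ..., w_T] stacks the per-task model parameters, the nuclear norm ||W||_* is the standard convex surrogate for low rank, and ||W||_1 promotes sparsity.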

2. Background

2.1. Multi-view Clustering

Multi-view clustering consists of integrating multiple feature sets together to perform clustering. One can say that, when clustering multi-view data, we can either perform feature extraction, change the information-interaction process, or build a consensus among many single-view clusterings.

Generally speaking, multi-view clustering partitions a dataset into groups by simultaneously considering multiple representations (views) of the same instances. In real applications, k-means clustering, spectral clustering, kernel-based clustering, graph-based clustering, and hierarchical clustering have been widely used; a minimal consensus-style sketch follows this section.

• The correctness of views: How to know whether a view is correct is crucial for MvC. Because MvC exploits all available views to improve clustering performance, incorrect views are very harmful. Although some work leverages these views with weights, errors could still be propagated from a misleading view to other views. Thus, this problem must be solved, or mitigated to a great extent, to ensure that MvC is effective.
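As a minimal illustration of the consensus route described above (a deliberately naive sketch, not a specific algorithm from this survey): cluster each view independently, average the per-view co-association matrices, and cluster the averaged affinity.

```python
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def consensus_multiview_kmeans(views, n_clusters):
    """Naive consensus multi-view clustering: k-means per view,
    average the co-association matrices, then spectral clustering."""
    n = views[0].shape[0]
    coassoc = np.zeros((n, n))
    for X_v in views:
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X_v)
        # Pairwise agreement matrix for this view's partition.
        coassoc += (labels[:, None] == labels[None, :]).astype(float)
    coassoc /= len(views)  # fraction of views agreeing on each pair
    return SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed"
    ).fit_predict(coassoc)

# Illustrative usage: two random views of the same 30 instances.
rng = np.random.default_rng(0)
views = [rng.normal(size=(30, 6)), rng.normal(size=(30, 4))]
print(consensus_multiview_kmeans(views, n_clusters=3))
```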