
Document Clustering Based on the k-means Algorithm

Introduction

Given a collection of documents, how do we cluster them? I use the k-means clustering method; there are plenty of resources online covering its algorithmic idea and the underlying mathematics.
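For reference, textbook k-means chooses cluster assignments C_1, ..., C_k and centroids μ_1, ..., μ_k to minimize the within-cluster sum of squared distances, J = Σ_j Σ_{d ∈ C_j} ||d − μ_j||². (The implementation below actually uses cosine distance rather than squared Euclidean distance, so this formula describes the textbook variant, not exactly what the code optimizes.)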

For document clustering, the documents must first be vectorized, that is, encoded. One can use one-hot encoding, TF-IDF encoding, doc2vec, and so on; in any case, each document has to be turned into a vector.
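As a concrete illustration of the TF-IDF option, here is a minimal sketch of the standard weighting tf-idf(t, d) = tf(t, d) · log(N / df(t)). This is my own illustrative code; the TfIdfEncoder used later in this post may apply a different weighting variant.

public class TfIdfSketch {
  /** tf-idf(t, d) = tf(t, d) * log(N / df(t)) */
  public static double tfIdf(int termCountInDoc, int docLength, int numDocs, int docFreq) {
    double tf = (double) termCountInDoc / docLength;   // term frequency within the document
    double idf = Math.log((double) numDocs / docFreq); // inverse document frequency
    return tf * idf;
  }

  public static void main(String[] args) {
    // a term appearing 3 times in a 100-token document, present in 10 of 1000 documents
    System.out.println(tfIdf(3, 100, 1000, 10)); // prints roughly 0.138
  }
}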

The baseline used here is k-means document clustering. The source code it builds on is at: https://github.com/Hazoom/documents-k-means

I made several improvements on top of that source code.


Input Data Format

[Figure: a sample of the input file]


The first column of the input file is the document title; the second column is the text after removing high-frequency words, stop words, and low-frequency words.
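For illustration only (this line is made up, not taken from my data set), a line of the input file might look like:

title_001 economy market policy growth investment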

Source Code

First, I changed the document representation, because my data differs from the author's JSON data.

package com.clustering;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.StringTokenizer;

/** Class for storing a collection of documents to be clustered. */
public class DocumentList implements Iterable<Document> {
  private final List<Document> documents = new ArrayList<Document>();
  private int numFeatures;

  /** Construct an empty DocumentList. */
  public DocumentList() {
  }

  /**
   * Construct a DocumentList by parsing the input string. The input string may contain multiple
   * document records. Each record must be delimited by curly braces {}.
   */
  /*public DocumentList(String input) {
    StringTokenizer st = new StringTokenizer(input, "{");
    int numDocuments = st.countTokens() - 1;
    String record = st.nextToken(); // skip empty split to left of {
    for (int i = 0; i < numDocuments; i++) {
      record = st.nextToken();
      Document document = Document.createDocument(record);
      if (document != null) {
        documents.add(document);
      }
    }
  }*/

  /** Construct a DocumentList from a GBK-encoded file: one document per line, title first. */
  public DocumentList(String input) throws IOException {
    BufferedReader reader = new BufferedReader(
        new InputStreamReader(new FileInputStream(new File(input)), "gbk"));
    String s = null;
    int i = 0;
    while ((s = reader.readLine()) != null) {
      String[] arry = s.split(" ");
      String title = arry[0];                                 // first column: title
      String content = s.substring(arry[0].length()).trim();  // rest of line: tokens
      Document document = new Document(i, content, title);
      documents.add(document);
      i++;
    }
    reader.close();
  }

  /** Add a document to the DocumentList. */
  public void add(Document document) {
    documents.add(document);
  }

  /** Clear all documents from the DocumentList. */
  public void clear() {
    documents.clear();
  }

  /** Mark all documents as not being allocated to a cluster. */
  public void clearIsAllocated() {
    for (Document document : documents) {
      document.clearIsAllocated();
    }
  }

  /** Get a particular document from the DocumentList. */
  public Document get(int index) {
    return documents.get(index);
  }

  /** Get the number of features used to encode each document. */
  public int getNumFeatures() {
    return numFeatures;
  }

  /** Determine whether DocumentList is empty. */
  public boolean isEmpty() {
    return documents.isEmpty();
  }

  @Override
  public Iterator<Document> iterator() {
    return documents.iterator();
  }

  /** Set the number of features used to encode each document. */
  public void setNumFeatures(int numFeatures) {
    this.numFeatures = numFeatures;
  }

  /** Get the number of documents within the DocumentList. */
  public int size() {
    return documents.size();
  }

  /** Sort the documents within the DocumentList by document ID. */
  public void sort() {
    Collections.sort(documents);
  }

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    for (Document document : documents) {
      sb.append(" ");
      sb.append(document.toString());
      sb.append("\n");
    }
    return sb.toString();
  }
}

Next, I modified KMeansClusterer as follows: I wanted to specify k myself, whereas the original author only provided a method that tunes the value of k automatically.

package com.clustering;

import java.util.Random;

/** A Clusterer implementation based on k-means clustering. */
public class KMeansClusterer implements Clusterer {
  private static final Random RANDOM = new Random();
  private final double clusteringThreshold;
  private final int clusteringIterations;
  private final DistanceMetric distance;

  /**
   * Construct a Clusterer.
   *
   * @param distance the distance metric to use for clustering
   * @param clusteringThreshold the threshold used to determine the number of clusters k
   * @param clusteringIterations the number of iterations to use in k-means clustering
   */
  public KMeansClusterer(DistanceMetric distance, double clusteringThreshold,
      int clusteringIterations) {
    this.distance = distance;
    this.clusteringThreshold = clusteringThreshold;
    this.clusteringIterations = clusteringIterations;
  }

  /**
   * Allocate any unallocated documents in the provided DocumentList to the nearest cluster in the
   * provided ClusterList.
   */
  private void allocateUnallocatedDocuments(DocumentList documentList, ClusterList clusterList) {
    for (Document document : documentList) {
      if (!document.isAllocated()) {
        Cluster nearestCluster = clusterList.findNearestCluster(distance, document);
        nearestCluster.add(document);
      }
    }
  }

  /**
   * Run k-means clustering on the provided documentList. Number of clusters k is set to the lowest
   * value that ensures the intracluster to intercluster distance ratio is below
   * clusteringThreshold.
   */
  @Override
  public ClusterList cluster(DocumentList documentList) {
    ClusterList clusterList = null;
    for (int k = 1; k <= documentList.size(); k++) {
      clusterList = runKMeansClustering(documentList, k);
      if (clusterList.calcIntraInterDistanceRatio(distance) < clusteringThreshold) {
        break;
      }
    }
    return clusterList;
  }

  /** Create a cluster with the unallocated document that is furthest from the existing clusters. */
  private Cluster createClusterFromFurthestDocument(DocumentList documentList,
      ClusterList clusterList) {
    Document furthestDocument = clusterList.findFurthestDocument(distance, documentList);
    Cluster nextCluster = new Cluster(furthestDocument);
    return nextCluster;
  }

  /** Create a cluster with a single randomly selected document from the provided DocumentList. */
  private Cluster createClusterWithRandomlySelectedDocument(DocumentList documentList) {
    int rndDocIndex = RANDOM.nextInt(documentList.size());
    Cluster initialCluster = new Cluster(documentList.get(rndDocIndex));
    return initialCluster;
  }

  /** Run k-means clustering on the provided DocumentList for a fixed number of clusters k. */
  public ClusterList runKMeansClustering(DocumentList documentList, int k) {
    ClusterList clusterList = new ClusterList();
    documentList.clearIsAllocated();
    // seed the first cluster randomly, then grow to k clusters via furthest documents
    clusterList.add(createClusterWithRandomlySelectedDocument(documentList));
    while (clusterList.size() < k) {
      clusterList.add(createClusterFromFurthestDocument(documentList, clusterList));
    }
    for (int iter = 0; iter < clusteringIterations; iter++) {
      allocateUnallocatedDocuments(documentList, clusterList);
      clusterList.updateCentroids();
      if (iter < clusteringIterations - 1) {
        clusterList.clear();
      }
    }
    return clusterList;
  }
}

package com.clustering;

/**
 * An interface defining a Clusterer. A Clusterer groups documents into Clusters based on similarity
 * of their content.
 */
public interface Clusterer {
  /** Cluster the provided list of documents, choosing the number of clusters automatically. */
  public ClusterList cluster(DocumentList documentList);

  /** Cluster the provided list of documents into a user-specified number of clusters k. */
  public ClusterList runKMeansClustering(DocumentList documentList, int k);
}

The Clusterer interface thus exposes two clustering methods: one that determines the number of clusters k automatically, and one that lets the user specify k.
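For illustration, assuming the classes shown above and a DocumentList built as before, the two modes would be invoked like this (a fragment, not a complete program):

Clusterer clusterer = new KMeansClusterer(new CosineDistance(), 0.5, 30);
// mode 1: k chosen automatically via the intra/inter distance ratio threshold
ClusterList autoK = clusterer.cluster(documentList);
// mode 2: user-specified k
ClusterList fixedK = clusterer.runKMeansClustering(documentList, 30);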


Result Output

This part is a class I wrote myself to output the clustering results along with the word "probabilities" per cluster (computed directly as the word's frequency within that cluster); the number of top words to output is configurable. The code is as follows:
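Concretely, the "probability" that gets written out is just a relative frequency: P(w | cluster) = count(w in cluster) / (total number of tokens in the cluster), which is what oneClusterWordPro() below computes.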

package com.clustering;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

public class OutPutFile {

  /** Write the concatenated documents of each cluster, one cluster per line. */
  public static void outputDocument(String strDir, ClusterList clusterList) throws IOException {
    BufferedWriter writer = new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(new File(strDir)), "gbk"));
    for (Cluster cluster : clusterList) {
      String text = "";
      for (Document doc : cluster.getDocuments()) {
        text += doc.getContents() + " ";
      }
      writer.write(text + "\n");
    }
    writer.close();
  }

  /** Write the clustering result using ClusterList's string representation. */
  public static void outputCluster(String strDir, ClusterList clusterList) throws IOException {
    BufferedWriter writer = new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(new File(strDir)), "gbk"));
    writer.write(clusterList.toString());
    writer.close();
  }

  /** Write the top-k words of each cluster together with their in-cluster frequencies. */
  public static void outputClusterWordPro(String strDir, ClusterList clusterList, int topword)
      throws IOException {
    BufferedWriter writer = new BufferedWriter(
        new OutputStreamWriter(new FileOutputStream(new File(strDir)), "gbk"));
    Hashtable<Integer, String> clusterdocumentlist = new Hashtable<Integer, String>();
    int clusterid = 0;
    for (Cluster cluster : clusterList) {
      String text = "";
      for (Document doc : cluster.getDocuments()) {
        text += doc.getContents() + " ";
      }
      clusterdocumentlist.put(clusterid, text);
      clusterid++;
    }
    for (Integer key : clusterdocumentlist.keySet()) {
      writer.write("Topic" + key + "\n");
      List<Entry<String, Double>> list = oneClusterWordPro(clusterdocumentlist.get(key));
      int count = 0;
      for (Map.Entry<String, Double> mapping : list) {
        if (count < topword) {
          writer.write(" " + mapping.getKey() + " " + mapping.getValue() + "\n");
          count++;
        } else {
          break;
        }
      }
    }
    writer.close();
  }

  /** Count word frequencies within one cluster and sort them in descending order. */
  public static List<Entry<String, Double>> oneClusterWordPro(String text) {
    Hashtable<String, Integer> wordCount = new Hashtable<String, Integer>();
    String[] arry = text.split("\\s+");
    // count raw word occurrences
    for (int i = 0; i < arry.length; i++) {
      if (!wordCount.containsKey(arry[i])) {
        wordCount.put(arry[i], Integer.valueOf(1));
      } else {
        wordCount.put(arry[i], Integer.valueOf(wordCount.get(arry[i]).intValue() + 1));
      }
    }
    // convert counts to relative frequencies
    Hashtable<String, Double> wordpro = new Hashtable<String, Double>();
    for (java.util.Map.Entry<String, Integer> j : wordCount.entrySet()) {
      String key = j.getKey();
      double value = 1.0 * j.getValue() / arry.length;
      wordpro.put(key, value);
    }
    // convert map.entrySet() to a list and sort by frequency, descending
    List<Map.Entry<String, Double>> list =
        new ArrayList<Map.Entry<String, Double>>(wordpro.entrySet());
    Collections.sort(list, new Comparator<Map.Entry<String, Double>>() {
      public int compare(Entry<String, Double> o1, Entry<String, Double> o2) {
        return o2.getValue().compareTo(o1.getValue());
      }
    });
    return list;
  }
}

Main Method

package web.main;

import java.io.IOException;
import com.clustering.ClusterList;
import com.clustering.Clusterer;
import com.clustering.CosineDistance;
import com.clustering.DistanceMetric;
import com.clustering.DocumentList;
import com.clustering.Encoder;
import com.clustering.KMeansClusterer;
import com.clustering.OutPutFile;
import com.clustering.TfIdfEncoder;

/**
 * Solution for Newsle Clustering question from CodeSprint 2012. This class implements clustering of
 * text documents using Cosine or Jaccard distance between the feature vectors of the documents
 * together with k-means clustering. The number of clusters is adapted so that the ratio of the
 * intracluster to intercluster distance is below a specified threshold.
 */
public class ClusterDocumentsArgs {
  private static final int CLUSTERING_ITERATIONS = 30;
  private static final double CLUSTERING_THRESHOLD = 0.5;
  private static final int NUM_FEATURES = 10000;
  private static final int k = 30; // user-defined number of clusters

  /**
   * Cluster the text documents in the provided file. The clustering process consists of parsing and
   * encoding documents, and then using Clusterer with a specific Distance measure.
   */
  public static void main(String[] args) throws IOException {
    String fileinput = "/home/qianyang/kmeans/webdata/content";
    DocumentList documentList = new DocumentList(fileinput);
    Encoder encoder = new TfIdfEncoder(NUM_FEATURES);
    encoder.encode(documentList);
    System.out.println(documentList.size());
    DistanceMetric distance = new CosineDistance();
    Clusterer clusterer = new KMeansClusterer(distance, CLUSTERING_THRESHOLD, CLUSTERING_ITERATIONS);
    ClusterList clusterList = clusterer.runKMeansClustering(documentList, k);
    // ClusterList clusterList = clusterer.cluster(documentList); // alternative: automatic k
    // write the clustering result
    OutPutFile.outputCluster("/home/qianyang/kmeans/result/cluster" + k, clusterList);
    // write the top-k words of each cluster
    OutPutFile.outputClusterWordPro("/home/qianyang/kmeans/result/wordpro" + k + "and10", clusterList, 10);
    OutPutFile.outputClusterWordPro("/home/qianyang/kmeans/result/wordpro" + k + "and15", clusterList, 15);
    OutPutFile.outputClusterWordPro("/home/qianyang/kmeans/result/wordpro" + k + "and20", clusterList, 20);
    OutPutFile.outputClusterWordPro("/home/qianyang/kmeans/result/wordpro" + k + "and25", clusterList, 25);
  }
}

The figure below shows the result; we can see which documents are grouped under each cluster.

[Figure: documents grouped under each cluster]

The figure below shows the word frequencies within each cluster.

[Figure: word frequencies within each cluster]

If the top-k words obtained from raw frequencies are not discriminative enough, you can apply TF-IDF once more on top of them; I won't go into further detail here.
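As a sketch of what that could look like (my own illustrative code, not part of the project above): treat each cluster's concatenated text as one "document" and re-weight words by TF-IDF across clusters, so that words common to every cluster drop down the ranking.

import java.util.*;

public class ClusterTfIdf {
  /** clusterTexts: one whitespace-separated token string per cluster, as built in OutPutFile. */
  public static Map<String, Double> tfIdfForCluster(List<String> clusterTexts, int index) {
    String[] tokens = clusterTexts.get(index).split("\\s+");
    Map<String, Integer> counts = new HashMap<>();
    for (String t : tokens) counts.merge(t, 1, Integer::sum);

    Map<String, Double> weights = new HashMap<>();
    int n = clusterTexts.size();
    for (Map.Entry<String, Integer> e : counts.entrySet()) {
      String word = e.getKey();
      // document frequency: number of clusters containing the word (unoptimized linear scan)
      int df = 0;
      for (String text : clusterTexts) {
        if (Arrays.asList(text.split("\\s+")).contains(word)) df++;
      }
      double tf = (double) e.getValue() / tokens.length;
      weights.put(word, tf * Math.log((double) n / df));
    }
    return weights; // sort entries by value, descending, to get the top-k words
  }
}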
