python - Using clustering to find all potentially similar documents in a list of documents

I am working with the Quora question pairs CSV file, which I loaded into a pandas DataFrame and from which I isolated the qid and question columns, so my questions are in this form:

0        What is the step by step guide to invest in sh...
1        What is the step by step guide to invest in sh...
2        What is the story of Kohinoor (Koh-i-Noor) Dia...
3        What would happen if the Indian government sto...
.....
19408    What are the steps to solve this equation: [ma...
19409                           Is IMS noida good for BCA?
19410              How good is IMS Noida for studying BCA?

My actual dataset is much larger (500k questions), but I will use these to illustrate my problem.

I want to find pairs of questions that are highly likely to be asking the same thing. I thought of a naive approach: turn each sentence into a vector with doc2vec, then for each sentence compute its cosine similarity against every other sentence, keep the match with the highest similarity, and finally print all pairs whose similarity is above some threshold. The problem is that this takes far too long to finish, so I need another approach.
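For concreteness, here is a minimal sketch of that naive approach (with TF-IDF standing in for doc2vec; the 0.8 threshold and the sample questions are just placeholders), which shows why it blows up: the similarity matrix alone is n × n.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = ["What is the step by step guide to invest in shares?",
             "Is IMS Noida good for BCA?",
             "How good is IMS Noida for studying BCA?"]

X = TfidfVectorizer(stop_words="english").fit_transform(questions)
sim = cosine_similarity(X)      # n x n matrix -- infeasible for 500k questions
np.fill_diagonal(sim, 0)        # ignore each question's similarity to itself
best = sim.argmax(axis=1)       # most similar other question per row
for i, j in enumerate(best):
    if sim[i, j] > 0.8:         # placeholder threshold
        print(questions[i], "<->", questions[j], sim[i, j])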

I then found an answer to another question that suggested using clustering to solve a similar problem. So below is the code I implemented based on that answer.

"Load and transform the dataframe to a new one with only question ids and questions"
train_df = pd.read_csv("test.csv", encoding='utf-8')

questions_df=pd.wide_to_long(train_df,['qid','question'],i=['id'],j='drop')
questions_df=questions_df.drop_duplicates(['qid','question'])[['qid','question']]
questions_df.sort_values("qid", inplace=True)
questions_df=questions_df.reset_index(drop=True)

print(questions_df['question'])

# vectorization of the texts
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(questions_df['question'].values.astype('U'))
# the vocabulary (the axes of our multi-dimensional space)
words = vectorizer.get_feature_names_out()  # get_feature_names() before scikit-learn 1.2
print("words", words)


n_clusters = 30
number_of_seeds_to_try = 10
max_iter = 300
# n_jobs was removed from KMeans in scikit-learn 1.0; n_init alone
# controls how many different seeds are tried
model = KMeans(n_clusters=n_clusters, max_iter=max_iter, n_init=number_of_seeds_to_try).fit(X)

labels = model.labels_
# indices of preferable words in each cluster
ordered_words = model.cluster_centers_.argsort()[:, ::-1]

print("centers:", model.cluster_centers_)
print("labels", labels)
print("intertia:", model.inertia_)

# count how many texts landed in each cluster
texts_per_cluster = numpy.bincount(labels, minlength=n_clusters)

print("Top words per cluster:")
for i_cluster in range(n_clusters):
    print("Cluster:", i_cluster, "texts:", int(texts_per_cluster[i_cluster])),
    for term in ordered_words[i_cluster, :10]:
        print("\t"+words[term])

print("\n")
print("Prediction")

text_to_predict = "Why did Donald Trump win the elections?"
Y = vectorizer.transform([text_to_predict])
predicted_cluster = model.predict(Y)[0]
texts_per_cluster[predicted_cluster]+=1

print(text_to_predict)
print("Cluster:", predicted_cluster, "texts:", int(texts_per_cluster[predicted_cluster])),
for term in ordered_words[predicted_cluster, :10]:
    print("\t"+words[term])

My idea was that this way I could find, for each sentence, the cluster it most likely belongs to, and then compute cosine similarity only against the other questions in that cluster. Instead of running over the whole dataset, I would run over far fewer documents. However, running the code on the example sentence "Why did Donald Trump win the elections?" I get the following results.

Prediction
Why did Donald Trump win the elections?
Cluster: 25 texts: 244
    trump
    donald
    clinton
    hillary
    president
    vote
    win
    election
    did
    think

I understand that my sentence belongs to cluster 25, and I can see the top words of that cluster. But how can I access the sentences that are in this cluster? Is there any way to do that?

Best Answer

You can use predict to get the cluster, then use numpy to get all the documents of a specific cluster.

import numpy as np

clusters = model.predict(X)  # the model is already fitted, so plain predict is enough

# np.where gives the row indices of every document assigned to cluster 0
cluster_idx = np.where(clusters == 0)[0]

# those indices select the documents (TF-IDF rows) of that cluster
docs_in_cluster = X[cluster_idx]

So cluster_idx now holds the indices of all the documents in that cluster.
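To go one step further toward the original goal (comparing only the questions inside one cluster), here is a minimal sketch; it assumes the questions_df, X, labels and predicted_cluster variables from the question are still in scope, and the within-cluster cosine-similarity step is an illustrative addition rather than part of this answer.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Map cluster membership back to the actual sentences: the rows of X
# were built directly from questions_df, so the indices line up.
cluster_idx = np.where(labels == predicted_cluster)[0]
cluster_questions = questions_df['question'].iloc[cluster_idx]
print(cluster_questions.head())

# Compare only within the cluster, as planned in the question.
sim = cosine_similarity(X[cluster_idx])
np.fill_diagonal(sim, 0)
i, j = np.unravel_index(sim.argmax(), sim.shape)
print("Most similar pair in this cluster:")
print(cluster_questions.iloc[i])
print(cluster_questions.iloc[j])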

https://stackoverflow.com/questions/54429050/
