Key steps:

1. Tokenization: tokenize sentence 1 and sentence 2 and remove stop words to obtain word lists.
2. Retrieve synonym sets: use WordNet to get the synsets of each word.
3. Compute semantic similarity: compare the synsets of the two sentences to score how similar they are.

Code and explanation of the key parts:

```python
from nltk.corpus import wordnet as wn
from nltk import word_tokenize
from nltk.corpus import stopwords

# Download the required NLTK resources
import nltk
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')

def get_synsets(word):
    """Return the WordNet synsets of a word."""
    return wn.synsets(word)

def remove_stopwords(sentence):
    """Tokenize a sentence and drop English stop words."""
    stop_words = set(stopwords.words('english'))
    words = word_tokenize(sentence)
    return [word for word in words if word.lower() not in stop_words]

def calculate_similarity(sentence1, sentence2):
    words1 = remove_stopwords(sentence1)
    words2 = remove_stopwords(sentence2)

    # Collect the synsets of every word in each sentence
    synsets1 = [synset for word in words1 for synset in get_synsets(word)]
    synsets2 = [synset for word in words2 for synset in get_synsets(word)]

    # Take the maximum path similarity over all synset pairs
    # (path_similarity may return None for unrelated synsets)
    max_sim = 0
    for synset1 in synsets1:
        for synset2 in synsets2:
            similarity = synset1.path_similarity(synset2)
            if similarity and similarity > max_sim:
                max_sim = similarity
    return max_sim

# Two example sentences
sentence1 = "i am feeling very happy today."
sentence2 = "i am feeling extremely sad today."

# Compute the semantic similarity
similarity = calculate_similarity(sentence1, sentence2)
print("Semantic similarity:", similarity)
```

Result analysis:

The closer the WordNet-based similarity is to 1, the more semantically similar the two sentences are considered. Note that in this example sentence 1 contains the emotion word "happy" while sentence 2 contains "sad". Because both sentences also share the words "feeling" and "today", whose synsets match themselves with a path similarity of 1, the maximum-over-all-pairs measure used here returns 1.0 even though the sentences express opposite emotions. A more discriminative score would, for example, average the best-matching similarity per word instead of taking a single global maximum; the actual result should be interpreted with these limitations in mind.
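To make the similarity measure itself concrete: WordNet's `path_similarity` is defined as `1 / (1 + d)`, where `d` is the length of the shortest path between two synsets in the hypernym taxonomy. The sketch below illustrates that formula on a tiny hand-built taxonomy rather than on WordNet itself; the node names and edges are illustrative assumptions, not WordNet data.

```python
from collections import deque

# A tiny hand-built is-a taxonomy (illustrative only, not WordNet's):
# each node maps to its neighbours in the hierarchy.
taxonomy = {
    "entity": ["state"],
    "state": ["entity", "feeling"],
    "feeling": ["state", "happiness", "sadness"],
    "happiness": ["feeling"],
    "sadness": ["feeling"],
}

def path_similarity(a, b):
    """Return 1 / (1 + shortest-path length) between nodes a and b,
    mimicking WordNet's path_similarity formula via breadth-first search."""
    if a == b:
        return 1.0
    seen = {a}
    queue = deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in taxonomy.get(node, []):
            if nxt == b:
                return 1.0 / (1 + dist + 1)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no path: WordNet's path_similarity also returns None here

print(path_similarity("happiness", "sadness"))  # siblings, path length 2 -> 1/3
print(path_similarity("feeling", "feeling"))    # identical nodes -> 1.0
```

This also shows why the sentence-level maximum saturates at 1.0 whenever the two sentences share any content word: a synset compared with itself has path length 0 and similarity 1.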