I'm new to programming. I'm learning Python to speed up my language study with Anki, and I want to write a web-scraping script so I can create cards faster. Here is my code (this isn't the final product; eventually I want to learn how to write to a CSV file so I can import it into Anki):
from bs4 import BeautifulSoup
import requests
#get data from user
input("Type word ")
#get page
page = requests.get("https://fr.wiktionary.org/wiki/", params=word)
#make bs4 object
soup = BeautifulSoup(page.content, 'html.parser')
#find data from soup
IPA=soup.find(class_='API')
partofspeech=soup.find(class_='ligne-de-forme')
#open file
f=open("french.txt", "a")
#print text
print (IPA.text)
print (partofspeech.text)
#write to file
f.write(IPA.text)
f.write(partofspeech.text)
#close file
f.close()
It only returns the "word of the day" from Wiktionary, not the word the user typed. Any ideas?
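(For reference, a minimal fix for the snippet above: the return value of `input()` is never stored in a variable, and passing the word via `params` appends a query string instead of extending the URL path. A hedged sketch, with the network call left commented out:)

```python
# Two bugs in the original: input()'s result was discarded, and
# params=word builds ".../wiki/?<word>" instead of ".../wiki/<word>".
word = "bonjour"  # in the real script: word = input("Type word ")
url = "https://fr.wiktionary.org/wiki/" + word
print(url)  # https://fr.wiktionary.org/wiki/bonjour
# page = requests.get(url)  # then parse page.content with BeautifulSoup as before
```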
Best answer
(1) Read French, and write down the words or sentences you want to learn on paper.
(2) Write those words/sentences into a {text, json, markdown, ...} file.
(3) Read those words with Python, using its file I/O.
(4) Run a web server with anki-connect to interact with your Anki account.
(5) Write a Python script that HTTP-POSTs the words you entered and scrapes the answers on deepl.com.
(6) Combine these tools to add a study session to Anki with a single command.
(7) Happy learning!
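Steps (2) and (3) can be sketched with plain file I/O; the filename "words.txt" and its contents are assumptions for illustration:

```python
# Write words to a plain-text file (one per line), then read them back.
from pathlib import Path

path = Path("words.txt")
path.write_text("lascive\nbonjour\n", encoding="utf-8")

words = [line.strip() for line in path.read_text(encoding="utf-8").splitlines()
         if line.strip()]
print(words)  # ['lascive', 'bonjour']
```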
# https://github.com/FooSoft/anki-connect
# https://github.com/FooSoft/anki-connect/blob/master/actions/decks.md
import json
import urllib.request

def request(action, **params):
    return {'action': action, 'params': params, 'version': 6}

def invoke(action, **params):
    requestJson = json.dumps(request(action, **params)).encode('utf-8')
    response = json.load(urllib.request.urlopen(
        urllib.request.Request('http://localhost:8765', requestJson)))
    if len(response) != 2:
        raise Exception('response has an unexpected number of fields')
    if 'error' not in response:
        raise Exception('response is missing required error field')
    if 'result' not in response:
        raise Exception('response is missing required result field')
    if response['error'] is not None:
        raise Exception(response['error'])
    return response['result']

invoke('createDeck', deck='english-to-french')
result = invoke('deckNames')
print(f'got list of decks: {result}')

invoke('deleteDecks', decks=['english-to-french'], cardsToo=True)
result = invoke('deckNames')
print(f'got list of decks: {result}')
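Building on the `invoke` helper above, the card itself is added with anki-connect's `addNote` action (documented in the same actions folder); the "Basic" model name and the field values here are assumptions for illustration:

```python
# Sketch of an addNote payload for anki-connect; assumes the default
# "Basic" note type with Front/Back fields.
note = {
    'deckName': 'english-to-french',
    'modelName': 'Basic',
    'fields': {'Front': 'lascive', 'Back': 'lascivious'},
    'tags': ['scraped'],
}
# invoke('addNote', note=note)  # needs Anki running with anki-connect on :8765
print(note['deckName'])
```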
import scrapy

CODES = {
    'fr': 'french',
    'en': 'english'
}
URL_BASE = "https://www.linguee.com/%s-%s/translation/%s.html"

# these urls can come from another data file
# def get_data_from_file(filepath: str):
#     with open(filepath, 'r') as f:
#         lines = f.readlines()
#     return [URL_BASE % (CODES['fr'], CODES['en'], line.strip()) for line in lines]

URLS = [
    URL_BASE % (CODES['fr'], CODES['en'], 'lascive')
]

class BlogSpider(scrapy.Spider):
    name = 'linguee_spider'
    start_urls = URLS

    def parse(self, response):
        for span in response.css('span.tag_lemma'):
            yield {'word': span.css('a.dictLink ::text').get()}
        for div in response.css('div.translation'):
            for span in div.css('span.tag_trans'):
                yield {'translation': span.css('a.dictLink ::text').get()}
#!/bin/bash
# setup variables
DATE=$(date +"%Y-%m-%d-%H-%M")
SCRIPT_FILE="/path/to/folder/script.py"
OUTPUT_FILE="/path/to/folder/data/${DATE}.json"
echo "Running --- ${SCRIPT_FILE} --- at --- ${DATE} ---"
# activate virtualenv and run scrapy
source /path/to/folder/venv/bin/activate
scrapy runspider "${SCRIPT_FILE}" -o "${OUTPUT_FILE}"
echo "Saved results into --- ${OUTPUT_FILE} ---"
# reading data from scrapy output and creating an Anki card using anki-connect
python create_anki_card.py
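A hypothetical sketch of what `create_anki_card.py` could look like: scrapy's `-o file.json` writes the yielded items as a JSON array, so the script pairs up the `word` and `translation` items and builds `addNote` payloads. The inline sample string stands in for the dated output file, and the field names are assumptions:

```python
import json

# Sample of scrapy's JSON feed output, inlined instead of reading the dated file
raw = '[{"word": "lascive"}, {"translation": "lascivious"}]'
items = json.loads(raw)
words = [d['word'] for d in items if 'word' in d]
translations = [d['translation'] for d in items if 'translation' in d]
notes = [{'deckName': 'english-to-french', 'modelName': 'Basic',
          'fields': {'Front': w, 'Back': t}}
         for w, t in zip(words, translations)]
print(notes)
# each note would then be sent with invoke('addNote', note=note)
```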
https://stackoverflow.com/questions/63848799/