Scrape the images from 彼岸图网 and save them locally, for three pages.

To scrape images from 彼岸图网 and save them locally, you can use Python's requests and BeautifulSoup libraries. Here is a simple example:

```python
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def download_images(url, save_path):
    # Fetch the listing page and collect every <img> tag on it
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    img_tags = soup.find_all('img')

    if not os.path.exists(save_path):
        os.makedirs(save_path)

    for img in img_tags:
        src = img.get('src')
        if not src:
            continue  # skip <img> tags without a src attribute
        img_url = urljoin(url, src)  # resolve relative image paths against the page URL
        try:
            img_response = requests.get(img_url, stream=True)
            filename = os.path.join(save_path, img_url.split('/')[-1])
            with open(filename, 'wb') as f:
                for chunk in img_response.iter_content(chunk_size=8192):
                    f.write(chunk)
        except Exception as e:
            print(f"Error downloading {img_url}: {e}")


if __name__ == "__main__":
    base_url = "https://www.tupianzj.com/meinv/xinggan/list_50_{}.html"
    save_path = "downloaded_images"
    for i in range(1, 4):  # pages 1 to 3
        url = base_url.format(i)
        print(f"Downloading images from page {i}...")
        download_images(url, save_path)
```

This script downloads the images from pages 1 through 3 of the base_url above and saves them in a folder named "downloaded_images". Note that the base_url points to tupianzj.com rather than the 彼岸图网 site named in the question, and the code only works for that site's specific page structure; if the structure changes, the code may need to be adjusted accordingly.
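Since the question targets 彼岸图网 specifically, below is a minimal sketch adapted toward that site. It rests on explicit assumptions: that 彼岸图网 is reachable at pic.netbian.com, that its paginated listing follows an index.html / index_2.html / index_3.html pattern, and that a browser-like User-Agent header plus encoding detection are needed. The paths and page structure are unverified and should be checked against the live site.

```python
import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

# Assumptions (unverified): 彼岸图网 is served at pic.netbian.com and its
# listing pages use index.html for page 1 and index_{n}.html afterwards.
BASE = "https://pic.netbian.com"
HEADERS = {"User-Agent": "Mozilla/5.0"}  # many image sites reject the default requests UA


def page_url(n):
    # Hypothetical pagination scheme: page 1 carries no numeric suffix.
    return f"{BASE}/index.html" if n == 1 else f"{BASE}/index_{n}.html"


def download_page_images(n, save_path="netbian_images"):
    os.makedirs(save_path, exist_ok=True)
    response = requests.get(page_url(n), headers=HEADERS, timeout=10)
    response.encoding = response.apparent_encoding  # the pages may not be UTF-8
    soup = BeautifulSoup(response.text, "html.parser")
    for img in soup.find_all("img"):
        src = img.get("src")
        if not src:
            continue
        img_url = urljoin(BASE, src)  # resolve relative thumbnail paths
        try:
            data = requests.get(img_url, headers=HEADERS, timeout=10).content
            with open(os.path.join(save_path, img_url.split("/")[-1]), "wb") as f:
                f.write(data)
        except Exception as e:
            print(f"Error downloading {img_url}: {e}")


if __name__ == "__main__":
    for page in range(1, 4):  # three pages, as the question asks
        print(f"Downloading images from page {page}...")
        download_page_images(page)
```

The User-Agent header and the apparent_encoding call are included because wallpaper sites commonly block the default requests client and serve non-UTF-8 pages; drop or adjust them if the target site does not need them.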
