LLM API Access Guide

I. API Directory (as of 2026-01-25)

1. Overseas LLM APIs

| Platform | API Endpoint | Get API Key | Notes |
| --- | --- | --- | --- |
| Gemini | https://generativelanguage.googleapis.com | https://aistudio.google.com/app/apikey | Supports the Gemini 3 series; requires a proxy/VPN from mainland China |
| OpenAI | https://api.openai.com | https://platform.openai.com/api-keys | Supports GPT-5.2, GPT-4o-mini, etc.; requires a payment method on file |
| Anthropic Claude | https://api.anthropic.com/ | https://console.anthropic.com/settings/keys | Claude 4.5 is live, with longer context support |
| Grok | https://api.x.ai | https://console.x.ai/api-keys | General-purpose entry point; requires an X account to log in |
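Several of these providers (and the aggregators below) expose an OpenAI-compatible chat-completions endpoint, so switching between them is often just a different base URL and key. A minimal sketch of that pattern; the registry, the `/v1` path suffixes, and the env-var names are this sketch's own convention, not something the providers mandate:

```python
import os

# Hypothetical registry: provider name -> (OpenAI-compatible base URL, env var holding the key).
# The hosts restate the table above; aggregators like OpenRouter/Groq fit the same pattern.
PROVIDERS = {
    "openai": ("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "grok":   ("https://api.x.ai/v1", "XAI_API_KEY"),
}

def client_kwargs(name: str) -> dict:
    """Return the kwargs you would pass to an OpenAI-compatible client constructor."""
    base_url, env_var = PROVIDERS[name]
    return {"base_url": base_url, "api_key": os.getenv(env_var, "missing-key")}
```

With this in place, `OpenAI(**client_kwargs("grok"))` builds a client for any registered provider without touching the call sites.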

2. Overseas Third-Party Aggregator APIs

| Platform | API Endpoint | Get API Key | Notes |
| --- | --- | --- | --- |
| OpenRouter | https://openrouter.ai/api/v1/ | https://openrouter.ai/settings/keys | Aggregates Gemini 3, Claude 4.5, and more; supports custom model routing |
| Groq | https://api.groq.com/openai | https://console.groq.com/keys | OpenAI-compatible format; notably fast inference; supports Llama 3-70B |
| Together | https://api.together.xyz | https://console.together.xyz/settings/api-keys | Recently added models such as Mixtral 8x22B; supports fine-tuning and deployment |

3. Domestic (Mainland China) LLM APIs

| Platform | API Endpoint | Get API Key | Notes |
| --- | --- | --- | --- |
| 火山引擎 (Volcano Engine) | https://ark.cn-beijing.volces.com/api/v3/ | https://console.volcengine.com/ark/api-key | Supports the Doubao/Seed models; low latency within China; requires real-name verification |
| DeepSeek | https://api.deepseek.com | https://platform.deepseek.com/api_keys | DeepSeek-2 is live, with improved mathematical reasoning |
| 腾讯混元 (Tencent Hunyuan) | https://api.hunyuan.cloud.tencent.com | https://cloud.tencent.com/login?s_url=https%3A%2F%2Fconsole.cloud.tencent.com%2Fhunyuan%2Fapi-key | Supports Hunyuan 4.0; stronger multimodal capability |
| 阿里云百炼 (Alibaba Cloud Bailian) | https://dashscope.aliyuncs.com/compatible-mode/v1/ | https://bailian.console.aliyun.com/?tab=model#/api-key | Qwen-Omni-Realtime supports real-time audio/video sessions |
| Kimi | https://api.moonshot.cn | https://platform.moonshot.cn/console/api-keys | Moonshot-4 released; stronger long-text handling |
| 智谱 (Zhipu) | https://open.bigmodel.cn/api/paas/v4/ | https://open.bigmodel.cn/usercenter/proj-mgmt/apikeys | GLM-4.7-Flash is open-sourced; supports both local deployment and API calls |
| 即梦AI (ByteDance Jimeng) | https://dream.volcengine.com/api/v1/ | https://dream.volcengine.com/console/api-keys | ByteDance's AI creation platform; text-to-image and video generation; direct access within China |

4. Domestic Third-Party Aggregator APIs

| Platform | API Endpoint | Get API Key | Notes |
| --- | --- | --- | --- |
| 兔子API (Tu-zi API) | https://api.tu-zi.com/v1/ | https://api.tu-zi.com/token | Domestic aggregator; multi-model switching; free quota available |
| AiHubMix | https://aihubmix.com/v1/ | https://aihubmix.com/token | OpenAI-compatible format; supports mainstream domestic models |
| 302AI | https://api.302.ai/v1/ | https://dash.302.ai/login | Create a key after logging in; pay-as-you-go or prepaid plans |
| 硅基流动 (SiliconFlow) | https://api.siliconflow.cn | https://cloud.siliconflow.cn/account/ak | Provides AK/SK key pairs; supports fine-tuning and inference deployment |

5. Local Deployment

| Platform | API Endpoint | Get API Key | Notes |
| --- | --- | --- | --- |
| Ollama | http://localhost:11434/v1/ | - | Runs Llama 3, Mistral, etc. locally; no key required; installs directly on Windows 11 |

II. Python Quick-Start Examples

Prerequisites

First install the required dependencies by running this terminal command:

```shell
pip install openai requests dashscope volcengine-python-sdk
```
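Every example below falls back to a placeholder string when the key env var is unset, which sends a useless key to the API. A small hedged check catches that earlier; `require_key` is this guide's own helper, not part of any SDK:

```python
import os

def require_key(env_var: str) -> str:
    """Return the API key stored in env_var, failing fast with a clear
    message instead of sending a placeholder key to the provider."""
    value = os.environ.get(env_var, "").strip()
    if not value:
        raise RuntimeError(f"Set {env_var} first, e.g. export {env_var}=sk-...")
    return value
```

For example, `api_key=require_key("OPENAI_API_KEY")` in place of the `os.getenv(...) or "..."` pattern below.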

1. Overseas LLMs (OpenAI example; also works for Groq/OpenRouter)

```python
import os
from openai import OpenAI

# Configuration (store the key in an environment variable; avoid hard-coding it)
# Option 1: set it directly (Windows: set OPENAI_API_KEY=your-key; Mac/Linux: export OPENAI_API_KEY=your-key)
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY") or "your-openai-api-key",
    base_url="https://api.openai.com/v1"  # For Groq, use: https://api.groq.com/openai/v1
)

def openai_chat_demo():
    """OpenAI chat-completions example."""
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # Other options: gpt-5.2, llama-3-70b (on Groq)
            messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "Briefly describe the advantages of Python."}
            ],
            temperature=0.7,
            max_tokens=500
        )
        # Extract the reply content
        print("OpenAI reply:\n", response.choices[0].message.content)
    except Exception as e:
        print("Request failed:", str(e))

if __name__ == "__main__":
    openai_chat_demo()
```
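The same client can also stream the reply token by token by passing `stream=True`, in which case the response is an iterator of chunks carrying partial deltas. A sketch of the consumption loop, factored into a helper so the chunk handling is testable; the chunk shape follows the OpenAI streaming format:

```python
def collect_stream(chunks) -> str:
    """Concatenate the content deltas from an OpenAI-style streaming response."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta.content can be None
            parts.append(delta)
    return "".join(parts)

# Usage sketch (assumes the `client` defined above):
# stream = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "Hi"}],
#     stream=True,
# )
# print(collect_stream(stream))
```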

2. Domestic LLMs (Kimi example; no VPN required)

```python
import os
from openai import OpenAI

# Kimi is OpenAI-compatible; just point base_url at its endpoint
client = OpenAI(
    api_key=os.getenv("KIMI_API_KEY") or "your-kimi-api-key",
    base_url="https://api.moonshot.cn/v1"
)

def kimi_chat_demo():
    """Kimi chat-completions example."""
    try:
        response = client.chat.completions.create(
            model="moonshot-v4",  # Other options: moonshot-v4-32k, moonshot-v3
            messages=[
                {"role": "system", "content": "You are a professional technical assistant; keep answers concise and clear."},
                {"role": "user", "content": "Briefly explain the steps to deploy Ollama locally."}
            ],
            temperature=0.6,
            max_tokens=1000
        )
        print("Kimi reply:\n", response.choices[0].message.content)
    except Exception as e:
        print("Request failed:", str(e))

if __name__ == "__main__":
    kimi_chat_demo()
```
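All of the chat examples in this guide are single-turn: the `messages` list is built fresh each time. For multi-turn conversation you keep appending each exchange to the list, since these APIs are stateless and the full history must be resent on every request. A minimal sketch of that bookkeeping (the helper names are this sketch's own):

```python
def build_history(system_prompt: str) -> list:
    """Start a message list with a system prompt; turns are appended as the chat proceeds."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history: list, user_text: str, assistant_text: str) -> list:
    """Record one user/assistant exchange so the next request carries full context."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history
```

On each round you would pass the accumulated `history` plus the new user message as `messages=`.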

3. Domestic LLMs (Alibaba Cloud Bailian example; official SDK)

```python
import os
import dashscope
from dashscope import Generation

def dashscope_chat_demo():
    """Alibaba Cloud Bailian (DashScope) chat example."""
    # Configure the API key
    dashscope.api_key = os.getenv("DASHSCOPE_API_KEY") or "your-bailian-api-key"

    try:
        response = Generation.call(
            model="qwen-turbo",  # Other options: qwen-max, qwen-omni-realtime
            messages=[
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": "Briefly describe the advantages of Alibaba Cloud Bailian."}
            ],
            result_format="message",
            temperature=0.7,
            max_tokens=500
        )
        # Extract the reply content
        if response.status_code == 200:
            print("Bailian reply:\n", response.output.choices[0].message.content)
        else:
            print("Request failed:", response.code, response.message)
    except Exception as e:
        print("Request failed:", str(e))

if __name__ == "__main__":
    dashscope_chat_demo()
```

4. Local Deployment (Ollama example; no key required; install Ollama and pull a model first)

```python
from openai import OpenAI

# Ollama's local endpoint is OpenAI-compatible
client = OpenAI(
    api_key="ollama",  # Placeholder; no real key needed
    base_url="http://localhost:11434/v1"
)

def ollama_chat_demo():
    """Ollama local chat example (first run: ollama run llama3)."""
    try:
        response = client.chat.completions.create(
            model="llama3",  # Name of the locally pulled model (mistral, qwen, etc. also work)
            messages=[
                {"role": "system", "content": "You are a concise local assistant."},
                {"role": "user", "content": "Briefly describe the advantages of running an LLM locally."}
            ],
            temperature=0.7,
            max_tokens=500
        )
        print("Ollama local reply:\n", response.choices[0].message.content)
    except Exception as e:
        print("Request failed:", str(e), "(check that Ollama is running and the llama3 model has been pulled)")

if __name__ == "__main__":
    ollama_chat_demo()
```
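The one-time setup the docstring above refers to looks roughly like this; `llama3` is just the example model used here, and any tag from the Ollama library works in its place:

```shell
# Download the model weights locally (one-time, several GB)
ollama pull llama3
# Optional interactive smoke test; exit with Ctrl+D
ollama run llama3
# Confirm which models are available locally
ollama list
# The HTTP server on localhost:11434 usually starts with the desktop app;
# on a headless Linux box you may need to start it yourself:
ollama serve
```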

5. Domestic Aggregator APIs (Tu-zi API example; OpenAI-compatible)

```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("TUZI_API_KEY") or "your-tuzi-api-key",
    base_url="https://api.tu-zi.com/v1"
)

def tuzi_chat_demo():
    """Tu-zi aggregator chat example."""
    try:
        response = client.chat.completions.create(
            model="gpt-4o",  # Domestic options: kimi-moonshot-v4, glm-4
            messages=[
                {"role": "user", "content": "Briefly explain the advantages of an aggregator API."}
            ],
            temperature=0.7,
            max_tokens=500
        )
        print("Tu-zi API reply:\n", response.choices[0].message.content)
    except Exception as e:
        print("Request failed:", str(e))

if __name__ == "__main__":
    tuzi_chat_demo()
```
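One practical advantage of an aggregator is cheap failover: if a model errors out or is rate-limited, you can retry the identical request against the next model in a list, since they all sit behind one endpoint. A minimal sketch; `call_model` stands in for any of the chat functions above, and the model names are just examples from this guide:

```python
def first_success(call_model, models, prompt):
    """Try each model in order; return (model, reply) from the first call that succeeds."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as e:  # rate limits, model-offline errors, etc.
            last_error = e
    raise RuntimeError(f"All models failed: {last_error}")

# Usage sketch with the Tu-zi client above:
# def call_model(model, prompt):
#     r = client.chat.completions.create(
#         model=model, messages=[{"role": "user", "content": prompt}])
#     return r.choices[0].message.content
# model, reply = first_success(call_model, ["gpt-4o", "glm-4"], "hello")
```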
Original article: https://www.17you.com/ai/%E5%85%A8%E7%90%83%E5%A4%A7%E6%A8%A1%E5%9E%8Bapi-%E5%92%8C%E7%A7%98%E9%92%A5%E8%8E%B7%E5%8F%96%E5%9C%B0%E5%9D%80/