Preference Datasets¶
Preference datasets are used for reward modeling, where the downstream task is fine-tuning a base model to capture some underlying human preferences. Currently, these datasets are used in torchtune's Direct Preference Optimization (DPO) recipes.
The ground truth in a preference dataset is typically the outcome of a binary comparison between two completions for the same prompt, where a human annotator has indicated that one completion is preferable to the other according to some pre-set criteria. These prompt-completion pairs can be instruct style (single-turn, optionally with a system prompt), chat style (multi-turn), or some other set of interactions between user and model (e.g., free-form text completion).
The primary entry point for fine-tuning with preference datasets in torchtune's DPO recipes is preference_dataset().
Example of a local preference dataset¶
# my_preference_dataset.json
[
    {
        "chosen_conversations": [
            {
                "content": "What do I do when I have a hole in my trousers?",
                "role": "user"
            },
            { "content": "Fix the hole.", "role": "assistant" }
        ],
        "rejected_conversations": [
            {
                "content": "What do I do when I have a hole in my trousers?",
                "role": "user"
            },
            { "content": "Take them off.", "role": "assistant" }
        ]
    }
]
from torchtune.models.mistral import mistral_tokenizer
from torchtune.datasets import preference_dataset

m_tokenizer = mistral_tokenizer(
    path="/tmp/Mistral-7B-v0.1/tokenizer.model",
    prompt_template="torchtune.models.mistral.MistralChatTemplate",
    max_seq_len=8192,
)
column_map = {
    "chosen": "chosen_conversations",
    "rejected": "rejected_conversations",
}
ds = preference_dataset(
    tokenizer=m_tokenizer,
    source="json",
    column_map=column_map,
    data_files="my_preference_dataset.json",
    train_on_input=False,
    split="train",
)
tokenized_dict = ds[0]
print(m_tokenizer.decode(tokenized_dict["rejected_input_ids"]))
# user\n\nWhat do I do when I have a hole in my trousers?assistant\n\nTake them off.
print(tokenized_dict["rejected_labels"])
# [-100,-100,-100,-100,-100,-100,-100,-100,-100,-100,-100,-100, -100,-100,\
# -100,-100,-100,-100,-100,128006,78191,128007,271,18293,1124,1022,13,128009,-100]
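The -100 values in the labels are the ignore index used to mask tokens out of the loss; with train_on_input=False, the prompt tokens are masked and only the assistant completion contributes to the loss. Each tokenized sample holds both sides of the comparison. As a quick sanity check (a minimal sketch; the "chosen" key names are assumed to mirror the "rejected" keys shown above):
# Inspect the full tokenized sample; the "chosen_*" key names are assumed to
# mirror the "rejected_*" keys printed above.
print(list(tokenized_dict.keys()))
# ['chosen_input_ids', 'chosen_labels', 'rejected_input_ids', 'rejected_labels']
# Labels and input ids are aligned token-for-token on each side.
assert len(tokenized_dict["rejected_input_ids"]) == len(tokenized_dict["rejected_labels"])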
This can also be accomplished via the yaml config:
# In config
tokenizer:
  _component_: torchtune.models.mistral.mistral_tokenizer
  path: /tmp/Mistral-7B-v0.1/tokenizer.model
  prompt_template: torchtune.models.mistral.MistralChatTemplate
  max_seq_len: 8192

dataset:
  _component_: torchtune.datasets.preference_dataset
  source: json
  data_files: my_preference_dataset.json
  column_map:
    chosen: chosen_conversations
    rejected: rejected_conversations
  train_on_input: False
  split: train
In this example, we also show how to use column_map when the "chosen" and/or "rejected" column names differ from the corresponding column names in your dataset.
Preference dataset format¶
Preference datasets should contain two columns: "chosen", which indicates the response preferred by the human annotator, and "rejected", which indicates the response the human annotator did not prefer. Each of these columns should contain a list of messages that share the same prompt. The list of messages can include a system prompt, an instruction, multiple turns between user and assistant, or tool calls/returns. Let's look at Anthropic's helpful/harmless dataset on Hugging Face as an example of the multi-turn chat-style format:
| chosen | rejected |
|--------|----------|
| [{"role": "user", "content": "helping my granny with her mobile phone issue"}, {"role": "assistant", "content": "I see you are chatting with your grandmother about an issue with her mobile phone. How can I help?"}, {"role": "user", "content": "her phone is not turning on"}, {...}] | [{"role": "user", "content": "helping my granny with her mobile phone issue"}, {"role": "assistant", "content": "Well, the best choice here could be helping with so-called 'self-management behaviors'. These are things your grandma can do on her own to help her feel more in control."}] |
Currently, only conversations in JSON format, as shown in the example above, are supported. You can use this dataset out of the box in torchtune via hh_rlhf_helpful_dataset().
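For instance, here is a minimal sketch of using that builder with its default arguments (the tokenizer path below is an assumption):
from torchtune.models.mistral import mistral_tokenizer
from torchtune.datasets import hh_rlhf_helpful_dataset

# Build the helpful subset of Anthropic's HH-RLHF data with default settings;
# the tokenizer path is a placeholder.
m_tokenizer = mistral_tokenizer("/tmp/Mistral-7B-v0.1/tokenizer.model")
ds = hh_rlhf_helpful_dataset(tokenizer=m_tokenizer)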
Loading preference datasets from Hugging Face¶
To load a preference dataset from Hugging Face, you need to pass the dataset repo name to source. For most HF datasets, you will also need to specify the split.
from torchtune.models.gemma import gemma_tokenizer
from torchtune.datasets import preference_dataset

g_tokenizer = gemma_tokenizer("/tmp/gemma-7b/tokenizer.model")
ds = preference_dataset(
    tokenizer=g_tokenizer,
    source="hendrydong/preference_700K",
    split="train",
)
# Tokenizer is passed into the dataset in the recipe so we don't need it here
dataset:
  _component_: torchtune.datasets.preference_dataset
  source: hendrydong/preference_700K
  split: train
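If a Hugging Face preference dataset stores its conversations under column names other than "chosen" and "rejected", column_map works here as well. The repo and column names below are hypothetical and only sketch the pattern:
ds = preference_dataset(
    tokenizer=g_tokenizer,
    source="some-org/my-preference-data",  # hypothetical repo name
    column_map={
        "chosen": "preferred_conversation",       # hypothetical column names
        "rejected": "dispreferred_conversation",
    },
    split="train",
)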