vault backup: 2024-09-17 02:02:26
Affected files: .obsidian/workspace.json, Untitled 3.md
parent 69ee971f76
commit 167f662abc

.obsidian/workspace.json
@@ -145,7 +145,7 @@
 "state": {
   "type": "markdown",
   "state": {
-    "file": "ComfyUI.md",
+    "file": "Untitled 3.md",
     "mode": "source",
     "source": false
   }
@@ -218,7 +218,7 @@
 "state": {
   "type": "backlink",
   "state": {
-    "file": "ComfyUI.md",
+    "file": "Untitled 3.md",
     "collapseAll": false,
     "extraContext": false,
     "sortOrder": "alphabetical",
@@ -235,7 +235,7 @@
 "state": {
   "type": "outgoing-link",
   "state": {
-    "file": "ComfyUI.md",
+    "file": "Untitled 3.md",
     "linksCollapsed": false,
     "unlinkedCollapsed": true
   }
@@ -258,7 +258,7 @@
 "state": {
   "type": "outline",
   "state": {
-    "file": "ComfyUI.md"
+    "file": "Untitled 3.md"
   }
 }
 },

Untitled 3.md
@@ -0,0 +1,15 @@

"Conditional instructions: You could modify your system prompt to make the fallacy detection conditional, e.g., "You are a helpful assistant. When asked or when relevant, you can act as a critical thinker good at unveiling logical fallacies." This might help the model understand that fallacy detection is a capability, not a constant requirement." - That's a big no. I want my model to be able to always use critical thinking. If user chat and talk about fake news or dangerous cult like thinking, I want my model to engage in "street epistemology"
|
||||||
|
|
||||||
|
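For comparison, a rough sketch of the two prompt styles being discussed; both strings are invented here and are not the actual system prompt used for the dataset:

```python
# Hypothetical prompt variants, for comparison only -- neither is the real prompt.

# The "conditional" style suggested above: fallacy detection framed as an optional capability.
CONDITIONAL_PROMPT = (
    "You are a helpful assistant. When asked or when relevant, you can act as a "
    "critical thinker good at unveiling logical fallacies."
)

# The always-on style preferred here: critical thinking is part of the default behaviour,
# so the model can engage in street epistemology whenever a conversation drifts into
# fake news or cult-like reasoning, without being explicitly asked.
ALWAYS_ON_PROMPT = (
    "You are a helpful assistant and a careful critical thinker. In every conversation, "
    "notice weak reasoning or logical fallacies and, when they matter, question them "
    "gently in the spirit of street epistemology."
)
```
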
"Multi-task training: Instead of focusing solely on fallacy detection, you could include a variety of critical thinking tasks in your dataset. This broader approach might lead to a more balanced model." Yes, there will be multiple dataset and before I'll start training I'll merge all the parts of the system prompt, the goal being to have one fine-tuned model that works well with one specific system prompt, that will surely help to add "normal" examples.
|
||||||
|
|
||||||
|
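As a rough sketch of what that merge step could look like (the file names, the chat-message JSONL format, and the prompt placeholder are all assumptions, not the actual setup):

```python
import json
import random

# Hypothetical dataset parts and a placeholder for the merged system prompt.
DATASET_PARTS = ["fallacy_detection.jsonl", "general_chat.jsonl", "street_epistemology.jsonl"]
MERGED_SYSTEM_PROMPT = "<the single system prompt assembled from all its parts>"

merged = []
for path in DATASET_PARTS:
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            example = json.loads(line)  # expected shape: {"messages": [{"role": ..., "content": ...}, ...]}
            # Replace any per-task system message with the one shared prompt, so every example
            # trains against the same system prompt the final model will actually be run with.
            messages = [m for m in example["messages"] if m["role"] != "system"]
            example["messages"] = [{"role": "system", "content": MERGED_SYSTEM_PROMPT}] + messages
            merged.append(example)

random.shuffle(merged)  # interleave tasks so no single capability dominates a training stretch
with open("train_merged.jsonl", "w", encoding="utf-8") as f:
    for example in merged:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```
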
"Adversarial examples: Include some examples where a user incorrectly identifies a fallacy, and the AI correctly points out that there isn't actually a fallacy present." - Great idea!
|
||||||
|
|
||||||
|
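One possible shape for such an example, in the same chat-message format assumed above (the dialogue is invented):

```python
# One invented adversarial example: the user claims a fallacy that isn't there,
# and the assistant pushes back instead of agreeing.
adversarial_example = {
    "messages": [
        {"role": "user", "content": (
            "\"Smoking causes cancer, so I quit.\" Isn't that a slippery slope fallacy?"
        )},
        {"role": "assistant", "content": (
            "No fallacy here. A slippery slope would claim that one small step inevitably "
            "leads to an extreme outcome without justification. This statement just cites a "
            "well-supported causal link and a personal decision based on it."
        )},
    ]
}
```
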
"Context-aware responses: Train the model to consider the broader context of a conversation before applying fallacy detection. This could help it understand when such analysis is appropriate." - Yes, I will definitely need multi-turn chat examples
|
||||||
|
|
||||||
|
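A minimal sketch of what one multi-turn example could look like, again with invented dialogue, where the fallacy critique only appears once the conversation actually calls for it:

```python
# An invented multi-turn example: the first exchange is ordinary chat with no fallacy analysis,
# and the critique only shows up when the conversation introduces weak reasoning.
multi_turn_example = {
    "messages": [
        {"role": "user", "content": "Can you help me plan a three-day trip to Lisbon?"},
        {"role": "assistant", "content": "Sure! Day one could cover Alfama and the castle, ..."},
        {"role": "user", "content": (
            "By the way, my uncle says vaccines must be dangerous because a famous actor said so."
        )},
        {"role": "assistant", "content": (
            "That's an appeal to authority in the wrong domain: an actor's fame doesn't make them "
            "a reliable source on vaccine safety. What evidence would your uncle find convincing?"
        )},
    ]
}
```
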
"Explicit "no fallacy" examples: Include examples where the AI explicitly states that it doesn't detect any fallacies in a given statement or argument." - Nah
|
||||||
|
|
||||||
|
"Gradual fine-tuning: Start with a more general critical thinking dataset, then progressively introduce more specific fallacy detection examples. This might help the model develop a more nuanced understanding." - That's interesting, are you suggesting that splitting the fine-tuning in sets of "difficulty" will help? Like I first fine-tune Llama3.1 with simple example, save the weights, then fine-tune again with medium, then hard?
|
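If that sequential reading is right, the loop would look roughly like this; `fine_tune` is a placeholder for whatever training script is actually used, and the model name, dataset files, and checkpoint paths are made up for illustration:

```python
# Sketch of the staged ("curriculum") scheme being asked about: each stage resumes from
# the weights produced by the previous one.
STAGES = [
    ("llama-3.1-8b-instruct", "fallacy_easy.jsonl",   "ckpt_stage1_easy"),
    ("ckpt_stage1_easy",      "fallacy_medium.jsonl", "ckpt_stage2_medium"),
    ("ckpt_stage2_medium",    "fallacy_hard.jsonl",   "ckpt_stage3_hard"),
]

def fine_tune(base_model: str, dataset_path: str, output_dir: str) -> None:
    """Placeholder for one fine-tuning pass: load `base_model`, train on `dataset_path`,
    and save the resulting weights to `output_dir` (e.g., via an existing SFT/LoRA script)."""
    print(f"would fine-tune {base_model} on {dataset_path} -> {output_dir}")

for base, dataset, out in STAGES:
    # Later (harder) examples refine rather than overwrite what earlier stages learned.
    fine_tune(base, dataset, out)
```
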