But you should be able to get it to do stuff just by telling it what you want. Note that it's completion-tuned (not chat-tuned), so it should perform better on single-turn conversations.
I used the llama.cpp quantize q_0 option. Results seemed to vary widely just from adjusting the temperature parameter. I was giving it an example db query saving to a file in JSON and asking it to create a similar function, and it did multiple queries instead.
I’m assuming that’s a typo and you mean q4_0? What model size? As mentioned above, I had serious issues making GGUF work well. I’m going to post an article about a better way to set it up.
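For anyone trying to reproduce this, here's a minimal sketch of driving a completion-tuned GGUF model with a single-turn prompt and an explicit temperature, assuming the llama-cpp-python bindings (the model path is a placeholder for whatever quant you produced):

from llama_cpp import Llama

# Assumption: adjust model_path to point at your own GGUF file
llm = Llama(model_path="./models/model-q4_0.gguf")

# Single-turn, completion-style prompt: no chat template, just the instruction
output = llm(
    "Give me an example db query saving to file in json.\n",
    max_tokens=256,
    temperature=0.2,  # lower temperature gives less run-to-run variance
)
print(output["choices"][0]["text"])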
Here’s an example output (I can get different results on every regenerate since the question is pretty vague):
Prompt:
Give me an example db query saving to file in json.
Result:
Sure! Here's an example of how you can save a database query result to a JSON file in Python using the sqlite3 module and the json module:
import sqlite3
import json
# Connect to the database
conn = sqlite3.connect("example.db")
cursor = conn.cursor()
# Execute a query that returns a single row
cursor.execute("SELECT \* FROM mytable WHERE id=123")
result = cursor.fetchone()
# Convert the query result tuple to a dictionary keyed by column name
columns = [desc[0] for desc in cursor.description]
data = dict(zip(columns, result))
# Save the data as a JSON object
with open("output.json", "w") as f:
    json.dump(data, f)
This code connects to a SQLite database named example.db, executes a query that retrieves a single row from a table called mytable, converts the resulting tuple into a dictionary using the column names from cursor.description, and then saves the dictionary as a JSON object to a file named output.json.
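Since that snippet only handles a single row, here's a hedged variation that dumps a whole result set to JSON, assuming the same example.db and mytable as above:

import sqlite3
import json

conn = sqlite3.connect("example.db")
cursor = conn.cursor()
cursor.execute("SELECT * FROM mytable")

# Build one dict per row, keyed by the column names in cursor.description
columns = [desc[0] for desc in cursor.description]
rows = [dict(zip(columns, row)) for row in cursor.fetchall()]

with open("output.json", "w") as f:
    json.dump(rows, f, indent=2)

conn.close()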