Installation
pip install jsonthat
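If you prefer isolated installs for command-line tools, pipx should work as well (assuming pipx is available on your system):
pipx install jsonthat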
Start by setting up your LLM providers, entering your API keys when prompted:
jt --setup
Display usage:
jt -h
Usage
Here are various ways to use the JSON THAT CLI tool:
Basic Usage
Process text into JSON using the default provider:
$ echo "My name is Jay and I'm 30 years old" | jt
{
  "name": "Jay",
  "age": 30
}
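Since the output is plain JSON on stdout, you can pipe it straight into standard tools such as jq (assuming jq is installed):
$ echo "My name is Jay and I'm 30 years old" | jt | jq '.age'
30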
Using a Plain Text Schema
Guide the output using a plain text schema:
$ cat person_schema.txt
name: string
occupation: string
yearsOfExperience: number
$ echo "Alice is a software engineer with 5 years of experience" | jt --schema person_schema.txt
{
  "name": "Alice",
  "occupation": "software engineer",
  "yearsOfExperience": 5
}
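A plain text schema is just a file of field: type lines, so you can create one with ordinary shell tools, for example a heredoc:
$ cat > person_schema.txt <<'EOF'
name: string
occupation: string
yearsOfExperience: number
EOF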
Using a JSON Schema
Guide the output using a JSON schema:
$ cat person_schema.json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    },
    "occupation": {
      "type": "string"
    },
    "yearsOfExperience": {
      "type": "number"
    }
  },
  "required": ["name", "occupation", "yearsOfExperience"]
}
$ echo "Alice is a software engineer with 5 years of experience" | jt --schema person_schema.json
{
  "name": "Alice",
  "occupation": "software engineer",
  "yearsOfExperience": 5
}
Processing a File
Convert the contents of a file to JSON:
$ cat input.txt | jt
{
  "content": "This is the content of input.txt converted to JSON format."
}
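Since jt writes the result to stdout, ordinary shell redirection saves it to a file:
$ cat input.txt | jt > input.json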
Using a Specific Provider
Specify a provider for a single command:
$ echo "Convert this sentence into JSON" | jt --provider openai
{
  "action": "Convert",
  "subject": "sentence",
  "target_format": "JSON"
}
Combining Options
Use multiple options together:
$ echo "Transform this using Ollama and a custom schema" | jt --provider ollama --schema custom_schema.json
{
  "transformed_text": "This has been transformed using Ollama and a custom schema",
  "provider_used": "ollama",
  "schema_applied": true
}
Stream output
Stream the output of the LLM in real-time:
$ echo "Explain the process of photosynthesis" | jt --stream
This prints the JSON output incrementally as the model generates it, instead of waiting for the full response to complete.
Features
- Plain text and JSON schema support to guide the output
- Pipe content directly from stdin
- Multi-LLM provider support: OpenAI, Claude, Mistral, Ollama
- Local LLM support with Ollama
- Store multiple provider configurations
- Set a default provider
- Choose provider via command-line flag
- Set model name per provider
- Stream output in real-time
Configuration
Config file
Configuration is managed through:
~/.config/jsonthat/config.yaml
Display the current configuration:
$ jt --config
The config file stores multiple provider configurations and the default provider.
default_provider: openai
providers:
  claude:
    api_key: *****
  mistral:
    api_key: *****
    model: open-mistral-nemo
  ollama:
    api_url: http://localhost:11434
    model: llama3.1
  openai:
    api_key: *****
    model: gpt-4o-mini
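Since the file is plain YAML, you can also edit it by hand, for example to switch the default provider:
default_provider: ollama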
Environment Variables
To configure the CLI tool, you can set the following environment variables:
export LLM_PROVIDER='openai'
export LLM_API_KEY='your_api_key_here'
export LLM_MODEL='gpt-4o-mini'
export OLLAMA_API_URL='http://127.0.0.1:11434'
export OLLAMA_MODEL='llama3'
LLM_PROVIDER selects the large language model provider ('openai', 'claude', 'mistral', or 'ollama'); LLM_API_KEY and LLM_MODEL set the API key and model for that provider; the OLLAMA_* variables hold the Ollama-specific settings.
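Because these are ordinary environment variables, you can also set them for a single invocation with standard shell prefix assignment (assuming the tool reads them at runtime and they take precedence over the config file):
$ echo "Your sample text" | LLM_PROVIDER=ollama OLLAMA_MODEL=llama3 jt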
Command-line Provider Selection
You can specify the provider to use for a single command:
echo "Your sample text" | jt --provider ollama
Feedback
To share your use case, suggest an improvement, or ask a question, join the Discord: