Arkor is in alpha, so this page is intentionally sparse. Items are grouped by what state they’re in: actively being built, scoped and waiting their turn, or under consideration. We don’t commit to dates yet.
In progress
Broader base model support
Expand support beyond Gemma 4 to additional open-weight model families, so the
model field becomes a real menu instead of a single supported value.

Structured output
Let a fine-tuned model return typed fields (for example
{ urgency, nextAction } from a triage run) instead of free-form text, with the output schema declared in your project as a TypeScript type.

Hosted inference endpoint URL
Surface a stable HTTPS endpoint URL for each fine-tuned model so you can call it from your product without going through the SDK.
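The two items above, structured output and a hosted endpoint URL, can be sketched together. Everything below is hypothetical: neither feature has shipped, so the type name, endpoint URL shape, payload fields, and auth header are assumptions for illustration, not documentation of the real API.

```typescript
// Illustrative sketch only: Arkor has not shipped structured output or
// hosted endpoints yet, so every name here (TriageResult, the URL shape,
// the payload, the Bearer header) is an assumption, not the real API.

// The typed fields a structured-output triage run would return
// instead of free-form text.
type TriageResult = {
  urgency: "low" | "medium" | "high";
  nextAction: string;
};

// Runtime guard: narrow an untyped parsed response to TriageResult.
function isTriageResult(value: unknown): value is TriageResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.urgency === "low" || v.urgency === "medium" || v.urgency === "high") &&
    typeof v.nextAction === "string"
  );
}

// Build a request against a (future) stable HTTPS endpoint URL.
// This only constructs the request; no network call is made here.
function buildInferenceRequest(endpointUrl: string, apiKey: string, input: string) {
  return {
    url: endpointUrl,
    init: {
      method: "POST" as const,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify({ input }),
    },
  };
}

// Usage sketch (the fetch stays commented out until the feature exists):
const req = buildInferenceRequest(
  "https://example.invalid/models/my-triage-model", // placeholder URL
  "sk-example",
  "Customer reports login failures since 9am"
);
// const res = await fetch(req.url, req.init);
// const parsed: unknown = await res.json();
const parsed: unknown = JSON.parse('{"urgency":"high","nextAction":"escalate to on-call"}');
if (isTriageResult(parsed)) {
  console.log(parsed.urgency, "->", parsed.nextAction);
}
```

The runtime guard matters even with a declared schema: until the server enforces the type, your code still receives untyped JSON and should validate before trusting it.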
Template-free project creation
Scaffold a project without picking a starter template. Today
create-arkor and arkor init require choosing triage, translate, or redaction; this opens the door to starting from a blank trainer.

Up next
Auth0 token auto-refresh
Silent refresh on expiry, so long-running sessions stop getting interrupted by re-login.
Bring your own dataset (JSONL)
Upload a local JSONL file as the training dataset, alongside the existing HuggingFace name and blob URL paths.
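Arkor’s required JSONL record shape isn’t documented yet; assuming one {"input", "output"} pair per line (an assumption, not the final format), a local pre-upload check might look like:

```typescript
// Hypothetical sketch: the {"input": ..., "output": ...} record shape is
// an assumed training format, not Arkor's documented one. This validates
// a JSONL dataset locally before upload.
function validateJsonl(text: string): { valid: number; errors: string[] } {
  const errors: string[] = [];
  let valid = 0;
  // JSONL: one JSON object per non-empty line.
  const lines = text.split("\n").filter((l) => l.trim() !== "");
  lines.forEach((line, i) => {
    try {
      const rec = JSON.parse(line);
      if (typeof rec.input === "string" && typeof rec.output === "string") {
        valid++;
      } else {
        errors.push(`line ${i + 1}: missing "input" or "output" string`);
      }
    } catch {
      errors.push(`line ${i + 1}: not valid JSON`);
    }
  });
  return { valid, errors };
}

const sample = [
  '{"input":"Translate: bonjour","output":"hello"}',
  '{"input":"Translate: merci","output":"thanks"}',
].join("\n");
console.log(validateJsonl(sample)); // { valid: 2, errors: [] }
```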
Train on a local GPU
Run training on your own GPU instead of routing every job through Arkor’s managed GPUs.
Dry-run from Studio
Surface the existing dry-run option in the Studio UI for fast smoke tests before kicking off a full training run.
Backlog
Self-hosted training backend
Run the training backend on your own infrastructure, with a documented
ARKOR_CLOUD_API_URL knob and versioned API guarantees.

deploy and eval slots
Grow
createArkor into an umbrella for shipping and evaluating models, not only training.

Download trained models
Export a trained model as a file you can run on your own machine or deploy target, instead of staying on Arkor’s managed inference.
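For the self-hosted training backend item above, ARKOR_CLOUD_API_URL is the knob named in the roadmap; a minimal resolution sketch follows, where the fallback URL is a placeholder assumption, not Arkor’s real managed endpoint.

```typescript
// Sketch for the self-hosted backend item: ARKOR_CLOUD_API_URL is the
// knob named in the roadmap, but the default below is a placeholder
// assumption, not Arkor's actual managed URL.
function resolveBackendUrl(env: Record<string, string | undefined>): string {
  // Prefer an explicitly configured self-hosted backend; otherwise
  // fall back to the managed cloud (placeholder value).
  return env.ARKOR_CLOUD_API_URL ?? "https://cloud.example.invalid";
}

console.log(resolveBackendUrl({ ARKOR_CLOUD_API_URL: "https://arkor.internal" }));
// prints "https://arkor.internal"
```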
Synthetic data from a seed set
Generate training data from a small seed set, for cases where you don’t already have a labeled dataset.
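How the seed set would be expanded is undecided; one common approach is templating each seed example into paraphrase prompts for a generator model. The sketch below assumes that approach and an {input, output} seed shape, neither of which is confirmed by the roadmap.

```typescript
// Hypothetical sketch: Arkor's generation pipeline is undecided. This
// shows one common approach — turning each seed example into paraphrase
// prompts you would send to a generator model.
type Seed = { input: string; output: string };

function expansionPrompts(seeds: Seed[], variantsPerSeed: number): string[] {
  const prompts: string[] = [];
  for (const seed of seeds) {
    for (let i = 0; i < variantsPerSeed; i++) {
      prompts.push(
        "Rewrite this example with different wording but the same meaning.\n" +
          `Input: ${seed.input}\nOutput: ${seed.output}\nVariant ${i + 1}:`
      );
    }
  }
  return prompts;
}

const seeds: Seed[] = [{ input: "Where is my order?", output: "shipping" }];
console.log(expansionPrompts(seeds, 2).length); // 2
```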
Distillation templates
Templates that pair compatible teacher and student models so distillation runs work out of the box.
On-device model templates
Templates aimed at small models suitable for WebGPU and mobile targets.
Multimodal training
Fine-tune on image (and eventually audio) inputs alongside text. Today every template is text-in, text-out.