vcli-agent (virtual CLI agent), pronounced "Wiekly Agent" (a wiek is the rotor blade of a windmill in Dutch, the part that makes it spin). It is a minimal, pure-Python agent harness where the model writes Unix pipelines instead of JSON tool calls.
$ echo "ask_agent ask me for a github repo (owner/name), \
then fetch its latest release tag and report it back to me" |
python3 -m vcli.llm_agent
llm> ask_agent ask me for a github repo (owner/name), then fetch its latest release tag and report it back to me
cat AGENT.md | pack | curl -X POST https://api.openai.com/v1/chat/completions \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer **********' \
-m 300 -d @-
llm> cat AGENT.md | pack | curl -X POST https://api.openai.com/v1/chat/completions ...
CMD: ask_human Please provide a GitHub repository in the format owner/repo \
(e.g., ggerganov/llama.cpp): | pack | curl -X POST https://api.openai.com/v1/chat/completions ...
llm> ask_human Please provide a GitHub repository in the format owner/repo ...
[agent asks] Please provide a GitHub repository in the format owner/repo (e.g., ggerganov/llama.cpp):
human> nextcloud/server
CMD: curl https://api.github.com/repos/nextcloud/server/releases/latest \
| grep tag_name | pack | curl -X POST https://api.openai.com/v1/chat/completions ...
llm> curl https://api.github.com/repos/nextcloud/server/releases/latest | grep tag_name | pack | curl ...
DONE: fetched the latest release tag for the nextcloud/server repository, which is v33.0.2.
llm>
An AGENT.md file explains the concept so the LLM understands how to use it:
produce a Unix text-stream pipeline instead of JSON tool calls.
curl handles all external calls, including the LLM API itself, eliminating the need for MCP or any dedicated tool-call protocol.
The agentic loop is bootstrapped by performing the first LLM call via curl; after that, the LLM takes control of the loop and performs tool calls, including calling itself again via curl.
The control flow is readable left to right:
ask_agent → cat AGENT.md | pack | curl(llm) → ask_human | pack | curl(llm) → curl(github) | grep | pack | curl(llm) → DONE
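The pack step is the glue between the text stream and the LLM API. A minimal sketch of what a pack-like command might do, assuming it wraps the piped text into a chat-completions-style JSON request body (the exact message handling in vcli is an assumption here):

```python
import json

def pack(stdin_text: str, model: str) -> str:
    """Wrap piped text into a chat-completions-style request body.

    Illustrative sketch only: real message-history handling would
    accumulate prior turns instead of sending a single user message.
    """
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": stdin_text}],
    })
```

The output of pack is exactly what `curl ... -d @-` expects on stdin, which is why the two compose in a pipeline.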
The agent's task is to write an executable Unix pipeline. All tool calls are implemented in pure Python, including curl (via urllib), so the pipeline can return the message history and tool results back to the agent itself.
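A pure-Python curl can be a thin wrapper over the standard library's urllib; a minimal sketch (the flag handling is simplified here, an assumption rather than vcli's actual implementation):

```python
import urllib.request

def build_request(url, data=None, headers=None):
    """Build a urllib Request: GET by default, POST when a body is supplied."""
    return urllib.request.Request(
        url,
        data=data.encode() if data is not None else None,
        headers=headers or {},
    )

def curl(url, data=None, headers=None, timeout=300):
    """Fetch a URL using only the standard library: a sketch of a pure-Python curl."""
    with urllib.request.urlopen(build_request(url, data, headers), timeout=timeout) as resp:
        return resp.read().decode()
```

Because urllib switches to POST whenever a body is present, `curl(url, data=packed_json)` mirrors the `curl -d @-` calls in the transcript above.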
If a command's output is itself a command, the REPL runs it next without reading stdin. That one mechanism is what turns a REPL into an agent loop.
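That one mechanism can be sketched in a few lines (a simplified sketch: run_pipeline and looks_like_command are illustrative helper names, not vcli's actual API):

```python
def agent_loop(first_line, run_pipeline, looks_like_command):
    """Keep feeding output back in as the next command until it stops looking like one."""
    line = first_line
    while True:
        out = run_pipeline(line)
        if looks_like_command(out):
            line = out   # output is itself a command: run it next, skip stdin
        else:
            return out   # plain text: the loop is done
```

The REPL part only supplies the first line; everything after that is the model steering itself.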
LLMs know Unix pipelines and tools like grep, curl, sed, and cat from their training data.
Wiekly exploits this to write agentic tool calls in a language the model already speaks fluently.
A predefined set of tools, all implemented in pure Python, eliminates the need for sandboxing, a requirement in agentic frameworks that shell out to real CLI tools. Instead of letting the model do anything and then bolting on guardrails (Docker, seccomp, prompt-injection defenses), vcli inverts this: the model can only call functions that exist in the registry. There is no subprocess, no eval, no shell-out. The "sandbox" isn't a wall built around an agent that could do anything; it's the set of verbs exposed to the agent.
Agentic tools can be registered in pure Python like this:
@agent.cmd(name="upper", help="Uppercase piped input")
def _upper(args):
    return " ".join(args).upper()
The registry of tools is a plain dict. The loop calls registry[command_name](args).
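A sketch of what such a decorator-plus-dict registry could look like (the class shape is illustrative, an assumption about vcli's internals rather than its actual code):

```python
class Agent:
    def __init__(self):
        self.registry = {}  # plain dict: command name -> Python function

    def cmd(self, name, help=""):
        """Decorator that registers a function under a command name."""
        def register(fn):
            fn.help = help
            self.registry[name] = fn
            return fn
        return register

    def call(self, command_name, args):
        """What the loop does for each pipeline stage."""
        return self.registry[command_name](args)

agent = Agent()

# register the upper example from above
@agent.cmd(name="upper", help="Uppercase piped input")
def _upper(args):
    return " ".join(args).upper()
```

An unknown command is then just a KeyError, not an arbitrary shell invocation, which is the whole safety argument in one line.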
This makes the tool available in any pipeline:
# demo agent: run a vcli pipeline from stdin
$ echo "echo hello world | upper" | python -m vcli
vcli> echo hello world | upper
HELLO WORLD
vcli>
What's next: the virtual filesystem
The last seam is cat and tee touching real disk. The fix is a virtual filesystem: a Python dict mapping paths to contents, with cat, tee, ls, cd, and find rebuilt as pure functions over the dict. This closes the sandbox property completely: zero filesystem side effects by construction, no Docker or venvs required.
Maybe later it can become a persistent system using object storage… via curl.
The agent gets the full mental model of a Unix filesystem without ever touching the OS.
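A dict-backed sketch of the idea, with a few of the tools rebuilt as pure functions (names mirror the Unix originals; this is a sketch of the proposal, not existing vcli code):

```python
vfs = {}  # virtual filesystem: path -> file contents, nothing touches the OS

def tee(path, stdin):
    """Write piped input to a virtual file and pass it through, like Unix tee."""
    vfs[path] = stdin
    return stdin

def cat(path):
    """Read a virtual file's contents."""
    return vfs[path]

def ls(prefix=""):
    """List virtual paths under a prefix, one per line."""
    return "\n".join(p for p in sorted(vfs) if p.startswith(prefix))
```

Every operation is an ordinary dict read or write, so "zero filesystem side effects" holds by construction rather than by policy.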
Conclusion
The basis of an agentic framework is a loop, a parser, and a registry of functions. With a few available commands the agent can bootstrap itself and perform agentic tasks, with a simple implementation and without sandboxing. If this approach checks out, it would be straightforward to reimplement it asynchronously and in a faster language, including swapping in the much more stable and efficient original implementations of these tools.
I was inspired by the Lex Fridman podcast with Peter Steinberger, founder of OpenClaw, which uses pi-mono. Peter's conviction that the CLI is all agents need gave me the idea to build this and understand its implications.