Stream of Consciousness

Mark Eschbach's random writings on various topics.

Playing with local LLMs

Categories: programming

Tags: llm marvin prompt engineering

This month I’ve been playing with local LLMs by creating an agent, Marvin. Marvin has gone through several iterations at this point as I learn more lessons about what works and how to properly interface with the agent.

Today I would like to experiment with the data I have stored in my resume website.

What is the point of this project anyway?

I would like to ensure the LLM is able to understand the general purpose of the repository. So let us ask!

~/wc/mee/website-resume:> marvin query "what is the purpose of this project?"
Loading config from .marvin.hcl
user search:    what is the purpose of this project?
The purpose of this project is to assist Mark Eschbach in two key areas:  
1. **Job Search Support**: Helping him find employment opportunities by leveraging his professional experiences and skills.  
2. **Knowledge Management**: Maintaining a centralized knowledge base of his experiences, encoded in documents like the Stack Overflow interview summary (`2021-05-28-stack-overflow-interview.md`) and employment history files in `docs/employment`.  


Total tokens: 459 = (prompt tokens: 187) + (response tokens: 272)

Overall a reasonable response! This demonstrates the configuration file .marvin.hcl correctly pulls in the marvin-prompt.md file and that the model comprehends the repository's content.
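For context, a minimal .marvin.hcl might look something like the following. This is a hypothetical sketch: the model value appears later in this post, but the prompt field name is my assumption, not Marvin's documented syntax.

```hcl
# Hypothetical sketch of .marvin.hcl. The "model" value is taken from
# later in this post; the "prompt" field name is an assumption.
model  = "qwen3:8b"
prompt = "marvin-prompt.md"
```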

Next level: What were Mark’s last three jobs?

Let us make it a little harder for the LLM. I keep my prior employment data in two places:

  • docs/employment/*.md – Contains markdown files with my employment history. This is a relatively recent addition since I have been working with LLMs.
  • src/employment/*.js – Data driving the actual website.

Is the AI going to complain?

~/wc/mee/website-resume:> marvin query "what were Mark's last 3 jobs?"
Loading config from .marvin.hcl
user search:    what were Mark's last 3 jobs?

Total tokens: 692 = (prompt tokens: 188) + (response tokens: 504)

I have seen many models produce an empty response like this. I was using model = "qwen3:8b", which is supposedly a reasonably performing model. In the past I was using ministral-3:3b, which behaved similarly when it was not certain what to do.

Adding the following to the system prompt file did not result in any improvement:

If you require more information, files, knowledge, or resources, then ask follow-up questions or seek clarification.

Adding tools for local files

Since Marvin supports tooling, this is probably a good time to add the MCP filesystem reference server!

local_program "past-work-history" {
  program = "docker"
  args = [
    "run",
    "-i",
    "--rm",
    "--mount", "type=bind,src=~/wc/mee/website-resume,dst=/projects/website-resume,ro",
    "mcp/filesystem",
    "/projects"
  ]
}

This correctly invoked the tooling for the project when run as marvin query "what were Mark's last 3 jobs?" ! Unfortunately, beyond the tooling output it dumped a ton of details irrelevant to the question.

Asking again with marvin query "what were Mark's last 3 jobs?" in hopes of getting just the final section sadly results in an empty response again. I am guessing this is due to a problem with Marvin itself, which I would like to investigate in the future. Throwing on --show-thinking expectedly produces a lot of output. Most interestingly:

Thinking: Okay, the user is asking for a summary of Mark’s last three jobs. Let me recap the steps taken so far. Initially, I tried accessing ‘docs/employment’ but got an access denied error. Then, after checking allowed directories, I tried ‘/projects/docs/employment’, which didn’t exist. Next, I listed ‘/projects’ and found ‘website-resume’ directory. Then, I attempted to read the ‘2021-05-28-stack-overflow-interview.md’ file but got an ENOENT error
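The access-denied and ENOENT errors in this log line up with how the MCP filesystem reference server behaves: it only serves paths under the directories given on its command line ("/projects" here), so a relative path like "docs/employment" never qualifies. A simplified sketch of that check; the real server also normalizes symlinks and ".." before comparing.

```shell
# Simplified sketch of the allowed-directory check; the real server
# normalizes paths (symlinks, "..") before comparing prefixes.
is_allowed() {
  case "$1" in
    /projects|/projects/*) echo "allowed: $1" ;;
    *)                     echo "denied: $1" ;;
  esac
}

is_allowed /projects/website-resume/docs/employment
is_allowed docs/employment   # relative path: denied, as in the log above
```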

This points to the problem being with the configuration! I updated the block to local_program "repository" { and added the following elements to the system prompt:

  • Files are accessed using the 'repository' tool with a relative path of '/projects/website-resume'
  • src/employment contains JavaScript files with notes for constructing a resume online.
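For reference, after the rename the tool block reads as follows; everything except the block label is unchanged from the earlier configuration.

```hcl
local_program "repository" {
  program = "docker"
  args = [
    "run",
    "-i",
    "--rm",
    "--mount", "type=bind,src=~/wc/mee/website-resume,dst=/projects/website-resume,ro",
    "mcp/filesystem",
    "/projects"
  ]
}
```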

Removing the --show-thinking flag results in a much more useful response. I still need to remove additional tooling output, which I've stripped here.

Based on the provided documents, here are the **last three employers** Mark Eschbach has worked for, inferred from the most recent and relevant entries:

1. **Fraxinus Forge**  
   - **Role:** Principal Software Engineer  
   - **Dates:** August 2023 – Present  
   - **Key Contribution:** Led critical projects, mentored teams, and delivered a high-impact platform for clinical trials.

2. **Anthem**  
   - **Role:** Senior Software Engineer  
   - **Dates:** Not explicitly stated, but implied as a prior role (since Fraxinus Forge is current).  
   - **Key Contribution:** Built a platform for third-party data integration, improving development velocity and team processes.

3. **Whitespace**  
   - **Role:** Technical Leader (implied by LinkedIn-style description)  
   - **Dates:** Likely prior to Anthem (based on context).  
   - **Key Contribution:** Managed multidisciplinary teams and infrastructure, focusing on growth and technical leadership.


Total tokens: 8623 = (prompt tokens: 7148) + (response tokens: 1475)

Well, steps in the correct direction! Part of this is definitely a data problem: I need to update entries. However, more improvements need to be made to Marvin to make this work better.
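For a sense of the cost of the raw tool output, here are the token totals Marvin reported across the three runs above:

```shell
# Token totals reported by Marvin for the three queries in this post.
# The tool-augmented run is roughly 19x the first query, nearly all of
# it prompt tokens from tool output fed back to the model.
echo "purpose query:     $((187 + 272))"    # 459
echo "jobs, no tools:    $((188 + 504))"    # 692
echo "jobs, with tools:  $((7148 + 1475))"  # 8623
```

Trimming the tool output before it re-enters the prompt looks like the biggest lever for cost here.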