I'm running Llama3.2 locally and I'm curious if there's a way to include a .js file to provide context for my AI prompts. Any tips on how to do this with Docker?
2 Answers
I’m not sure about direct file inclusion either. Keep in mind the model itself can't read files off disk — the file's contents have to end up in the prompt somehow. While using Llama3.2, you could share the file with the container via a volume and have whatever builds your prompts read it from there. Also, watch your model server's release notes (e.g. Ollama) rather than Docker's — built-in file context would come from that layer, not from Docker.
You might find some workarounds, but Docker itself has no notion of feeding files into a model — its job is just to run the container. What you can do is use a volume mount (`-v` or `--mount`) to share files between your host and the container, then have your prompt-building code read the mounted path and inject the file's contents into the prompt text. Just make sure you reference the mounted path correctly. That's how you give the model extra context — see the sketch below.
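Here's a minimal sketch of that idea, assuming you're running Llama3.2 through the common `ollama/ollama` image with its API on port 11434. One thing worth noting: with this approach the script reads the file on the host and sends its contents over the HTTP API, so the volume mount only matters if something inside the container also needs the file. The paths, directory names, and prompt wording are assumptions — adjust them to your setup.

```typescript
// Host-side setup (assumed container layout for this sketch):
//   docker run -d -v $(pwd)/context:/root/context -p 11434:11434 ollama/ollama
//   docker exec -it <container> ollama pull llama3.2
//
// This script reads context.js from the host and prepends it to the prompt
// sent to Ollama's /api/generate endpoint.
import { readFileSync } from "node:fs";

const contextCode = readFileSync("./context/context.js", "utf8");

const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2",
    // Inject the file's contents directly into the prompt text.
    prompt: `Here is a JavaScript file for context:\n\n${contextCode}\n\nQuestion: What does this code do?`,
    stream: false, // return one JSON object instead of a stream of chunks
  }),
});

const data = await response.json();
console.log(data.response);
```

One caveat: Ollama's default context length is fairly small, so a very large .js file may get silently truncated — for a single, reasonably sized file this pattern should work fine.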