Session RAM requirements for running pretrained models

Hello,
I’m doing a project where I have to evaluate several models for accuracy and performance.
I would like to use Ollama from ollama.com, because it seems like an easy-to-use way to run and test different models locally.

I created the Dockerfile and it appears to be correct (it builds without errors), but when I start a session, I’m told that 16GB of RAM is needed, while Renku sessions only allow up to 8GB.
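
For context, the Dockerfile just adds Ollama on top of a Renku base image, roughly like the sketch below (the base image tag and user name are placeholders rather than my exact file):

```dockerfile
# Minimal sketch: extend the Renku base image the project already uses
# (the tag here is a placeholder; keep the FROM line from your own project's Dockerfile)
FROM renku/renkulab-py:latest

# The Ollama install script from ollama.com needs root
USER root
RUN curl -fsSL https://ollama.com/install.sh | sh

# Switch back to the non-root notebook user
# (the variable name may differ depending on the base image)
USER ${NB_USER}
```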

My question is: what would you advise me to do? Should I try some other method to run/test models? Is it possible to increase the session RAM?

Any advice is welcome, since I’m stuck and not sure how to continue; I’m no longer even sure whether this kind of work is possible with Renku.

Thanks in advance for any help

Hi @nic,
Indeed, Renku can provide higher compute resources. It would be great if you could contact us via email, and we can share the available options with you.