Dear Renku Community:
In pre-Amalthea times it was possible to start and run Renku sessions via an API call and the JUPYTERHUBTOKEN from another Renku session, and then run a post-init.sh script. Is such a process still possible with Amalthea?
@paedikoller there are two separate issues here:
Run a post-init.sh script in a session.
This happens regardless of how a session is started in Renku. As long as there is a script called
post-init.sh at the root of the project, it will be executed right before the session starts, every time a session is created.
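For illustration, a minimal post-init.sh might look like the sketch below. The tasks inside it are placeholders (a marker file and a commented-out package install), not anything Renku requires; the script can run whatever setup your project needs:

```shell
#!/bin/sh
# post-init.sh -- executed at the project root right before the session starts.
set -e

# Example task: record that initialization ran (placeholder for real setup work)
echo "session initialized at $(date -u +%Y-%m-%dT%H:%M:%SZ)" > .post-init-done

# Example task: install an extra package the project image does not ship with
# pip install --quiet some-extra-package
```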
Is there a way to skip the UI and launch a session programmatically?
Yes. Every standard deployment exposes a Swagger page at
http://<renku-domain>/swagger where users can look at the different endpoints Renku exposes. There you can see which path you need to call to start a session. For example, on renkulab here is the swagger page. The Swagger page is also set up to walk you through the necessary login steps, so you can authenticate and send requests to Renku directly from it. Simply click the little padlock icon, select
oauth2-swagger (OAuth2, authorization_code with PKCE), select the
openid scope, enter
swagger as the
client_id, and click Authorize.
If you are trying to launch Renku sessions through a Python script, for example, separately from the Swagger page, things get more complicated. This is because you have to perform the right OAuth2 authentication steps so that Renku can authenticate you.
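As a rough sketch: once you have obtained a valid access token (via the OAuth2 flow, or copied from an authenticated Swagger session), launching a session boils down to one authenticated POST request. The endpoint path and payload fields below are assumptions based on a typical deployment, not a guaranteed contract; check your own deployment's Swagger page for the exact path and body:

```python
import json
import urllib.request

def build_session_request(renku_url, namespace_project, commit_sha):
    """Build the URL and JSON payload for launching a session.

    NOTE: the endpoint path and payload fields are assumptions based on
    a typical deployment -- consult the Swagger page of your own Renku
    instance for the exact contract.
    """
    namespace, project = namespace_project.split("/", 1)
    url = renku_url.rstrip("/") + "/api/notebooks/servers"
    payload = {"namespace": namespace, "project": project, "commit_sha": commit_sha}
    return url, payload

def start_session(renku_url, access_token, namespace_project, commit_sha):
    """POST the session request and return the decoded JSON response."""
    url, payload = build_session_request(renku_url, namespace_project, commit_sha)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

The hard part, as noted above, is obtaining the access token in the first place, since that requires completing the OAuth2 authorization-code flow outside the browser.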
Is the Swagger page sufficient for what you need, or do you need more information on programmatically starting sessions? Also, may I ask why you want to start sessions programmatically?
This is what the Swagger page that I mentioned looks like:
Thanks for the explanation. I would be happy to learn more about the Python way to start sessions. If you have an example or docs, that would be great.
What our data scientists want to achieve is the following. Suppose we have project A, which contains a generic data-gathering algorithm (e.g. a web crawler). Then they have a project B which can be used, e.g., to trigger project A, or to run an analysis on current data from project A and maybe other similar projects.
I see. I think that using GitLab CI jobs for this purpose may be easier. Renku has no concept of workflows that span different projects, nor of scheduling the execution of jobs/workflows.
You can set up a CI job that runs the web crawler and saves the data periodically. Then you can launch your analysis and download the latest data from wherever the crawler saved it.
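A periodic crawl could be sketched in .gitlab-ci.yml roughly like this. The job name, scripts, and the CI_PUSH_TOKEN variable are placeholders for illustration; the schedule itself is created in the GitLab UI under CI/CD > Schedules:

```yaml
# .gitlab-ci.yml -- sketch only; crawl.py and CI_PUSH_TOKEN are hypothetical
crawl:
  image: python:3.11          # or the project's own Renku image
  script:
    - pip install -r requirements.txt
    - python crawl.py         # hypothetical crawler entry point, writes to data/
    - git add data/
    - git commit -m "Update crawled data" || echo "no changes"
    - git push "https://oauth2:${CI_PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" HEAD:main
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'   # only run on the schedule
```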
The other option is simply combining the two projects into one. Then you can have a single Renku workflow that contains both the web crawling and the analysis and knows that they are dependent. This way you can run the crawling right before the analysis and get the latest data. Again, you will have to trigger all of this by hand after you start a session. Or you can try to automate the process through a CI job that runs the workflow without launching a session. Checking the GitLab CI docs, it seems that you can set a specific Docker image to be used for a CI job, so you can use the image built by the project. When you use the project image for the CI job, you should get access to all the packages as well as the renku CLI.
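A sketch of what such a CI job could look like, assuming the crawl and analysis steps were recorded once with renku run (the registry path and script names are placeholders; adjust them to your deployment and project):

```yaml
# Sketch: re-execute the recorded Renku workflow in CI using the project image.
run-workflow:
  image: registry.<renku-domain>/<namespace>/<project>:latest
  script:
    # The steps must have been recorded once beforehand, e.g.:
    #   renku run -- python crawl.py data/raw.csv
    #   renku run -- python analyze.py data/raw.csv results/summary.csv
    # Then re-running everything with fresh inputs is a single command:
    - renku update --all
    - git push origin HEAD:main
```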
Does that help?